Schema of the dump (one row per issue or pull request in huggingface/datasets):

| column | type | values |
|---|---|---|
| url | string | lengths 60–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 74–75 |
| comments_url | string | lengths 69–70 |
| events_url | string | lengths 67–68 |
| html_url | string | lengths 48–51 |
| id | int64 | 636M–1.45B |
| node_id | string | lengths 18–32 |
| number | int64 | 258–5.24k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 69–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| is_pull_request | bool | 2 classes |

Across the rows shown below, repository_url is always https://api.github.com/repos/huggingface/datasets, and labels_url, comments_url, events_url and timeline_url are the row's API url followed by /labels{/name}, /comments, /events and /timeline. The user dict holds the standard GitHub profile URLs derived from the login, and the pull_request dict holds the pulls API, html, diff and patch URLs derived from the PR number plus merged_at. In every row shown, labels and assignees are empty, assignee and milestone are null, locked is false, all reaction counts are zero, active_lock_reason, performed_via_github_app and state_reason are null, and draft is false for pull requests (null for the one plain issue, #961).
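As a quick orientation, the sketch below shows how this schema surfaces through the `datasets` library; the dataset identifier used here is a placeholder assumption, since the dump's actual repository name is not given in this section.

```python
from datasets import load_dataset

# "some-user/github-issues" is a placeholder id, not the real name of this dump.
issues = load_dataset("some-user/github-issues", split="train")

print(issues.features["state"])       # string column with 2 classes ("open"/"closed")
print(issues.features["created_at"])  # timestamp[s]

# The is_pull_request bool separates pull requests from plain issues.
prs = issues.filter(lambda row: row["is_pull_request"])
print(len(prs), "of", len(issues), "rows are pull requests")
```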
#966 Add CLINC150 Dataset (pull request, closed, not merged)
- author: sumanthd17 (user id 28291870), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/966 · API https://api.github.com/repos/huggingface/datasets/issues/966 · id 754558686 · node_id MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4
- created 2020-12-01T16:50:13 · updated 2020-12-02T18:45:43 · closed 2020-12-02T18:45:30
- body: Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
  - [x] Followed the instructions in CONTRIBUTING.md
  - [x] Ran the tests successfully
  - [x] Created the dummy data
- comments:
  - Looks like your PR now shows changes in many other files than the ones for CLINC150. Feel free to create another branch and another PR
  - created new [PR](https://github.com/huggingface/datasets/pull/1016), closing this!

#965 Add CLINC150 Dataset (pull request, closed, not merged)
- author: sumanthd17 (user id 28291870), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/965 · API https://api.github.com/repos/huggingface/datasets/issues/965 · id 754553169 · node_id MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2
- created 2020-12-01T16:43:00 · updated 2020-12-01T16:51:16 · closed 2020-12-01T16:49:15
- body: Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
  - [x] Followed the instructions in CONTRIBUTING.md
  - [x] Ran the tests successfully
  - [x] Created the dummy data
- comments: none

#964 Adding the WebNLG dataset (pull request, closed, merged 2020-12-02T17:34:05)
- author: yjernite (user id 10469459), MEMBER
- https://github.com/huggingface/datasets/pull/964 · API https://api.github.com/repos/huggingface/datasets/issues/964 · id 754474660 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy
- created 2020-12-01T15:05:23 · updated 2020-12-02T17:34:05 · closed 2020-12-02T17:34:05
- body: This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).
- comments:
  - This is task is part of the GEM suite so will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :)

#963 add CODAH dataset (pull request, closed, merged 2020-12-02T13:21:25)
- author: patil-suraj (user id 27137566), MEMBER
- https://github.com/huggingface/datasets/pull/963 · API https://api.github.com/repos/huggingface/datasets/issues/963 · id 754451234 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4
- created 2020-12-01T14:37:05 · updated 2020-12-02T13:45:58 · closed 2020-12-02T13:21:25
- body: Adding CODAH dataset. More info: https://github.com/Websail-NU/CODAH
- comments: none

#962 Add Danish Political Comments Dataset (pull request, closed, merged 2020-12-03T10:31:54)
- author: abhishekkrthakur (user id 1183441), MEMBER
- https://github.com/huggingface/datasets/pull/962 · API https://api.github.com/repos/huggingface/datasets/issues/962 · id 754441428 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzQxMDA2
- created 2020-12-01T14:28:32 · updated 2020-12-03T10:31:55 · closed 2020-12-03T10:31:54
- body: none
- comments: none

#961 sample multiple datasets (issue, open)
- author: rabeehk (user id 6278280), CONTRIBUTOR
- https://github.com/huggingface/datasets/issues/961 · API https://api.github.com/repos/huggingface/datasets/issues/961 · id 754434398 · node_id MDU6SXNzdWU3NTQ0MzQzOTg=
- created 2020-12-01T14:20:02 · updated 2020-12-02T01:32:44 · not closed
- body: Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
  - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
  sub-questions:
  - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
  - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
- comments:
  - here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195
    I need to train my model distributedly with this dataloader, "MultiTasksataloader", currently this does not work in distributed fasion, to save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks
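The question above asks how to mix several datasets with weights while keeping every batch single-source; below is a minimal sketch of one way to do that with a plain PyTorch batch sampler. The class name, weights and sizes are illustrative assumptions, not the `datasets` library's API or the gist linked in the comment.

```python
import random
from torch.utils.data import ConcatDataset, DataLoader

class SingleSourceBatchSampler:
    """Yields batches of indices where each batch is drawn from one dataset only,
    and datasets are picked according to the given weights (e.g. 2x dataset1, 1x dataset2)."""

    def __init__(self, dataset_sizes, weights, batch_size, num_batches):
        self.offsets = []              # start index of each dataset inside the ConcatDataset
        start = 0
        for size in dataset_sizes:
            self.offsets.append(start)
            start += size
        self.sizes = dataset_sizes
        self.weights = weights
        self.batch_size = batch_size
        self.num_batches = num_batches

    def __iter__(self):
        for _ in range(self.num_batches):
            # pick which dataset this batch comes from, then sample indices within it
            d = random.choices(range(len(self.sizes)), weights=self.weights)[0]
            yield [self.offsets[d] + random.randrange(self.sizes[d])
                   for _ in range(self.batch_size)]

    def __len__(self):
        return self.num_batches

# Usage sketch: concatenate the map-style datasets and hand the sampler to a DataLoader.
# datasets_list = [dataset1, dataset2]
# loader = DataLoader(
#     ConcatDataset(datasets_list),
#     batch_sampler=SingleSourceBatchSampler(
#         [len(d) for d in datasets_list], weights=[2, 1], batch_size=8, num_batches=1000))
```

This sketch samples with replacement inside each dataset; an epoch-based variant would instead shuffle per-dataset index lists and slice them into batches.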

#960 Add code to automate parts of the dataset card (pull request, closed, not merged)
- author: patrickvonplaten (user id 23423619), MEMBER
- https://github.com/huggingface/datasets/pull/960 · API https://api.github.com/repos/huggingface/datasets/issues/960 · id 754422710 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx
- created 2020-12-01T14:04:51 · updated 2021-04-26T07:56:01 · closed 2021-04-26T07:56:01
- body: Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so.
- comments: none

#959 Add Tunizi Dataset (pull request, closed, merged 2020-12-03T14:21:40)
- author: abhishekkrthakur (user id 1183441), MEMBER
- https://github.com/huggingface/datasets/pull/959 · API https://api.github.com/repos/huggingface/datasets/issues/959 · id 754418610 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1
- created 2020-12-01T13:59:39 · updated 2020-12-03T14:21:41 · closed 2020-12-03T14:21:40
- body: none
- comments: none

#958 dataset(ncslgr): add initial loading script (pull request, closed, merged 2020-12-07T16:35:39)
- author: AmitMY (user id 5757359), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/958 · API https://api.github.com/repos/huggingface/datasets/issues/958 · id 754404095 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMzA5ODkz
- created 2020-12-01T13:41:17 · updated 2020-12-07T16:35:39 · closed 2020-12-07T16:35:39
- body: clean #789
- comments:
  - @lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file) The tests seem a bit unstable
  - the `RemoteDatasetTest` errors in the CI are fixed on master so it's fine
  - merging since the CI is fixed on master

#957 Isixhosa ner corpus (pull request, closed, merged 2020-12-01T18:14:58)
- author: yvonnegitau (user id 7923902), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/957 · API https://api.github.com/repos/huggingface/datasets/issues/957 · id 754380073 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjg5OTk4
- created 2020-12-01T13:08:36 · updated 2020-12-01T18:14:58 · closed 2020-12-01T18:14:58
- body: none
- comments: none

#956 Add Norwegian NER (pull request, closed, merged 2020-12-01T18:09:21)
- author: jplu (user id 959590), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/956 · API https://api.github.com/repos/huggingface/datasets/issues/956 · id 754368378 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjgwMzU1
- created 2020-12-01T12:51:02 · updated 2020-12-02T08:53:11 · closed 2020-12-01T18:09:21
- body: This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset. I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.
- comments:
  - Merging this one, good job and thank you @jplu :)

#955 Added PragmEval benchmark (pull request, closed, merged 2020-12-03T09:36:47)
- author: sileod (user id 9168444), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/955 · API https://api.github.com/repos/huggingface/datasets/issues/955 · id 754367291 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjc5NDQw
- created 2020-12-01T12:49:15 · updated 2020-12-04T10:43:32 · closed 2020-12-03T09:36:47
- body: none
- comments:
  - > Really cool ! Thanks for adding this one :)
    > Good job at adding all those citations for each task
    > Looks like the dummy data test doesn't pass. Maybe some files are missing in the dummy_data.zip files ?
    > The error reports `pragmeval/verifiability/train.tsv` to be missing
    > Also could you add the tags part of the dataset card (the rest is optional) ?
    > See more info here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card

    In the prior commits I generated dataset_infos and the dummy files myself. Now they are generated with the cli, and the tests now seem to be passing better. I will look into the tag
  - Looks like you did a good job with dummy data in the first place ! The downside of automatically generated dummy data is that the files are heavier (here 40KB per file). If you could replace the generated dummy files with the one you created yourself it would be awesome, since the one you did yourself are way lighter (around 1KB per file). Using small files make `git clone` run faster so we encourage to use small dummy_data files.
  - could you rebase from master ? it should fix the CI
  - > could you rebase from master ? it should fix the CI

    I think it is due to the file structure of the dummy data that causes test failure. The automatically generated dummy data pass the tests
  - Indeed the error reports that `pragmeval/verifiability/train.tsv` is missing for the verifiability dummy_data.zip file. To fix that you should add the missing data files in each dummy_data.zip file. To test that your dummy data work you can run
    ```
    RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_
    ```
    if some file is missing it should tell you which one
  - Also it looks like you haven't rebased from master yet, even though you did a `rebase` commit. rebasing should fix the other CI fails
  - It's ok if we have `RemoteDatasetTest` errors, they're fixed on master
  - merging since the CI is fixed on master
  - Hey @sileod! Super nice to see you participating ;) Did you officially joined the sprint by posting on [the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack? I can't seem to find you there! Should I add you directly with your gmail address?
  - Hi @sileod 👋

#954 add prachathai67k (pull request, closed, not merged)
- author: cstorm125 (user id 15519308), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/954 · API https://api.github.com/repos/huggingface/datasets/issues/954 · id 754362012 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4
- created 2020-12-01T12:40:55 · updated 2020-12-02T05:12:11 · closed 2020-12-02T04:43:52
- body: `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
- comments:
  - Test failing for same issues as https://github.com/huggingface/datasets/pull/939 Please advise.
    ```
    =========================== short test summary info ============================
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner
    FAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue
    ===== 7 failed, 1309 passed, 932 skipped, 11 warnings in 166.71s (0:02:46) =====
    ```
  - Closing and opening a new pull request to solve rebase issues
  - To be continued on https://github.com/huggingface/datasets/pull/982

#953 added health_fact dataset (pull request, closed, merged 2020-12-01T23:11:33)
- author: bhavitvyamalik (user id 19718818), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/953 · API https://api.github.com/repos/huggingface/datasets/issues/953 · id 754359942 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjczMzg5
- created 2020-12-01T12:37:44 · updated 2020-12-01T23:11:33 · closed 2020-12-01T23:11:33
- body: Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)
- comments:
  - Hi @lhoestq, Initially I tried int(-1) only in place of nan labels and missing values but I kept on getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object``` maybe because I'm sending int values (-1) to objects which are string type
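The comment above quotes pyarrow rejecting an integer placeholder inside a string column; the snippet below is a small illustrative reproduction of that failure mode and an obvious workaround (stringifying the placeholder), assumed here for clarity rather than taken from the actual health_fact loader.

```python
import pyarrow as pa

rows = ["true", "false", None, -1]  # -1 used as a placeholder for a missing label

try:
    pa.array(rows, type=pa.string())  # mixing an int into a string column fails
except pa.lib.ArrowTypeError as err:
    print("failed:", err)             # e.g. "Expected bytes, got a 'int' object"

# Casting the placeholder to a string (or keeping None for nulls) satisfies the column type.
fixed = pa.array([str(v) if isinstance(v, int) else v for v in rows], type=pa.string())
print(fixed)
```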

#952 Add orange sum (pull request, closed, merged 2020-12-01T15:44:00)
- author: moussaKam (user id 28675016), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/952 · API https://api.github.com/repos/huggingface/datasets/issues/952 · id 754357270 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjcxMTQz
- created 2020-12-01T12:33:34 · updated 2020-12-01T15:44:00 · closed 2020-12-01T15:44:00
- body: Add OrangeSum, a French abstractive summarization dataset. Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
- comments: none

#951 Prachathai67k (pull request, closed, not merged)
- author: cstorm125 (user id 15519308), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/951 · API https://api.github.com/repos/huggingface/datasets/issues/951 · id 754349979 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0
- created 2020-12-01T12:21:52 · updated 2020-12-01T12:29:53 · closed 2020-12-01T12:28:26
- body: Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**:
  * `การเมือง` - politics
  * `สิทธิมนุษยชน` - human_rights
  * `คุณภาพชีวิต` - quality_of_life
  * `ต่างประเทศ` - international
  * `สังคม` - social
  * `สิ่งแวดล้อม` - environment
  * `เศรษฐกิจ` - economics
  * `วัฒนธรรม` - culture
  * `แรงงาน` - labor
  * `ความมั่นคง` - national_security
  * `ไอซีที` - ict
  * `การศึกษา` - education
- comments:
  - Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k

#950 Support .xz file format (pull request, closed, merged 2020-12-01T13:39:18)
- author: albertvillanova (user id 8515462), MEMBER
- https://github.com/huggingface/datasets/pull/950 · API https://api.github.com/repos/huggingface/datasets/issues/950 · id 754318686 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjM4OTQx
- created 2020-12-01T11:34:48 · updated 2020-12-01T13:39:18 · closed 2020-12-01T13:39:18
- body: Add support to extract/uncompress files in .xz format.
- comments: none
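PR #950's body says it adds .xz extraction support; the snippet below is a minimal sketch of what such an extraction step can look like using Python's standard lzma module (the helper name and paths are illustrative, not the library's actual implementation).

```python
import lzma
import shutil

def extract_xz(src_path: str, dst_path: str) -> str:
    """Decompress an .xz file to dst_path using only the standard library."""
    with lzma.open(src_path, "rb") as compressed, open(dst_path, "wb") as out:
        shutil.copyfileobj(compressed, out)  # stream so large files stay out of memory
    return dst_path

# extract_xz("corpus.txt.xz", "corpus.txt")  # illustrative paths
```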

#949 Add GermaNER Dataset (pull request, closed, merged 2020-12-03T14:06:40)
- author: abhishekkrthakur (user id 1183441), MEMBER
- https://github.com/huggingface/datasets/pull/949 · API https://api.github.com/repos/huggingface/datasets/issues/949 · id 754317777 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky
- created 2020-12-01T11:33:31 · updated 2020-12-03T14:06:41 · closed 2020-12-03T14:06:40
- body: none
- comments:
  - @lhoestq added.

#948 docs(ADD_NEW_DATASET): correct indentation for script (pull request, closed, merged 2020-12-01T11:25:18)
- author: AmitMY (user id 5757359), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/948 · API https://api.github.com/repos/huggingface/datasets/issues/948 · id 754306260 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz
- created 2020-12-01T11:17:38 · updated 2020-12-01T11:25:18 · closed 2020-12-01T11:25:18
- body: none
- comments: none

#947 Add europeana newspapers (pull request, closed, merged 2020-12-02T09:42:09)
- author: jplu (user id 959590), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/947 · API https://api.github.com/repos/huggingface/datasets/issues/947 · id 754286658 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3
- created 2020-12-01T10:52:18 · updated 2020-12-02T09:42:35 · closed 2020-12-02T09:42:09
- body: This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
- comments: none

#946 add PEC dataset (pull request, closed, not merged)
- author: zhongpeixiang (user id 11826803), CONTRIBUTOR
- https://github.com/huggingface/datasets/pull/946 · API https://api.github.com/repos/huggingface/datasets/issues/946 · id 754278632 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw
- created 2020-12-01T10:41:41 · updated 2020-12-03T02:47:14 · closed 2020-12-03T02:47:14
- body: A persona-based empathetic conversation dataset published at EMNLP 2020.
- comments:
  - The checks failed again even if I didn't make any changes.
  - you just need to rebase from master to fix the CI :)
  - Sorry for the mess, I'm confused by the rebase and thus created a new branch.

#945 Adding Babi dataset - English version (pull request, closed, not merged)
- author: thomwolf (user id 7353373), MEMBER
- https://github.com/huggingface/datasets/pull/945 · API https://api.github.com/repos/huggingface/datasets/issues/945 · id 754273920 · node_id MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1
- created 2020-12-01T10:35:36 · updated 2020-12-04T15:43:05 · closed 2020-12-04T15:42:54
- body: Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment.
- comments:
  - Replaced by #1126
https://api.github.com/repos/huggingface/datasets/issues/944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/944/comments
https://api.github.com/repos/huggingface/datasets/issues/944/events
https://github.com/huggingface/datasets/pull/944
754,228,947
MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5
944
Add German Legal Entity Recognition Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "thanks ! merging this one" ]
2020-12-01T09:38:22
2020-12-03T13:06:56
2020-12-03T13:06:55
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/944", "html_url": "https://github.com/huggingface/datasets/pull/944", "diff_url": "https://github.com/huggingface/datasets/pull/944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/944.patch", "merged_at": "2020-12-03T13:06:54" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/944/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/943/comments
https://api.github.com/repos/huggingface/datasets/issues/943/events
https://github.com/huggingface/datasets/pull/943
754,192,491
MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3
943
The FLUE Benchmark
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-12-01T09:00:50
2020-12-01T15:24:38
2020-12-01T15:24:30
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/943", "html_url": "https://github.com/huggingface/datasets/pull/943", "diff_url": "https://github.com/huggingface/datasets/pull/943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/943.patch", "merged_at": "2020-12-01T15:24:30" }
This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark, which is a set of different datasets to evaluate models for French content. Two datasets are missing: the French Treebank, which we can use only for research purposes and are not allowed to distribute, and the Word Sense Disambiguation for Nouns, which will be added later.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/943/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/942/comments
https://api.github.com/repos/huggingface/datasets/issues/942/events
https://github.com/huggingface/datasets/issues/942
754,162,318
MDU6SXNzdWU3NTQxNjIzMTg=
942
D
{ "login": "CryptoMiKKi", "id": 74238514, "node_id": "MDQ6VXNlcjc0MjM4NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CryptoMiKKi", "html_url": "https://github.com/CryptoMiKKi", "followers_url": "https://api.github.com/users/CryptoMiKKi/followers", "following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}", "gists_url": "https://api.github.com/users/CryptoMiKKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/CryptoMiKKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CryptoMiKKi/subscriptions", "organizations_url": "https://api.github.com/users/CryptoMiKKi/orgs", "repos_url": "https://api.github.com/users/CryptoMiKKi/repos", "events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}", "received_events_url": "https://api.github.com/users/CryptoMiKKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-12-01T08:17:10
2020-12-03T16:42:53
2020-12-03T16:42:53
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/942/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/941/comments
https://api.github.com/repos/huggingface/datasets/issues/941/events
https://github.com/huggingface/datasets/pull/941
754,141,321
MDExOlB1bGxSZXF1ZXN0NTMwMDk0MTI2
941
Add People's Daily NER dataset
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel like it's going to take too much time writing the rest to fill it all you can just skip the paragraphs\n\nNope. I don't think there is a citation. Also, can I do the dataset card later (maybe in bulk)?", "We're doing one PR = one dataset to keep track of things. Feel free to add the tags later in this PR if you want to.\r\nAlso only the tags are required now, because we don't want people spending too much time on the cards", "added @lhoestq ", "Merging since the CI is fixed on master" ]
2020-12-01T07:48:53
2020-12-02T18:42:43
2020-12-02T18:42:41
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/941", "html_url": "https://github.com/huggingface/datasets/pull/941", "diff_url": "https://github.com/huggingface/datasets/pull/941.diff", "patch_url": "https://github.com/huggingface/datasets/pull/941.patch", "merged_at": "2020-12-02T18:42:41" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/941/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/940/comments
https://api.github.com/repos/huggingface/datasets/issues/940/events
https://github.com/huggingface/datasets/pull/940
754,010,753
MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2
940
Add MSRA NER dataset
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "LGTM, don't forget the tags ;)" ]
2020-12-01T05:02:11
2020-12-04T09:29:40
2020-12-01T07:25:53
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/940", "html_url": "https://github.com/huggingface/datasets/pull/940", "diff_url": "https://github.com/huggingface/datasets/pull/940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/940.patch", "merged_at": "2020-12-01T07:25:53" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/940/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/939/comments
https://api.github.com/repos/huggingface/datasets/issues/939/events
https://github.com/huggingface/datasets/pull/939
753,965,405
MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz
939
add wisesight_sentiment
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n```", "@cstorm125 I really like the dataset and dataset card but there seems to have been a rebase issue at some point since it's now changing 140 files :D \r\n\r\nCould you rebase from master?", "I think it might be faster to close and reopen.", "To be continued on: https://github.com/huggingface/datasets/pull/981" ]
2020-12-01T03:06:39
2020-12-02T04:52:38
2020-12-02T04:35:51
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/939", "html_url": "https://github.com/huggingface/datasets/pull/939", "diff_url": "https://github.com/huggingface/datasets/pull/939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/939.patch", "merged_at": null }
Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question) Model Card: --- YAML tags: annotations_creators: - expert-generated language_creators: - found languages: - th licenses: - cc0-1.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for wisesight_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment - **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment - **Paper:** - **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/ - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question) - Released to public domain under Creative Commons Zero v1.0 Universal license. - Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3} - Size: 26,737 messages - Language: Central Thai - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. 
- More characteristics of the data can be explore [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb) ### Supported Tasks and Leaderboards Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/) ### Languages Thai ## Dataset Structure ### Data Instances ``` {'category': 'pos', 'texts': 'น่าสนนน'} {'category': 'neu', 'texts': 'ครับ #phithanbkk'} {'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'} {'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'} ``` ### Data Fields - `texts`: texts - `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3) ### Data Splits | | train | valid | test | |-----------|-------|-------|-------| | # samples | 21628 | 2404 | 2671 | | # neu | 11795 | 1291 | 1453 | | # neg | 5491 | 637 | 683 | | # pos | 3866 | 434 | 478 | | # q | 476 | 42 | 57 | | avg words | 27.21 | 27.18 | 27.12 | | avg chars | 89.82 | 89.50 | 90.36 | ## Dataset Creation ### Curation Rationale Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai. ### Source Data #### Initial Data Collection and Normalization - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. - (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. #### Who are the source language producers? Social media users in Thailand ### Annotations #### Annotation process - Sentiment values are assigned by human annotators. - A human annotator put his/her best effort to assign just one label, out of four, to a message. - Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative. - Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could has a positive sentiment value, if it shows the interest in the product. - Saying that other product or service is better is counted as negative. - General information or news title tend to be counted as neutral. 
#### Who are the annotators? Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) ### Personal and Sensitive Information - We trying to exclude any known personally identifiable information from this data set. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. ## Considerations for Using the Data ### Social Impact of Dataset - `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai - There are risks of personal information that escape the anonymization process ### Discussion of Biases - A message can be ambiguous. When possible, the judgement will be based solely on the text itself. - In some situation, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess. - In some cases, the human annotator may have an access to the message's context, like an image. These additional information are not included as part of this corpus. ### Other Known Limitations - The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question). - Misspellings in social media texts make word tokenization process for Thai difficult, thus impacting the model performance ## Additional Information ### Dataset Curators Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/ ### Licensing Information - If applicable, copyright of each message content belongs to the original poster. - **Annotation data (labels) are released to public domain.** - [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree upon the labels made by the human annotators. This annotation is for research purpose and does not reflect the professional work that Wisesight has been done for its customers. - The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message. ### Citation Information Please cite the following if you make use of the dataset: Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September. BibTeX: ``` @software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} } ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/939/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/938/comments
https://api.github.com/repos/huggingface/datasets/issues/938/events
https://github.com/huggingface/datasets/pull/938
753,940,979
MDExOlB1bGxSZXF1ZXN0NTI5OTIxNzU5
938
V-1.0.0 of isizulu_ner_corpus
{ "login": "yvonnegitau", "id": 7923902, "node_id": "MDQ6VXNlcjc5MjM5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yvonnegitau", "html_url": "https://github.com/yvonnegitau", "followers_url": "https://api.github.com/users/yvonnegitau/followers", "following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}", "gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}", "starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions", "organizations_url": "https://api.github.com/users/yvonnegitau/orgs", "repos_url": "https://api.github.com/users/yvonnegitau/repos", "events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}", "received_events_url": "https://api.github.com/users/yvonnegitau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "closing since it's been added in #957 " ]
2020-12-01T02:04:32
2020-12-01T23:34:36
2020-12-01T23:34:36
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/938", "html_url": "https://github.com/huggingface/datasets/pull/938", "diff_url": "https://github.com/huggingface/datasets/pull/938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/938.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/938/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/938/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/937/comments
https://api.github.com/repos/huggingface/datasets/issues/937/events
https://github.com/huggingface/datasets/issues/937
753,921,078
MDU6SXNzdWU3NTM5MjEwNzg=
937
Local machine/cluster Beam Datasets example/tutorial
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?" ]
2020-12-01T01:11:43
2020-12-23T13:54:56
null
NONE
null
null
null
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner correctly producing the desired output. Thanks! Shang
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/937/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/936/comments
https://api.github.com/repos/huggingface/datasets/issues/936/events
https://github.com/huggingface/datasets/pull/936
753,915,603
MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw
936
Added HANS parses and categories
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-12-01T00:58:16
2020-12-01T13:19:41
2020-12-01T13:19:40
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/936", "html_url": "https://github.com/huggingface/datasets/pull/936", "diff_url": "https://github.com/huggingface/datasets/pull/936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/936.patch", "merged_at": "2020-12-01T13:19:40" }
This pull request adds HANS missing information: the sentence parses, as well as the heuristic category.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/936/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/936/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/935/comments
https://api.github.com/repos/huggingface/datasets/issues/935/events
https://github.com/huggingface/datasets/pull/935
753,863,055
MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4
935
add PIB dataset
{ "login": "thevasudevgupta", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thevasudevgupta", "html_url": "https://github.com/thevasudevgupta", "followers_url": "https://api.github.com/users/thevasudevgupta/followers", "following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}", "gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions", "organizations_url": "https://api.github.com/users/thevasudevgupta/orgs", "repos_url": "https://api.github.com/users/thevasudevgupta/repos", "events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}", "received_events_url": "https://api.github.com/users/thevasudevgupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks", "Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/pib/pib.py:20:1: F401 'json' imported but unused\r\ndatasets/pib/pib.py:36:84: W291 trailing whitespace\r\n```\r\nand \r\n```\r\nFAILED tests/test_file_encoding.py::TestFileEncoding::test_no_encoding_on_file_open\r\n```\r\n\r\nTo fix the `test_no_encoding_on_file_open` you just have to specify an encoding while opening a text file. For example `encoding=\"utf-8\"`\r\n", "All suggested changes are done.", "Nice ! can you re-generate the dataset_infos.json file to take into account the feature type change ?\r\n```\r\ndatasets-cli test ./datasets/pib --save_infos --all_configs --ignore_verifications\r\n```\r\nAnd also format your code ?\r\n```\r\nmake style\r\n```" ]
2020-11-30T22:55:43
2020-12-01T23:17:11
2020-12-01T23:17:11
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/935", "html_url": "https://github.com/huggingface/datasets/pull/935", "diff_url": "https://github.com/huggingface/datasets/pull/935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/935.patch", "merged_at": "2020-12-01T23:17:11" }
This pull request will add the PIB dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/935/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/934/comments
https://api.github.com/repos/huggingface/datasets/issues/934/events
https://github.com/huggingface/datasets/pull/934
753,860,095
MDExOlB1bGxSZXF1ZXN0NTI5ODU2ODY4
934
small updates to the "add new dataset" guide
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @yjernite @lhoestq @thomwolf " ]
2020-11-30T22:49:10
2020-12-01T04:56:22
2020-11-30T23:14:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/934", "html_url": "https://github.com/huggingface/datasets/pull/934", "diff_url": "https://github.com/huggingface/datasets/pull/934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/934.patch", "merged_at": "2020-11-30T23:14:00" }
small updates (corrections/typos) to the "add new dataset" guide
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/934/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/933/comments
https://api.github.com/repos/huggingface/datasets/issues/933/events
https://github.com/huggingface/datasets/pull/933
753,854,272
MDExOlB1bGxSZXF1ZXN0NTI5ODUyMTI1
933
Add NumerSense
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T22:36:33
2020-12-01T20:25:50
2020-12-01T19:51:56
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/933", "html_url": "https://github.com/huggingface/datasets/pull/933", "diff_url": "https://github.com/huggingface/datasets/pull/933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/933.patch", "merged_at": "2020-12-01T19:51:56" }
Adds the NumerSense dataset - Webpage/leaderboard: https://inklab.usc.edu/NumerSense/ - Paper: https://arxiv.org/abs/2005.00683 - Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/933/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/932/comments
https://api.github.com/repos/huggingface/datasets/issues/932/events
https://github.com/huggingface/datasets/pull/932
753,840,300
MDExOlB1bGxSZXF1ZXN0NTI5ODQwNjQ3
932
adding metooma dataset
{ "login": "akash418", "id": 23264033, "node_id": "MDQ6VXNlcjIzMjY0MDMz", "avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akash418", "html_url": "https://github.com/akash418", "followers_url": "https://api.github.com/users/akash418/followers", "following_url": "https://api.github.com/users/akash418/following{/other_user}", "gists_url": "https://api.github.com/users/akash418/gists{/gist_id}", "starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akash418/subscriptions", "organizations_url": "https://api.github.com/users/akash418/orgs", "repos_url": "https://api.github.com/users/akash418/repos", "events_url": "https://api.github.com/users/akash418/events{/privacy}", "received_events_url": "https://api.github.com/users/akash418/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. \r\n\r\nPaper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\nDataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\nYAML tags:\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- en\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\n- text-retrieval\r\ntask_ids:\r\n- multi-class-classification\r\n- multi-label-classification\r\n\r\n# Dataset Card for #MeTooMA dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-instances)\r\n - [Data Splits](#data-instances)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n- **Point of Contact:** https://github.com/midas-research/MeTooMA\r\n\r\n\r\n### Dataset Summary\r\n\r\n- The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.\r\n- This dataset includes more data points and has more labels than any of the previous datasets in that contain social media\r\nposts about sexual abuse discloures. 
Please refer to the Related Datasets of the publication for a detailed information about this.\r\n- Due to Twitters development policies, the authors provide only the tweet IDs and corresponding labels,\r\nother data can be fetched via Twitter API.\r\n- The data has been labelled by experts, with the majority taken into the account for deciding the final label.\r\n- The authors provide these labels for each of the tweets.\r\n - Relevance\r\n - Directed Hate\r\n - Generalized Hate\r\n - Sarcasm\r\n - Allegation\r\n - Justification\r\n - Refutation\r\n - Support\r\n - Oppose\r\n- The definitions for each task/label is in the main publication.\r\n- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data\r\nextracted from this dataset.\r\n- The language of all the tweets in this dataset is English\r\n- Time period: October 2018 - December 2018\r\n- Suggested Use Cases of this dataset:\r\n - Evaluating usage of linguistic acts such as: hate-spech and sarcasm in the incontext of public sexual abuse discloures.\r\n - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.\r\n - Identifying how influential people were potrayed on public platform in the\r\n events of mass social movements.\r\n - Polarization analysis based on graph simulations of social nodes of users involved\r\n in the #MeToo movement.\r\n\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nMulti Label and Multi-Class Classification\r\n\r\n### Languages\r\n\r\nEnglish\r\n\r\n## Dataset Structure\r\n- The dataset is structured into CSV format with TweetID and accompanying labels.\r\n- Train and Test sets are split into respective files.\r\n\r\n### Data Instances\r\n\r\nTweet ID and the appropriatelabels\r\n\r\n### Data Fields\r\n\r\nTweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID\r\n\r\n### Data Splits\r\n\r\n- Train: 7979\r\n- Test: 1996\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\n- Twitter was the major source of all the public discloures of sexual abuse incidents during the #MeToo movement.\r\n- People expressed their opinions over issues which were previously missing from the social media space.\r\n- This provides an option to study the linguistic behaviours of social media users in an informal setting,\r\ntherefore the authors decide to curate this annotated dataset.\r\n- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.\r\n- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. 
For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.\r\n\r\n\r\n### Source Data\r\n- Source of all the data points in this dataset is Twitter.\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- All the tweets are mined from Twitter with initial search paramters identified using keywords from the #MeToo movement.\r\n- Redundant keywords were removed based on manual inspection.\r\n- Public streaming APIs of Twitter were used for querying with the selected keywords.\r\n- Based on text de-duplication and cosine similarity score, the set of tweets were pruned.\r\n- Non english tweets were removed.\r\n- The final set was labelled by experts with the majority label taken into the account for deciding the final label.\r\n- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.\r\n- The annotators are domain experts having degress in advanced clinical psychology and gender studies.\r\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\r\n- They studied the document, worked a few examples to get used to this annotation task.\r\n- They also provided feedback for improving the class definitions.\r\n- The annotation process is not mutually exclusive, implying that presence of one label does not mean the\r\nabsence of the other one.\r\n\r\n\r\n#### Who are the annotators?\r\n\r\n- The annotators are domain experts having a degree in clinical psychology and gender studies.\r\n- Please refer to the accompnaying paper for a detailed annotation process.\r\n\r\n### Personal and Sensitive Information\r\n\r\n- Considering Twitters policy for distribution of data, only Tweet ID and applicable labels are shared for the public use.\r\n- It is highly encouraged to use this dataset for scientific purposes only.\r\n- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.\r\n- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these\r\nshould be used to assist already existing human intervention tools and therapies.\r\n- Enough care has been taken to ensure that this work comes of as trying to target a specific person for their\r\npersonal stance of issues pertaining to the #MeToo movement.\r\n- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.\r\n- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset\r\nand social impact of this work.\r\n\r\n\r\n### Discussion of Biases\r\n\r\n- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of\r\ncommunity affected by sexual abuse.\r\n- Any work undertaken on this dataset should aim to minimize the bias against minority groups which\r\nmight amplified in cases of sudden outburst of public reactions over 
sensitive social media discussions.\r\n\r\n### Other Known Limitations\r\n\r\n- Considering privacy concerns, social media practitioners should be aware of making automated interventions\r\nto aid the victims of sexual abuse as some people might not prefer to disclose their notions.\r\n- Concerned social media users might also repeal their social information, if they found out that their\r\ninformation is being used for computational purposes, hence it is important seek subtle individual consent\r\nbefore trying to profile authors involved in online discussions to uphold personal privacy.\r\n\r\n## Additional Information\r\n\r\nPlease refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\n### Dataset Curators\r\n\r\n- If you use the corpus in a product or application, then please credit the authors\r\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\r\n(http://midas.iiitd.edu.in) appropriately.\r\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\r\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\r\nHowever, the contact listed above will be happy to respond to queries and clarifications\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your social media data.\r\n - if interested in a collaborative research project.\r\n\r\n### Licensing Information\r\n\r\n[More Information Needed]\r\n\r\n### Citation Information\r\n\r\nPlease cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\n\r\n```\r\n\r\n@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={&lt;p&gt;In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.&lt;/p&#38;gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }\r\n\r\n```\r\n\r\n\r\n\r\n", "Hi, @lhoestq I have resolved all the comments you have raised. Can you review the PR again? 
However, I do need assistance on how to remove other files that came along in my PR. Should I manually delete unwanted files from the PR raised?", "I am closing this PR, @lhoestq please review this PR instead https://github.com/huggingface/datasets/pull/975 where I have removed the unwanted files of other datasets and addressed each of your points. " ]
2020-11-30T22:09:49
2020-12-02T00:37:54
2020-12-02T00:37:54
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/932", "html_url": "https://github.com/huggingface/datasets/pull/932", "diff_url": "https://github.com/huggingface/datasets/pull/932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/932.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/932/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/931/comments
https://api.github.com/repos/huggingface/datasets/issues/931/events
https://github.com/huggingface/datasets/pull/931
753,818,193
MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz
931
[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
null
[]
null
[ "Thanks for your contribution, @thomwolf. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest that you create this dataset there. Please, feel free to tell us if you need some help." ]
2020-11-30T21:30:21
2022-10-03T09:40:09
2022-10-03T09:40:09
MEMBER
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/931", "html_url": "https://github.com/huggingface/datasets/pull/931", "diff_url": "https://github.com/huggingface/datasets/pull/931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/931.patch", "merged_at": null }
Have a string `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1` Didn't managed to see how to solve that. Putting aside for now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/931/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/931/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/930/comments
https://api.github.com/repos/huggingface/datasets/issues/930/events
https://github.com/huggingface/datasets/pull/930
753,801,204
MDExOlB1bGxSZXF1ZXN0NTI5ODA5MzM1
930
Lambada
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T21:02:33
2020-12-01T00:37:12
2020-12-01T00:37:11
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/930", "html_url": "https://github.com/huggingface/datasets/pull/930", "diff_url": "https://github.com/huggingface/datasets/pull/930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/930.patch", "merged_at": "2020-12-01T00:37:11" }
Added LAMBADA dataset. A couple of points of attention (mostly because I am not sure) - The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples. - The dev and test splits don't have the `category` field so I put `None` by default. Happy to make changes if it doesn't respect the guidelines! Victor
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/930/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/929/comments
https://api.github.com/repos/huggingface/datasets/issues/929/events
https://github.com/huggingface/datasets/pull/929
753,737,794
MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3
929
Add weibo NER dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T19:22:47
2020-12-03T13:36:55
2020-12-03T13:36:54
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/929", "html_url": "https://github.com/huggingface/datasets/pull/929", "diff_url": "https://github.com/huggingface/datasets/pull/929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/929.patch", "merged_at": "2020-12-03T13:36:54" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/929/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/928/comments
https://api.github.com/repos/huggingface/datasets/issues/928/events
https://github.com/huggingface/datasets/pull/928
753,722,324
MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx
928
Add the Multilingual Amazon Reviews Corpus
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T18:58:06
2020-12-01T16:04:30
2020-12-01T16:04:27
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/928", "html_url": "https://github.com/huggingface/datasets/pull/928", "diff_url": "https://github.com/huggingface/datasets/pull/928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/928.patch", "merged_at": "2020-12-01T16:04:27" }
- **Name:** Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`) - **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese. - **Paper:** https://arxiv.org/abs/2010.02573 ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/928/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/928/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/927/comments
https://api.github.com/repos/huggingface/datasets/issues/927/events
https://github.com/huggingface/datasets/issues/927
753,679,020
MDU6SXNzdWU3NTM2NzkwMjA=
927
Hello
{ "login": "k125-ak", "id": 75259546, "node_id": "MDQ6VXNlcjc1MjU5NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k125-ak", "html_url": "https://github.com/k125-ak", "followers_url": "https://api.github.com/users/k125-ak/followers", "following_url": "https://api.github.com/users/k125-ak/following{/other_user}", "gists_url": "https://api.github.com/users/k125-ak/gists{/gist_id}", "starred_url": "https://api.github.com/users/k125-ak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/k125-ak/subscriptions", "organizations_url": "https://api.github.com/users/k125-ak/orgs", "repos_url": "https://api.github.com/users/k125-ak/repos", "events_url": "https://api.github.com/users/k125-ak/events{/privacy}", "received_events_url": "https://api.github.com/users/k125-ak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T17:50:05
2020-11-30T17:50:30
2020-11-30T17:50:30
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/927/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/926/comments
https://api.github.com/repos/huggingface/datasets/issues/926/events
https://github.com/huggingface/datasets/pull/926
753,676,069
MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy
926
add inquisitive
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?", "> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should definitely find a way to make it work with only a few articles.\r\n\r\nIf it doesn't work right now for dummy data, I guess it's because it tries to load every single article file ?\r\n\r\nIf so, then maybe you can use `os.listdir` method to first check all the data files available in the path where the `articles.tgz` file is extracted. Then you can simply iter through the data files and depending on their ID, include them in the train or test set. With this method you should be able to have only a few articles files per split in the dummy data. Does that make sense ?", "fixed! so the issue was, `articles_ids` were prepared based on the number of files in articles dir, so for dummy data questions it was not able to load some articles due to incorrect ids and the test was failing" ]
2020-11-30T17:45:22
2020-12-02T13:45:22
2020-12-02T13:40:13
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/926", "html_url": "https://github.com/huggingface/datasets/pull/926", "diff_url": "https://github.com/huggingface/datasets/pull/926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/926.patch", "merged_at": "2020-12-02T13:40:13" }
Adding inquisitive qg dataset More info: https://github.com/wjko2/INQUISITIVE
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/926/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/925/comments
https://api.github.com/repos/huggingface/datasets/issues/925/events
https://github.com/huggingface/datasets/pull/925
753,672,661
MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4
925
Add Turku NLP Corpus for Finnish NER
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller keep it like this?\r\n\r\n" ]
2020-11-30T17:40:19
2020-12-03T14:07:11
2020-12-03T14:07:10
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/925", "html_url": "https://github.com/huggingface/datasets/pull/925", "diff_url": "https://github.com/huggingface/datasets/pull/925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/925.patch", "merged_at": "2020-12-03T14:07:10" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/925/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/924/comments
https://api.github.com/repos/huggingface/datasets/issues/924/events
https://github.com/huggingface/datasets/pull/924
753,631,951
MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw
924
Add DART
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "LGTM!" ]
2020-11-30T16:42:37
2020-12-02T03:13:42
2020-12-02T03:13:41
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/924", "html_url": "https://github.com/huggingface/datasets/pull/924", "diff_url": "https://github.com/huggingface/datasets/pull/924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/924.patch", "merged_at": "2020-12-02T03:13:41" }
- **Name:** *DART* - **Description:** *DART is a large dataset for open-domain structured data record to text generation.* - **Paper:** *https://arxiv.org/abs/2007.02871* - **Data:** *https://github.com/Yale-LILY/dart#leaderboard* ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/924/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/923/comments
https://api.github.com/repos/huggingface/datasets/issues/923/events
https://github.com/huggingface/datasets/pull/923
753,569,220
MDExOlB1bGxSZXF1ZXN0NTI5NjIyMDQx
923
Add CC-100 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "Hello @lhoestq, I would like just to ask you if it is OK that I include this feature 9f32ba1 in this PR or you would prefer to have it in a separate one.\r\n\r\nI was wondering whether include also a test, but I did not find any test for the other file formats...", "Hi ! Sure that would be valuable to support .xz files. Feel free to open a separate PR for this.\r\nAnd feel free to create the first test case for extracting compressed files if you have some inspiration (maybe create test_file_utils.py ?). We can still spend more time on tests next week when the sprint is over though so don't spend too much time on it.", "@lhoestq, DONE! ;) See PR #950.", "Thanks for adding support for `.xz` files :)\r\n\r\nFeel free to rebase from master to include it in your PR", "@lhoestq DONE; I have merged instead, to avoid changing the history of my public PR ;)", "Hi @lhoestq, I would need that you generate the dataset_infos.json and the dummy data for this dataset with a bigger computer. Sorry, but my laptop did not succeed...", "Thanks for your work @albertvillanova \r\nWe'll definitely look into it after this sprint :)", "Looks like #1456 added CC100 already.\r\nThe difference with your approach is that this implementation uses the `BuilderConfig` parameters to allow the creation of custom configs for all the languages, without having to specify them in the `BUILDER_CONFIGS` class attribute.\r\nFor example even if the dataset doesn't have a config for english already, you can still load the english CC100 with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cc100\", lang=\"en\")\r\n```", "@lhoestq, oops!! I remember having assigned this dataset to me in the Google sheet, besides having mentioned the corresponding issue in the Pull Request... Nevermind! :)", "Yes indeed I can see that...\r\nSorry for noticing that only now \r\n\r\nThe code of the other PR ended up being pretty close to yours though\r\nIf you want to add more details to the cc100 dataset card or in the script feel to do so, any addition is welcome" ]
2020-11-30T15:23:22
2021-04-20T13:34:17
2021-04-20T13:34:17
MEMBER
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/923", "html_url": "https://github.com/huggingface/datasets/pull/923", "diff_url": "https://github.com/huggingface/datasets/pull/923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/923.patch", "merged_at": null }
Add CC-100. Close #773
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/923/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/922/comments
https://api.github.com/repos/huggingface/datasets/issues/922/events
https://github.com/huggingface/datasets/pull/922
753,559,130
MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4
922
Add XOR QA Dataset
{ "login": "sumanthd17", "id": 28291870, "node_id": "MDQ6VXNlcjI4MjkxODcw", "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumanthd17", "html_url": "https://github.com/sumanthd17", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "repos_url": "https://api.github.com/users/sumanthd17/repos", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)", "I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite ", "> I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite\r\n\r\nThe best way is to run the tagging app locally and provide it the location to the `dataset_infos.json` after you've run the CLI:\r\nhttps://github.com/huggingface/datasets-tagging\r\n", "This is a really good data card!!\r\n\r\nSmall changes to make it even better:\r\n- Tags: the dataset has both \"original\" data and data that is \"extended\" from a source dataset: TydiQA - you should choose both options in the tagging apps\r\n- The language and annotation creator tags are off: the language here is the questions: I understand it's a mix of crowd-sourced and expert-generated? Is there any machine translation involved? The annotations are the span selections: is that crowd-sourced?\r\n- Personal and sensitive information: there should be a statement there, even if only to say that none could be found or that it only mentions public figures" ]
2020-11-30T15:10:54
2020-12-02T03:12:21
2020-12-02T03:12:21
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/922", "html_url": "https://github.com/huggingface/datasets/pull/922", "diff_url": "https://github.com/huggingface/datasets/pull/922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/922.patch", "merged_at": "2020-12-02T03:12:21" }
Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/922/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/920/comments
https://api.github.com/repos/huggingface/datasets/issues/920/events
https://github.com/huggingface/datasets/pull/920
753,445,747
MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz
920
add dream dataset
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can't find the information just leave `[More Information Needed]`", "@lhoestq since datset cards are optional for this sprint I'll add those later. Good for merge.", "Indeed we only require the tags to be added now (the yaml part at the top of the dataset card).\r\nCould you add them please ?\r\nYou can find more infos here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card", "@lhoestq added tags, I'll fill rest of the info after current sprint :)", "The tests are failing tests for other datasets, not this one.", "@lhoestq could you tell me why these tests are failing, they don't seem related to this PR. " ]
2020-11-30T12:40:14
2020-12-03T16:45:12
2020-12-02T15:39:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/920", "html_url": "https://github.com/huggingface/datasets/pull/920", "diff_url": "https://github.com/huggingface/datasets/pull/920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/920.patch", "merged_at": "2020-12-02T15:39:12" }
Adding Dream: a Dataset and for Dialogue-Based Reading Comprehension More details: https://dataset.org/dream/ https://github.com/nlpdata/dream
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/920/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/919/comments
https://api.github.com/repos/huggingface/datasets/issues/919/events
https://github.com/huggingface/datasets/issues/919
753,434,472
MDU6SXNzdWU3NTM0MzQ0NzI=
919
wrong length with datasets
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
2020-11-30T12:23:39
2020-11-30T12:37:27
2020-11-30T12:37:26
CONTRIBUTOR
null
null
null
Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/919/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/918/comments
https://api.github.com/repos/huggingface/datasets/issues/918/events
https://github.com/huggingface/datasets/pull/918
753,397,440
MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4
918
Add conll2002
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-30T11:29:35
2020-11-30T18:34:30
2020-11-30T18:34:29
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/918", "html_url": "https://github.com/huggingface/datasets/pull/918", "diff_url": "https://github.com/huggingface/datasets/pull/918.diff", "patch_url": "https://github.com/huggingface/datasets/pull/918.patch", "merged_at": "2020-11-30T18:34:29" }
Adding the Conll2002 dataset for NER. More info here : https://www.clips.uantwerpen.be/conll2002/ner/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/918/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/917/comments
https://api.github.com/repos/huggingface/datasets/issues/917/events
https://github.com/huggingface/datasets/pull/917
753,391,591
MDExOlB1bGxSZXF1ZXN0NTI5NDc5MTIy
917
Addition of Concode Dataset
{ "login": "reshinthadithyan", "id": 36307201, "node_id": "MDQ6VXNlcjM2MzA3MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reshinthadithyan", "html_url": "https://github.com/reshinthadithyan", "followers_url": "https://api.github.com/users/reshinthadithyan/followers", "following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}", "gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions", "organizations_url": "https://api.github.com/users/reshinthadithyan/orgs", "repos_url": "https://api.github.com/users/reshinthadithyan/repos", "events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}", "received_events_url": "https://api.github.com/users/reshinthadithyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Testing command doesn't work\r\n###trace\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests/test_dataset_common.py - absl.testing.parameterized.NoTestsError: parameterized test decorators did not generate any tests. Ma...\r\n====================================================== 2 warnings, 1 error in 54.23s ======================================================= \r\nERROR: not found: G:\\Work Related\\hf\\datasets\\tests\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode\r\n(no name 'G:\\\\Work Related\\\\hf\\\\datasets\\\\tests\\\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode' in any of [<Module test_dataset_common.py>])\r\n", "Hello @lhoestq Test checks are passing in my local, but the commit fails in ci. Any idea onto why? \r\n#### Dummy Dataset Test \r\n====================================================== 1 passed, 6 warnings in 7.14s ======================================================= \r\n#### Real Dataset Test \r\n====================================================== 1 passed, 6 warnings in 25.54s ====================================================== ", "Hello @lhoestq, Have a look, I've changed the file according to the reviews. Thanks!", "@reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)", "> @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n\r\nHello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks", "> > @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n> \r\n> Hello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks\r\n\r\nHi @reshinthadithyan ! Did you try with the latest version of the tagger? What issues are you facing?\r\n\r\nWe're also relaxed the dataset requirement for now, you'll only add to add the tags :) ", "Could you work on another branch when adding different datasets ?\r\nThe idea is to have one PR per dataset", "Thanks ! The github diff looks all clean now :) \r\nTo fix the CI you just need to rebase from master\r\n\r\nDon't forget to add the tags of the dataset card. It's the yaml part at the top of the dataset card\r\nMore infor here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nThe issue you had with the tagger should be fixed now by https://github.com/huggingface/datasets-tagging/pull/5\r\n" ]
2020-11-30T11:20:59
2020-12-29T02:55:36
2020-12-29T02:55:36
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/917", "html_url": "https://github.com/huggingface/datasets/pull/917", "diff_url": "https://github.com/huggingface/datasets/pull/917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/917.patch", "merged_at": null }
##Overview Concode Dataset contains pairs of Nl Queries and the corresponding Code.(Contextual Code Generation) Reference Links Paper Link = https://arxiv.org/pdf/1904.09086.pdf Github Link =https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/917/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/916/comments
https://api.github.com/repos/huggingface/datasets/issues/916/events
https://github.com/huggingface/datasets/pull/916
753,376,643
MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx
916
Add Swedish NER Corpus
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes the use of configs is optional", "@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[More Information Needed]` text" ]
2020-11-30T10:59:51
2020-12-02T03:10:50
2020-12-02T03:10:49
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/916", "html_url": "https://github.com/huggingface/datasets/pull/916", "diff_url": "https://github.com/huggingface/datasets/pull/916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/916.patch", "merged_at": "2020-12-02T03:10:49" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/916/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/915/comments
https://api.github.com/repos/huggingface/datasets/issues/915/events
https://github.com/huggingface/datasets/issues/915
753,118,481
MDU6SXNzdWU3NTMxMTg0ODE=
915
Shall we change the hashing to encoding to reduce potential replicated cache files?
{ "login": "zhuzilin", "id": 10428324, "node_id": "MDQ6VXNlcjEwNDI4MzI0", "avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhuzilin", "html_url": "https://github.com/zhuzilin", "followers_url": "https://api.github.com/users/zhuzilin/followers", "following_url": "https://api.github.com/users/zhuzilin/following{/other_user}", "gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions", "organizations_url": "https://api.github.com/users/zhuzilin/orgs", "repos_url": "https://api.github.com/users/zhuzilin/repos", "events_url": "https://api.github.com/users/zhuzilin/events{/privacy}", "received_events_url": "https://api.github.com/users/zhuzilin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?", "@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset." ]
2020-11-30T03:50:46
2020-12-24T05:11:49
null
NONE
null
null
null
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/915/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/914/comments
https://api.github.com/repos/huggingface/datasets/issues/914/events
https://github.com/huggingface/datasets/pull/914
752,956,106
MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3
914
Add list_github_datasets api for retrieving dataset name list in github repo
{ "login": "zhuzilin", "id": 10428324, "node_id": "MDQ6VXNlcjEwNDI4MzI0", "avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhuzilin", "html_url": "https://github.com/zhuzilin", "followers_url": "https://api.github.com/users/zhuzilin/followers", "following_url": "https://api.github.com/users/zhuzilin/following{/other_user}", "gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions", "organizations_url": "https://api.github.com/users/zhuzilin/orgs", "repos_url": "https://api.github.com/users/zhuzilin/repos", "events_url": "https://api.github.com/users/zhuzilin/events{/privacy}", "received_events_url": "https://api.github.com/users/zhuzilin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?", "> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip`", "`GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?", "> `GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?\r\n\r\nYes, much faster! Thank you!" ]
2020-11-29T16:42:15
2020-12-02T07:21:16
2020-12-02T07:21:16
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/914", "html_url": "https://github.com/huggingface/datasets/pull/914", "diff_url": "https://github.com/huggingface/datasets/pull/914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/914.patch", "merged_at": null }
Thank you for your great effort on unifying data processing for NLP! This pr is trying to add a new api `list_github_datasets` in the `inspect` module. The reason for it is that the current `list_datasets` api need to access https://huggingface.co/api/datasets to get a large json. However, this connection can be really slow... (I was visiting from China) and from my own experience, most of the time `requests.get` failed to download the whole json after a long wait and will trigger fault in `r.json()`. I also noticed that the current implementation will first try to download from github, which makes me be able to smoothly run `load_dataset('squad')` in the example. Therefore, I think it would be better if we can have an api to get the list of datasets that are available on github, and it will also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example.) before we have faster source for huggingface.co. As for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure: ```json { "id": "aeslc", "folder": "datasets/aeslc", "dataset_infos": "datasets/aeslc/dataset_infos.json" }, ... { "id": "json", "folder": "datasets/json" }, ... ``` The script I used to get this file is: ```python import json import os DATASETS_BASE_DIR = "/root/datasets" DATASET_INFOS_JSON = "dataset_infos.json" datasets = [] for item in os.listdir(os.path.join(DATASETS_BASE_DIR, "datasets")): if os.path.isdir(os.path.join(DATASETS_BASE_DIR, "datasets", item)): datasets.append(item) datasets.sort() total_ds_info = [] for ds in datasets: ds_dir = os.path.join("datasets", ds) ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON) if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)): total_ds_info.append({"id": ds, "folder": ds_dir, "dataset_infos": ds_info_dir}) else: total_ds_info.append({"id": ds, "folder": ds_dir}) with open(DATASET_INFOS_JSON, "w") as f: json.dump(total_ds_info, f) ``` The new `dataset_infos.json` was saved as a formated json so that it will be easy to add new dataset. When calling `list_github_datasets`, the user will get the list of dataset names in this github repo and if `with_details` is set to be `True`, they can get the url of specific dataset info. Thank you for your time on reviewing this pr :).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/914/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/913/comments
https://api.github.com/repos/huggingface/datasets/issues/913/events
https://github.com/huggingface/datasets/pull/913
752,892,020
MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3
913
My new dataset PEC
{ "login": "zhongpeixiang", "id": 11826803, "node_id": "MDQ6VXNlcjExODI2ODAz", "avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongpeixiang", "html_url": "https://github.com/zhongpeixiang", "followers_url": "https://api.github.com/users/zhongpeixiang/followers", "following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}", "gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions", "organizations_url": "https://api.github.com/users/zhongpeixiang/orgs", "repos_url": "https://api.github.com/users/zhongpeixiang/repos", "events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}", "received_events_url": "https://api.github.com/users/zhongpeixiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "How to resolve these failed checks?", "Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor example : `encoding=\"utf-8\"`\r\nTo fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\n", "Could you also add a dataset card ? you can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThat would be awesome", "> Thanks for adding this one :)\r\n> \r\n> To fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\n> To fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\n> For example : `encoding=\"utf-8\"`\r\n> To fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nThank you for the detailed suggestion.\r\n\r\nI have added dummy_data but it still failed the DistributedDatasetTest check. My dataset has a central file (containing a python dict) that needs to be accessed by each data example. Is it because the central file cannot be distributed (which would lead to a partial dictionary)?\r\n\r\nSpecifically, the central file contains a dictionary of speakers with their attributes. Each data example is also associated with a speaker. As of now, I keep the central file and data files separately. If I remove the central file by appending the speaker attributes to each data example, then there would be lots of redundancy because there are lots of duplicate speakers in the data files.", "The `DistributedDatasetTest` fail and the changes of this PR are not related, there was just a bug in the CI. You can ignore it", "> Really cool thanks !\r\n> \r\n> Could you make the dummy files smaller ? For example by reducing the size of persona.txt ?\r\n> I also left a comment about the files concatenation. It would be cool to replace that with simple iterations through the different files.\r\n> \r\n> Then once this is done, you can add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If some fields can't be filled, just leave `[N/A]`\r\n\r\nSmall change: if you don't have the information for a field, please leave `[More Information Needed]` rather than `[N/A]`\r\n\r\nThe full information can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)" ]
2020-11-29T11:10:37
2020-12-01T10:41:53
2020-12-01T10:41:53
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/913", "html_url": "https://github.com/huggingface/datasets/pull/913", "diff_url": "https://github.com/huggingface/datasets/pull/913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/913.patch", "merged_at": null }
A new dataset, PEC, published in EMNLP 2020.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/913/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/911/comments
https://api.github.com/repos/huggingface/datasets/issues/911/events
https://github.com/huggingface/datasets/issues/911
752,806,215
MDU6SXNzdWU3NTI4MDYyMTU=
911
datasets module not found
{ "login": "sbassam", "id": 15836274, "node_id": "MDQ6VXNlcjE1ODM2Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbassam", "html_url": "https://github.com/sbassam", "followers_url": "https://api.github.com/users/sbassam/followers", "following_url": "https://api.github.com/users/sbassam/following{/other_user}", "gists_url": "https://api.github.com/users/sbassam/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbassam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbassam/subscriptions", "organizations_url": "https://api.github.com/users/sbassam/orgs", "repos_url": "https://api.github.com/users/sbassam/repos", "events_url": "https://api.github.com/users/sbassam/events{/privacy}", "received_events_url": "https://api.github.com/users/sbassam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "nvm, I'd made an assumption that the library gets installed with transformers. " ]
2020-11-29T01:24:15
2020-11-29T14:33:09
2020-11-29T14:33:09
NONE
null
null
null
Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/911/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/910/comments
https://api.github.com/repos/huggingface/datasets/issues/910/events
https://github.com/huggingface/datasets/issues/910
752,772,723
MDU6SXNzdWU3NTI3NzI3MjM=
910
Grindr meeting app web.Grindr
{ "login": "jackin34", "id": 75184749, "node_id": "MDQ6VXNlcjc1MTg0NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackin34", "html_url": "https://github.com/jackin34", "followers_url": "https://api.github.com/users/jackin34/followers", "following_url": "https://api.github.com/users/jackin34/following{/other_user}", "gists_url": "https://api.github.com/users/jackin34/gists{/gist_id}", "starred_url": "https://api.github.com/users/jackin34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackin34/subscriptions", "organizations_url": "https://api.github.com/users/jackin34/orgs", "repos_url": "https://api.github.com/users/jackin34/repos", "events_url": "https://api.github.com/users/jackin34/events{/privacy}", "received_events_url": "https://api.github.com/users/jackin34/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-28T21:36:23
2020-11-29T10:11:51
2020-11-29T10:11:51
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/910/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/909/comments
https://api.github.com/repos/huggingface/datasets/issues/909/events
https://github.com/huggingface/datasets/pull/909
752,508,299
MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz
909
Add FiNER dataset
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card\r\n", "Thanks your suggestions! I've fixed them, and currently working on the dataset card!", "@yjernite and @lhoestq I will add the dataset card a bit later in a separate PR if that's ok for you!", "Yes I want to re-emphasize if it was not clear that dataset cards are optional for the sprint. \r\n\r\nOnly the tags are required for merging a datasets.\r\n\r\nPlease try to enforce this rule as well @lhoestq and @yjernite ", "Yes @stefan-it if you could just add the tags (the yaml part at the top of the dataset card) that'd be perfect :) ", "Oh, sorry, will add them now!\r\n", "Initial README file is now added :) ", "the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
2020-11-27T23:54:20
2020-12-07T16:56:23
2020-12-07T16:56:23
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/909", "html_url": "https://github.com/huggingface/datasets/pull/909", "diff_url": "https://github.com/huggingface/datasets/pull/909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/909.patch", "merged_at": "2020-12-07T16:56:23" }
Hi, this PR adds "A Finnish News Corpus for Named Entity Recognition" as a new `finer` dataset. The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data). Notice: they provide two test sets. The additional test set, taken from Wikipedia, is named the "test_wikipedia" split.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/909/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/908/comments
https://api.github.com/repos/huggingface/datasets/issues/908/events
https://github.com/huggingface/datasets/pull/908
752,428,652
MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz
908
Add dependency on black for tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..." ]
2020-11-27T19:12:48
2020-11-27T21:46:53
2020-11-27T21:46:52
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/908", "html_url": "https://github.com/huggingface/datasets/pull/908", "diff_url": "https://github.com/huggingface/datasets/pull/908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/908.patch", "merged_at": null }
Add package 'black' as an installation requirement for tests.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/908/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/907/comments
https://api.github.com/repos/huggingface/datasets/issues/907/events
https://github.com/huggingface/datasets/pull/907
752,422,351
MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx
907
Remove os.path.join from all URLs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-27T18:55:30
2020-11-29T22:48:20
2020-11-29T22:48:19
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/907", "html_url": "https://github.com/huggingface/datasets/pull/907", "diff_url": "https://github.com/huggingface/datasets/pull/907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/907.patch", "merged_at": "2020-11-29T22:48:19" }
Remove `os.path.join` from all URLs in dataset scripts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/907/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/907/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/906/comments
https://api.github.com/repos/huggingface/datasets/issues/906/events
https://github.com/huggingface/datasets/pull/906
752,403,395
MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0
906
Fix url with backslash in windows for blimp and pg19
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-27T17:59:11
2020-11-27T18:19:56
2020-11-27T18:19:56
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/906", "html_url": "https://github.com/huggingface/datasets/pull/906", "diff_url": "https://github.com/huggingface/datasets/pull/906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/906.patch", "merged_at": "2020-11-27T18:19:55" }
Following #903, I also fixed blimp and pg19, which were using `os.path.join` to create URLs. cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/906/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/905/comments
https://api.github.com/repos/huggingface/datasets/issues/905/events
https://github.com/huggingface/datasets/pull/905
752,395,456
MDExOlB1bGxSZXF1ZXN0NTI4NzI3OTEy
905
Disallow backslash in urls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like the test doesn't detect all the problems fixed by #907 , I'll fix that", "Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `os.path.join` on windows when the first path ends with a slash : \r\n\r\n```python\r\nimport os\r\nos.path.join(\"https://test.com/foo\", \"bar.txt\")\r\n# 'https://test.com/foo\\\\bar.txt'\r\nos.path.join(\"https://test.com/foo/\", \"bar.txt\")\r\n# 'https://test.com/foo/bar.txt'\r\n```\r\n\r\nHowever even though the urls are correct, this is definitely bad practice and we should never use `os.path.join` for urls" ]
2020-11-27T17:38:28
2020-11-29T22:48:37
2020-11-29T22:48:36
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/905", "html_url": "https://github.com/huggingface/datasets/pull/905", "diff_url": "https://github.com/huggingface/datasets/pull/905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/905.patch", "merged_at": "2020-11-29T22:48:36" }
Following #903, @albertvillanova noticed that there is sometimes bad usage of `os.path.join` in dataset scripts to create URLs. However, this should be avoided since it doesn't work on Windows. I'm suggesting a test to make sure that all the URLs in the dataset scripts don't have backslashes in them. The test works by adding a callback feature to the MockDownloadManager used to test the dataset scripts. In a download callback I just make sure that the URL is valid.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/905/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/904/comments
https://api.github.com/repos/huggingface/datasets/issues/904/events
https://github.com/huggingface/datasets/pull/904
752,372,743
MDExOlB1bGxSZXF1ZXN0NTI4NzA5NTUx
904
Very detailed step-by-step on how to add a dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome! Thanks @lhoestq " ]
2020-11-27T16:45:21
2020-11-30T09:56:27
2020-11-30T09:56:26
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/904", "html_url": "https://github.com/huggingface/datasets/pull/904", "diff_url": "https://github.com/huggingface/datasets/pull/904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/904.patch", "merged_at": "2020-11-30T09:56:26" }
Add very detailed step-by-step instructions to add a new dataset to the library.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/904/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/903/comments
https://api.github.com/repos/huggingface/datasets/issues/903/events
https://github.com/huggingface/datasets/pull/903
752,360,614
MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3
903
Fix URL with backslash in Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I was indeed working on that... to make another commit on this feature branch...", "But as you prefer... nevermind! :)", "Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion", "Indeed I was thinking of something similar: monckeypatching the HTTP request...", "Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...", "If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (/src/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.", "Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354", "I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now" ]
2020-11-27T16:26:24
2020-11-27T18:04:46
2020-11-27T18:04:46
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/903", "html_url": "https://github.com/huggingface/datasets/pull/903", "diff_url": "https://github.com/huggingface/datasets/pull/903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/903.patch", "merged_at": "2020-11-27T18:04:46" }
On Windows, `os.path.join` generates URLs containing backslashes when the first "path" does not end with a slash. In general, `os.path.join` should be avoided when generating URLs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/903/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/902/comments
https://api.github.com/repos/huggingface/datasets/issues/902/events
https://github.com/huggingface/datasets/pull/902
752,345,739
MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw
902
Follow cache_dir parameter to gcs downloader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-27T16:02:06
2020-11-29T22:48:54
2020-11-29T22:48:53
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/902", "html_url": "https://github.com/huggingface/datasets/pull/902", "diff_url": "https://github.com/huggingface/datasets/pull/902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/902.patch", "merged_at": "2020-11-29T22:48:53" }
As noticed in #900, the cache_dir parameter was not passed to the downloader in the case of an already processed dataset hosted on our Google Storage (one of them is natural questions). Fix #900
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/902/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/901/comments
https://api.github.com/repos/huggingface/datasets/issues/901/events
https://github.com/huggingface/datasets/pull/901
752,233,851
MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5
901
Addition of Nl2Bash Dataset
{ "login": "reshinthadithyan", "id": 36307201, "node_id": "MDQ6VXNlcjM2MzA3MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reshinthadithyan", "html_url": "https://github.com/reshinthadithyan", "followers_url": "https://api.github.com/users/reshinthadithyan/followers", "following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}", "gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions", "organizations_url": "https://api.github.com/users/reshinthadithyan/orgs", "repos_url": "https://api.github.com/users/reshinthadithyan/repos", "events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}", "received_events_url": "https://api.github.com/users/reshinthadithyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do? Thanks. ", "@reshinthadithyan we should hold off on this for a couple of weeks till NeurIPS concludes. The [NLC2CMD](http://nlc2cmd.us-east.mybluemix.net/) data will be out then; which includes a cleaner version of this NL2Bash data. The older data is sort of obsolete now. ", "Ah nvm you already commented 😆 " ]
2020-11-27T12:53:55
2020-11-29T18:09:25
2020-11-29T18:08:51
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/901", "html_url": "https://github.com/huggingface/datasets/pull/901", "diff_url": "https://github.com/huggingface/datasets/pull/901.diff", "patch_url": "https://github.com/huggingface/datasets/pull/901.patch", "merged_at": null }
## Overview The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities. ## Footnotes The following dataset marks the first ML-on-source-code related dataset in the datasets module. It'll be really useful, as a lot of the research direction involves Transformer-based models. Thanks. ### Reference Links > Paper Link = https://arxiv.org/pdf/1802.08979.pdf > Github Link = https://github.com/TellinaTool/nl2bash
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/901/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/900/comments
https://api.github.com/repos/huggingface/datasets/issues/900/events
https://github.com/huggingface/datasets/issues/900
752,214,066
MDU6SXNzdWU3NTIyMTQwNjY=
900
datasets.load_dataset() custom caching directory bug
{ "login": "SapirWeissbuch", "id": 44585792, "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SapirWeissbuch", "html_url": "https://github.com/SapirWeissbuch", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! I'm looking into it." ]
2020-11-27T12:18:53
2020-11-29T22:48:53
2020-11-29T22:48:53
NONE
null
null
null
Hello, I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets from pathlib import Path validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data")) ``` ## The output: * The dataset is downloaded to my home directory's `.cache` * A new empty directory named "`natural_questions` is created in the specified directory `.data` * `tree data` in the shell outputs: ``` data └── natural_questions └── default └── 0.0.2 3 directories, 0 files ``` The output: ``` Downloading: 8.61kB [00:00, 5.11MB/s] Downloading: 13.6kB [00:00, 7.89MB/s] Using custom data configuration default Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b8 3ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531... Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s] Downloading: 7%|███▎ | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s] ``` ## Expected behaviour: The dataset "Natural Questions" should be downloaded to the directory "./data"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/900/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/899/comments
https://api.github.com/repos/huggingface/datasets/issues/899/events
https://github.com/huggingface/datasets/pull/899
752,191,227
MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz
899
Allow arrow based builder in auto dummy data generation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-27T11:39:38
2020-11-27T13:30:09
2020-11-27T13:30:08
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/899", "html_url": "https://github.com/huggingface/datasets/pull/899", "diff_url": "https://github.com/huggingface/datasets/pull/899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/899.patch", "merged_at": "2020-11-27T13:30:08" }
Following #898, I added support for arrow-based builders in the auto dummy data generator.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/899/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/898/comments
https://api.github.com/repos/huggingface/datasets/issues/898/events
https://github.com/huggingface/datasets/pull/898
752,148,284
MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1
898
Adding SQA dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This dataset seems to have around 1000 configs. Therefore when creating the dummy data we end up with hundreds of MB of dummy data which we don't want to add in the repo.\r\nLet's make this PR on hold for now and find a solution after the sprint of next week", "Closing in favor of #1566 " ]
2020-11-27T10:29:18
2020-12-15T12:54:40
2020-12-15T12:54:19
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/898", "html_url": "https://github.com/huggingface/datasets/pull/898", "diff_url": "https://github.com/huggingface/datasets/pull/898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/898.patch", "merged_at": null }
As discussed in #880, it seems like automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`. Do you think you could take a look, @lhoestq?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/898/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/897/comments
https://api.github.com/repos/huggingface/datasets/issues/897/events
https://github.com/huggingface/datasets/issues/897
752,100,256
MDU6SXNzdWU3NTIxMDAyNTY=
897
Dataset viewer issues
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?", "Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time", "9", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ ", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ " ]
2020-11-27T09:14:34
2021-10-31T09:12:01
2021-10-31T09:12:01
CONTRIBUTOR
null
null
null
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are coded correctly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/896/comments
https://api.github.com/repos/huggingface/datasets/issues/896/events
https://github.com/huggingface/datasets/pull/896
751,834,265
MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0
896
Add template and documentation for dataset card
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-26T21:30:25
2020-11-28T01:10:15
2020-11-28T01:10:15
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/896", "html_url": "https://github.com/huggingface/datasets/pull/896", "diff_url": "https://github.com/huggingface/datasets/pull/896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/896.patch", "merged_at": "2020-11-28T01:10:14" }
This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora. New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will use to index the datasets and as a data statement. The template is designed to be pretty extensive. The idea is that the person who uploads the dataset should put in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding, and leave the `[More Information Needed]` annotation everywhere else as a placeholder. We will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information. Direct links to: - [Documentation](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README_guide.md) - [Empty template](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README.md) - [ELI5 example](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/datasets/eli5/README.md)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/896/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/895/comments
https://api.github.com/repos/huggingface/datasets/issues/895/events
https://github.com/huggingface/datasets/pull/895
751,782,295
MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3
895
Better messages regarding split naming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-26T18:55:46
2020-11-27T13:31:00
2020-11-27T13:30:59
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/895", "html_url": "https://github.com/huggingface/datasets/pull/895", "diff_url": "https://github.com/huggingface/datasets/pull/895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/895.patch", "merged_at": "2020-11-27T13:30:59" }
I made the error message explicit when a bad split name is used. Also, I wanted to allow the `-` symbol for split names, but this symbol is actually used to name the arrow files `{dataset_name}-{dataset_split}.arrow`, so we should probably keep it this way, i.e. not allow the `-` symbol in split names. Moreover, in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/895/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/894/comments
https://api.github.com/repos/huggingface/datasets/issues/894/events
https://github.com/huggingface/datasets/pull/894
751,734,905
MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy
894
Allow several tags sets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)" ]
2020-11-26T17:04:13
2021-05-05T18:24:17
2020-11-27T20:15:49
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/894", "html_url": "https://github.com/huggingface/datasets/pull/894", "diff_url": "https://github.com/huggingface/datasets/pull/894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/894.patch", "merged_at": null }
Hi! Currently we have three dataset cards: snli, cnn_dailymail and allocine. For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc. For certain datasets like `glue`, there exist several configurations: `sst2`, `mnli` etc. Therefore we should define one set of tags per configuration. However the current format used for tags only supports one set of tags per dataset. In this PR I propose a simple change in the yaml format used for tags to allow for several sets of tags. Let me know what you think; @julien-c, in particular, let me know if it's good for you since it's going to be parsed by moon-landing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/894/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/894/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/893/comments
https://api.github.com/repos/huggingface/datasets/issues/893/events
https://github.com/huggingface/datasets/pull/893
751,703,696
MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx
893
add metrec: arabic poetry dataset
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq removed prints and added the dataset card. ", "@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ", "Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n- The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`", "> Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n> \r\n> Couple of last comments:\r\n> \r\n> * this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n> * The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`\r\n\r\nI have no idea how some other files changed. I tried to rebase and push but this created some errors. I had to run the command \r\n`git push -u --force origin add-metrec-dataset` which might cause some problems. ", "Feel free to create another branch/another PR without all the other changes", "@yjernite can you explain which other files are changed because of the PR ? https://github.com/huggingface/datasets/pull/893/files only shows files related to the dataset. ", "Right ! github is nice with us today :)", "Looks like this one is ready to merge, thanks @zaidalyafeai !", "@lhoestq thanks for the merge. I am not a GitHub geek. I already have another dataset to add. I'm not sure how to add another given my forked repo. Do I follow the same steps with a different checkout name ?", "If you've followed the instructions in here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment\r\n\r\n(especially point 2. and the command `git remote add upstream ....`)\r\n\r\nThen you can try\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my-new-dataset-name>\r\n```" ]
2020-11-26T16:10:16
2020-12-01T16:24:55
2020-12-01T15:15:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/893", "html_url": "https://github.com/huggingface/datasets/pull/893", "diff_url": "https://github.com/huggingface/datasets/pull/893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/893.patch", "merged_at": "2020-12-01T15:15:07" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/893/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/892/comments
https://api.github.com/repos/huggingface/datasets/issues/892/events
https://github.com/huggingface/datasets/pull/892
751,658,262
MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1
892
Add a few datasets of reference in the documentation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?", "snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv", "merging this one.\r\nIf you think of other datasets of reference to add we can still add them later" ]
2020-11-26T15:02:39
2020-11-27T18:08:45
2020-11-27T18:08:44
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/892", "html_url": "https://github.com/huggingface/datasets/pull/892", "diff_url": "https://github.com/huggingface/datasets/pull/892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/892.patch", "merged_at": "2020-11-27T18:08:44" }
I started making a small list of various datasets of reference in the documentation. Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from. Let me know what you think, and if you have ideas of other datasets that we may add to this list, please let me know.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/892/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/891/comments
https://api.github.com/repos/huggingface/datasets/issues/891/events
https://github.com/huggingface/datasets/pull/891
751,576,869
MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3
891
gitignore .python-version
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-26T13:05:58
2020-11-26T13:28:27
2020-11-26T13:28:26
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/891", "html_url": "https://github.com/huggingface/datasets/pull/891", "diff_url": "https://github.com/huggingface/datasets/pull/891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/891.patch", "merged_at": "2020-11-26T13:28:26" }
ignore `.python-version` added by `pyenv`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/891/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/890/comments
https://api.github.com/repos/huggingface/datasets/issues/890/events
https://github.com/huggingface/datasets/pull/890
751,534,050
MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3
890
Add LER
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 257 files would be left unchanged.\r\nmake: *** [quality] Error 1\r\n", "Awesome thanks :)\r\nTo automatically format the python files you can run `make style`", "I did that now. But still getting the following error:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\nmake: *** [quality] Error 1\r\n\r\nHowever: When I look at the file I don't see any trailing whitespace", "maybe a bug with flake8 ? could you try to update it ? which version do you have ?", "This is my flake8 version: 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.8.5 on Darwin\r\n", "Now I updated to: 3.8.4 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 3.8.5 on Darwin\r\n\r\nAnd now I even get additional errors:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/polyglot_ner/polyglot_ner.py:123:64: F541 f-string is missing placeholders\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\ndatasets/math_dataset/math_dataset.py:233:25: E741 ambiguous variable name 'l'\r\nmetrics/coval/coval.py:236:31: F541 f-string is missing placeholders\r\nmake: *** [quality] Error 1\r\n\r\nI do this on macOS Catalina 10.15.7 in case this matters", "Code quality test now passes, thanks :) \r\n\r\nTo fix the other tests failing I think you can just rebase from master.\r\nAlso make sure that the dummy data test passes with\r\n```python\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_ler\r\n```", "I will close this PR because abishek did the same better (https://github.com/huggingface/datasets/pull/944)", "Sorry you had to close your PR ! It looks like this week's sprint doesn't always make it easy to see what's being added/what's already added. \r\nThank you for contributing to the library. You did a great job on adding LER so feel free to add other ones that you would like to see in the library, it will be a pleasure to review" ]
2020-11-26T11:58:23
2020-12-01T13:33:35
2020-12-01T13:26:16
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/890", "html_url": "https://github.com/huggingface/datasets/pull/890", "diff_url": "https://github.com/huggingface/datasets/pull/890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/890.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/890/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/889/comments
https://api.github.com/repos/huggingface/datasets/issues/889/events
https://github.com/huggingface/datasets/pull/889
751,115,691
MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2
889
Optional per-dataset default config name
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the default config.", "Maybe let's add a test in the test_builder.py test script ?", "@lhoestq Okay great, I added a test as well as two new inspect functions: `get_dataset_config_names` and `get_dataset_infos` (the latter is something I've been wanting anyway). As a quick hack, you can also just pass a random config name (e.g. an empty string) to `load_dataset` to get the config names in the error msg as before. Also added a couple paragraphs to the adding new datasets doc.\r\n\r\nI'll send a separate PR incorporating this in existing datasets so we can get this merged before our sprint on Monday.\r\n\r\nAny ideas on the failing tests? I'm having trouble making sense of it. **Edit**: nvm, it was master." ]
2020-11-25T21:02:30
2020-11-30T17:27:33
2020-11-30T17:27:27
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/889", "html_url": "https://github.com/huggingface/datasets/pull/889", "diff_url": "https://github.com/huggingface/datasets/pull/889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/889.patch", "merged_at": "2020-11-30T17:27:27" }
This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following: ```python ds = load_dataset("polyglot_ner") ``` which is equivalent to, ```python ds = load_dataset("polyglot_ner", "combined") ``` In effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages. Since it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual. Let me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/889/timeline
null
null
true
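A minimal sketch of the opt-in default-config mechanism described in PR #889 above, for readers skimming this record. The builder and config names (`MyDataset`, `combined`, `en`) are hypothetical placeholders; only `DEFAULT_CONFIG_NAME`, `BuilderConfig`, and the builder base class come from the PR discussion and the library itself.

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical dataset script illustrating the DEFAULT_CONFIG_NAME attribute."""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="combined", description="All languages together"),
        datasets.BuilderConfig(name="en", description="English subset"),
    ]
    # Opt-in default added by PR #889: used when the caller does not pass a config name.
    DEFAULT_CONFIG_NAME = "combined"

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        yield 0, {"text": "placeholder example"}
```

With such a script, `load_dataset("my_dataset")` would behave like `load_dataset("my_dataset", "combined")`, mirroring the polyglot_ner example in the PR body.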
https://api.github.com/repos/huggingface/datasets/issues/888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/888/comments
https://api.github.com/repos/huggingface/datasets/issues/888/events
https://github.com/huggingface/datasets/issues/888
750,944,422
MDU6SXNzdWU3NTA5NDQ0MjI=
888
Nested lists are zipped unexpectedly
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details", "Thanks.\r\nThis is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :) \r\n" ]
2020-11-25T16:07:46
2020-11-25T17:30:39
2020-11-25T17:30:39
CONTRIBUTOR
null
null
null
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/888/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/887/comments
https://api.github.com/repos/huggingface/datasets/issues/887/events
https://github.com/huggingface/datasets/issues/887
750,868,831
MDU6SXNzdWU3NTA4Njg4MzE=
887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?", "> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.", "Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet", "Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences", "@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n", "Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.", "The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n", "Sorry it doesn't work. Will let you know once I fixed it", "Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)", "Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.", "Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`", "Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.", "@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. 
For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. Let me know what you think.", "Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)" ]
2020-11-25T14:32:21
2021-09-09T17:03:40
null
CONTRIBUTOR
null
null
null
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/887/timeline
null
null
false
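For readers of #887 above: the maintainer's interim suggestion is to fall back to nested `Sequence` types until dynamic-size `ArrayXD` support lands. Below is a toy sketch of that workaround with made-up tiny shapes instead of the real `(None, 137, 2)` pose data; whether it scales to a given corpus depends on the size limits discussed later in the thread.

```python
import datasets

# Nested-Sequence fallback: a dynamic-length list of frames, each frame a list of
# keypoints, each keypoint a list of floats (plain Python lists, no ArrayXD).
features = datasets.Features(
    {
        "pose": datasets.features.Sequence(
            datasets.features.Sequence(
                datasets.features.Sequence(datasets.Value("float32"))
            )
        )
    }
)

ds = datasets.Dataset.from_dict(
    {"pose": [[[[0.0, 0.0], [1.0, 1.0]]]]},  # 1 example with 1 frame of 2 keypoints (toy sizes)
    features=features,
)
print(ds[0]["pose"])  # nested Python lists
```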
https://api.github.com/repos/huggingface/datasets/issues/886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/886/comments
https://api.github.com/repos/huggingface/datasets/issues/886/events
https://github.com/huggingface/datasets/pull/886
750,829,314
MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5
886
Fix wikipedia custom config
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)" ]
2020-11-25T13:44:12
2021-06-25T05:24:16
2020-11-25T15:42:13
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/886", "html_url": "https://github.com/huggingface/datasets/pull/886", "diff_url": "https://github.com/huggingface/datasets/pull/886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/886.patch", "merged_at": "2020-11-25T15:42:13" }
It should be possible to use the wikipedia dataset with any `language` and `date`. However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason. I fixed that and was able to run ```python from datasets import load_dataset load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner') ``` cc @stvhuang @SamuelCahyawijaya Fix #784
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/886/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/885/comments
https://api.github.com/repos/huggingface/datasets/issues/885/events
https://github.com/huggingface/datasets/issues/885
750,789,052
MDU6SXNzdWU3NTA3ODkwNTI=
885
Very slow cold-start
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Good point!", "Yes indeed. We can probably improve that by using lazy imports", "#1690 added fast start-up of the library " ]
2020-11-25T12:47:58
2021-01-13T11:31:25
2021-01-13T11:31:25
CONTRIBUTOR
null
null
null
Hi, I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant. When I load a metric, or a dataset, its fine that it takes time. The following ranges from 3 to 9 seconds: ``` python -m timeit -n 1 -r 1 'from datasets import load_dataset' ``` edit: sorry for the mis-tag, not sure how I added it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/885/timeline
null
completed
false
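On #885 above: the maintainers point to lazy imports as the likely remedy, which later landed in #1690. A generic sketch of that technique using module-level `__getattr__` (PEP 562) follows; the package and submodule names are hypothetical and this is not the actual patch that was merged.

```python
# mypkg/__init__.py  (hypothetical package, illustration only)
import importlib

# Heavy submodules are resolved on first attribute access instead of at import
# time, so `import mypkg` stays cheap.
_LAZY_SUBMODULES = {"arrow_io": "mypkg.arrow_io", "metrics": "mypkg.metrics"}


def __getattr__(name):
    if name in _LAZY_SUBMODULES:
        module = importlib.import_module(_LAZY_SUBMODULES[name])
        globals()[name] = module  # cache so later lookups skip this hook
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```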
https://api.github.com/repos/huggingface/datasets/issues/884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/884/comments
https://api.github.com/repos/huggingface/datasets/issues/884/events
https://github.com/huggingface/datasets/pull/884
749,862,034
MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1
884
Auto generate dummy data
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)", "I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_and_extract(file_url)` (where file_url is a `str`). That's because in that case the dummy_data.zip is not a folder but a single zipped file.\r\n\r\nI think we have to fix that or we can have unexpected behavior when a scripts calls `download_and_extract(file_url)` several times, since it would always point to the same dummy data file.\r\n\r\nSo I decided to change that to have a folder containing the dummy files instead but it breaks around 90 tests so I need to update 90 dummy data files to follow this scheme. I'll probably fix them tomorrow morning.\r\n\r\nWhat do you guys think ? Also cc @patrickvonplaten to make sure I understand things correctly", "Ok I changed to use the dummy_data.zip content to be a folder even for single url calls to `dl_manager.download_and_extract`. Therefore the automatic dummy data generation tool works for most datasets now.\r\n\r\nTo avoid having to change all the old dummy_data.zip files I added backward compatiblity. \r\n\r\nThe only test failing is `tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xcopa`\r\nIt is expected to fail since I had modify its dummy data structure that was wrong. It was causing issue with backward compatibility. It will be fixed as soon as this PR is merged" ]
2020-11-24T16:31:34
2020-11-26T14:18:47
2020-11-26T14:18:46
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/884", "html_url": "https://github.com/huggingface/datasets/pull/884", "diff_url": "https://github.com/huggingface/datasets/pull/884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/884.patch", "merged_at": "2020-11-26T14:18:46" }
When adding a new dataset to the library, dummy data creation can take some time. To make things easier I added a command line tool that automatically generates dummy data when possible. The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml. Here are some examples: ``` python datasets-cli dummy_data ./datasets/snli --auto_generate python datasets-cli dummy_data ./datasets/squad --auto_generate --json_field data python datasets-cli dummy_data ./datasets/iwslt2017 --auto_generate --xml_tag seg --match_text_files "train*" --n_lines 15 # --xml_tag seg => each sample corresponds to a "seg" tag in the xml tree # --match_text_files "train*" => also match text files that don't have a proper text file extension (no suffix like ".txt" for example) # --n_lines 15 => some text files have headers so we have to use at least 15 lines ``` and here is the command usage: ``` usage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate] [--n_lines N_LINES] [--json_field JSON_FIELD] [--xml_tag XML_TAG] [--match_text_files MATCH_TEXT_FILES] [--keep_uncompressed] [--cache_dir CACHE_DIR] path_to_dataset positional arguments: path_to_dataset Path to the dataset (example: ./datasets/squad) optional arguments: -h, --help show this help message and exit --auto_generate Try to automatically generate dummy data --n_lines N_LINES Number of lines or samples to keep when auto- generating dummy data --json_field JSON_FIELD Optional, json field to read the data from when auto- generating dummy data. In the json data files, this field must point to a list of samples as json objects (ex: the 'data' field for squad-like files) --xml_tag XML_TAG Optional, xml tag name of the samples inside the xml files when auto-generating dummy data. --match_text_files MATCH_TEXT_FILES Optional, a comma separated list of file patterns that looks for line-by-line text files other than *.txt or *.csv. Example: --match_text_files *.label --keep_uncompressed Don't compress the dummy data folders when auto- generating dummy data. Useful for debugging for to do manual adjustements before compressing. --cache_dir CACHE_DIR Cache directory to download and cache files when auto- generating dummy data ``` The command generates all the necessary `dummy_data.zip` files (one per config). How it works: - it runs the split_generators() method of the dataset script to download the original data files - when downloading it records a mapping between the downloaded files and the corresponding expected dummy data files paths - then for each data file it creates the dummy data file keeping only the first samples (the strategy depends on the type of file) - finally it compresses the dummy data folders into dummy_zip files ready for dataset tests Let me know if that makes sense or if you have ideas to improve this tool ! I also added a unit test.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/884/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/883/comments
https://api.github.com/repos/huggingface/datasets/issues/883/events
https://github.com/huggingface/datasets/issues/883
749,750,801
MDU6SXNzdWU3NDk3NTA4MDE=
883
Downloading/caching only a part of a datasets' dataset.
{ "login": "SapirWeissbuch", "id": 44585792, "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SapirWeissbuch", "html_url": "https://github.com/SapirWeissbuch", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "Not at the moment but we could likely support this feature.", "?", "I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources." ]
2020-11-24T14:25:18
2020-11-27T13:51:55
null
NONE
null
null
null
Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/883/timeline
null
null
false
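On #883 above: selecting a single split at load time is supported, although, as the maintainers note, it does not avoid downloading and preparing the full dataset, which is what the issue actually asks for. A sketch, assuming the prepared natural_questions data is available from the hub:

```python
from datasets import load_dataset

# Returns only the validation split; the underlying download/preparation step
# still covers the whole dataset, so this does not reduce disk or cache usage.
validation = load_dataset("natural_questions", split="validation")
print(validation)
```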
https://api.github.com/repos/huggingface/datasets/issues/882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/882/comments
https://api.github.com/repos/huggingface/datasets/issues/882/events
https://github.com/huggingface/datasets/pull/882
749,662,188
MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2
882
Update README.md
{ "login": "vaibhavad", "id": 32997732, "node_id": "MDQ6VXNlcjMyOTk3NzMy", "avatar_url": "https://avatars.githubusercontent.com/u/32997732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vaibhavad", "html_url": "https://github.com/vaibhavad", "followers_url": "https://api.github.com/users/vaibhavad/followers", "following_url": "https://api.github.com/users/vaibhavad/following{/other_user}", "gists_url": "https://api.github.com/users/vaibhavad/gists{/gist_id}", "starred_url": "https://api.github.com/users/vaibhavad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vaibhavad/subscriptions", "organizations_url": "https://api.github.com/users/vaibhavad/orgs", "repos_url": "https://api.github.com/users/vaibhavad/repos", "events_url": "https://api.github.com/users/vaibhavad/events{/privacy}", "received_events_url": "https://api.github.com/users/vaibhavad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-24T12:23:52
2021-01-29T10:41:07
2021-01-29T10:41:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/882", "html_url": "https://github.com/huggingface/datasets/pull/882", "diff_url": "https://github.com/huggingface/datasets/pull/882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/882.patch", "merged_at": "2021-01-29T10:41:06" }
"no label" is "-" in the original dataset but "-1" in Huggingface distribution.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/882/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/881/comments
https://api.github.com/repos/huggingface/datasets/issues/881/events
https://github.com/huggingface/datasets/pull/881
749,548,107
MDExOlB1bGxSZXF1ZXN0NTI2MzQ5MDM2
881
Use GCP download url instead of tensorflow custom download for boolq
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-24T09:47:11
2020-11-24T10:12:34
2020-11-24T10:12:33
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/881", "html_url": "https://github.com/huggingface/datasets/pull/881", "diff_url": "https://github.com/huggingface/datasets/pull/881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/881.patch", "merged_at": "2020-11-24T10:12:33" }
BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket. It prevented the dataset to be downloaded twice because of a FileAlreadyExistsError. Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and use regular downloads instead and remove the tensorflow dependency. Fix #875
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/881/timeline
null
null
true
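A sketch of the pattern PR #881 above describes: dropping the custom `tf.io.gfile.copy` download in favour of plain HTTP(S) URLs handled by the download manager. The URLs and class below are placeholders, not the actual boolq script.

```python
import datasets

# Hypothetical direct-download URLs standing in for the GCS-hosted data files.
_URLS = {
    "train": "https://storage.googleapis.com/some-bucket/train.jsonl",
    "validation": "https://storage.googleapis.com/some-bucket/dev.jsonl",
}


class BoolqLikeBuilder(datasets.GeneratorBasedBuilder):
    # _info() and _generate_examples() are omitted from this sketch.

    def _split_generators(self, dl_manager):
        # Regular downloads instead of dl_manager.download_custom(..., tf.io.gfile.copy):
        # no TensorFlow dependency, and re-downloads no longer hit FileAlreadyExistsError.
        files = dl_manager.download(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": files["validation"]}),
        ]
```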
https://api.github.com/repos/huggingface/datasets/issues/880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/880/comments
https://api.github.com/repos/huggingface/datasets/issues/880/events
https://github.com/huggingface/datasets/issues/880
748,949,606
MDU6SXNzdWU3NDg5NDk2MDY=
880
Add SQA
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ", "@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:\r\n\r\n```\r\nimport pandas as pd\r\nimport ast\r\n\r\ndata = pd.read_csv(\"/content/sqa_data/random-split-1-dev.tsv\", sep='\\t')\r\n\r\ndef _parse_answer_coordinates(answer_coordinate_str):\r\n \"\"\"Parses the answer_coordinates of a question.\r\n Args:\r\n answer_coordinate_str: A string representation of a Python list of tuple\r\n strings.\r\n For example: \"['(1, 4)','(1, 3)', ...]\"\r\n \"\"\"\r\n\r\n try:\r\n answer_coordinates = []\r\n # make a list of strings\r\n coords = ast.literal_eval(answer_coordinate_str)\r\n # parse each string as a tuple\r\n for row_index, column_index in sorted(\r\n ast.literal_eval(coord) for coord in coords):\r\n answer_coordinates.append((row_index, column_index))\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\r\n \r\n return answer_coordinates\r\n\r\n\r\ndef _parse_answer_text(answer_text):\r\n \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\r\n Args:\r\n answer_text: A string representation of a Python list of strings.\r\n For example: \"[u'test', u'hello', ...]\"\r\n \"\"\"\r\n try:\r\n answer = []\r\n for value in ast.literal_eval(answer_text):\r\n answer.append(value)\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_text)\r\n\r\n return answer\r\n\r\ndata['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\r\ndata['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\r\n```\r\n\r\nHere I'm using Pandas to read in one of the TSV files (the dev set). \r\n\r\n", "Closing since SQA was added in #1566 " ]
2020-11-23T16:31:55
2020-12-23T13:58:24
2020-12-23T13:58:23
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/880/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/879/comments
https://api.github.com/repos/huggingface/datasets/issues/879/events
https://github.com/huggingface/datasets/issues/879
748,848,847
MDU6SXNzdWU3NDg4NDg4NDc=
879
boolq does not load
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.", "hey\ni do the exact same commands. for me it fails i guess might be issues with\ncaching maybe?\nthanks\nbest\nrabeeh\n\nOn Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi ! It runs on my side without issues. I tried\n>\n> from datasets import load_datasetload_dataset(\"boolq\")\n>\n> What version of datasets and tensorflow are your runnning ?\n> Also if you manage to get a minimal reproducible script (on google colab\n> for example) that would be useful.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>\n> .\n>\n", "Could you check if it works on the master branch ?\r\nYou can use `load_dataset(\"boolq\", script_version=\"master\")` to do so.\r\nWe did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881" ]
2020-11-23T14:28:28
2022-10-05T12:23:32
2022-10-05T12:23:32
CONTRIBUTOR
null
null
null
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/879/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/878/comments
https://api.github.com/repos/huggingface/datasets/issues/878/events
https://github.com/huggingface/datasets/issues/878
748,621,981
MDU6SXNzdWU3NDg2MjE5ODE=
878
Loading Data From S3 Path in Sagemaker
{ "login": "mahesh1amour", "id": 42795522, "node_id": "MDQ6VXNlcjQyNzk1NTIy", "avatar_url": "https://avatars.githubusercontent.com/u/42795522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahesh1amour", "html_url": "https://github.com/mahesh1amour", "followers_url": "https://api.github.com/users/mahesh1amour/followers", "following_url": "https://api.github.com/users/mahesh1amour/following{/other_user}", "gists_url": "https://api.github.com/users/mahesh1amour/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahesh1amour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahesh1amour/subscriptions", "organizations_url": "https://api.github.com/users/mahesh1amour/orgs", "repos_url": "https://api.github.com/users/mahesh1amour/repos", "events_url": "https://api.github.com/users/mahesh1amour/events{/privacy}", "received_events_url": "https://api.github.com/users/mahesh1amour/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "This would be a neat feature", "> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ", "It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?", "Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you guys said, \r\n\r\nI have solved the problem as follows:\r\n\r\n1. read the csv file using pandas from s3 \r\n2. Convert to dictionary key as column name and values as list column data\r\n3. convert it to Dataset using \r\n`from datasets import Dataset`\r\n`train_dataset = Dataset.from_dict(train_dict)`", "We were brainstorming around your use-case.\r\n\r\nLet's keep the issue open for now, I think this is an interesting question to think about.", "> We were brainstorming around your use-case.\r\n> \r\n> Let's keep the issue open for now, I think this is an interesting question to think about.\r\n\r\nSure thomwolf, Thanks for your concern ", "I agree it would be cool to have that feature. Also that's good to know that pandas supports this.\r\nFor the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files", "Don't get\n", "Any updates on this issue?\r\nI face a similar issue. I have many parquet files in S3 and I would like to train on them. \r\nTo be honest I even face issues with only getting the last layer embedding out of them.", "Hi dorlavie, \r\nYou can find one solution that i have mentioned above, that can help you. \r\nAnd there is one more solution also which is downloading files locally\r\n", "> Hi dorlavie,\r\n> You can find one solution that i have mentioned above, that can help you.\r\n> And there is one more solution also which is downloading files locally\r\n\r\nmahesh1amour, thanks for the fast reply\r\n\r\nUnfortunately, in my case I can not read with pandas. The dataset is too big (50GB). \r\nIn addition, due to security concerns I am not allowed to save the data locally", "@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset`\r\n\r\nboto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html)\r\n```python\r\nimport boto3\r\n\r\ns3 = boto3.client('s3')\r\ns3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')\r\n```\r\n\r\ndatasets example [documentation](https://huggingface.co/docs/datasets/loading_datasets.html)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv'])\r\n```\r\n", "Thanks @philschmid for the suggestion.\r\nAs I mentioned in the previous comment, due to security issues I can not save the data locally.\r\nI need to read it from S3 and process it directly.\r\n\r\nI guess that many other people try to train / fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?", "If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ?\r\nOr is it the use of the `boto3` library that is not allowed in your case ?", "@lhoestq yes you are correct.\r\nI am not allowed to save the \"raw text\" locally - The \"raw text\" must be saved only on S3.\r\nI am allowed to save the output of any model locally. \r\nIt doesn't matter how I do it boto3/pandas/pyarrow, it is forbidden", "@dorlavie are you using sagemaker for training too? 
Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk\r\n\r\n**sagemaker start training job**\r\n```python\r\npytorch_estimator.fit({'train':'s3://my-bucket/my-training-data','eval':'s3://my-bucket/my-evaluation-data'})\r\n```\r\n\r\n**in the train.py script**\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ntrain_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN'])\r\n```\r\n\r\nI have created an example of how to use transformers and datasets with sagemaker. \r\nhttps://github.com/philschmid/huggingface-sagemaker-example/tree/main/03_huggingface_sagemaker_trainer_with_data_from_s3\r\n\r\nThe example contains a jupyter notebook `sagemaker-example.ipynb` and an `src/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. The `src/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.\r\n\r\n" ]
2020-11-23T09:17:22
2020-12-23T09:53:08
null
NONE
null
null
null
In Sagemaker I'm trying to load the dataset from an S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I am getting the following error: `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when I try with pandas, it is able to load from S3. Does the datasets library support loading from an S3 path?
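A hedged sketch of the in-memory route described in the comments above (read the CSV straight from S3 with pandas, then build a `Dataset` without writing the raw text to disk). The bucket path is a placeholder, and `s3fs` is assumed to be installed so pandas can resolve `s3://` URLs.

```python
import pandas as pd
from datasets import Dataset

# pandas reads directly from S3 when s3fs is installed
train_df = pd.read_csv("s3://my-bucket/my-prefix/train.csv")  # placeholder path

# build an Arrow-backed Dataset from the in-memory dataframe
train_dataset = Dataset.from_pandas(train_df)
print(train_dataset)
```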
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/878/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/877/comments
https://api.github.com/repos/huggingface/datasets/issues/877/events
https://github.com/huggingface/datasets/issues/877
748,234,438
MDU6SXNzdWU3NDgyMzQ0Mzg=
877
DataLoader(datasets) become more and more slowly within iterations
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not", "> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s." ]
2020-11-22T12:41:10
2020-11-29T15:45:12
2020-11-29T15:45:12
NONE
null
null
null
Hello, when I loop over my dataloader, the loading speed becomes slower and slower! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do something for each line ``` In the beginning, the loading speed is around 2000it/s, but after a minute the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot!
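A small sketch of the check asked for in the comments above: time iteration over the raw dataset, without a DataLoader, to see whether the slowdown comes from the dataset itself or from the DataLoader. The path is a placeholder standing in for the same `dataset_path` as in the snippet above.

```python
from datasets import load_from_disk
from tqdm import tqdm

dataset_path = "/path/to/saved_dataset"  # placeholder, same dataset as above
dataset = load_from_disk(dataset_path)

# iterate the raw dataset directly and watch whether it/s degrades over time
for idx, example in enumerate(tqdm(dataset)):
    pass
```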
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/877/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/876/comments
https://api.github.com/repos/huggingface/datasets/issues/876/events
https://github.com/huggingface/datasets/issues/876
748,195,104
MDU6SXNzdWU3NDgxOTUxMDQ=
876
imdb dataset cannot be loaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?", "I was using version 1.1.2 and this resolved with version 1.1.3, thanks. ", "Hello,\r\nI have the same pb with 1.8.0", "Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now", "Hello,\r\nIt works fine now :) !\r\nThanks !" ]
2020-11-22T08:24:43
2021-11-26T11:07:16
2020-12-24T17:38:47
CONTRIBUTOR
null
null
null
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/876/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/875/comments
https://api.github.com/repos/huggingface/datasets/issues/875/events
https://github.com/huggingface/datasets/issues/875
748,194,311
MDU6SXNzdWU3NDgxOTQzMTE=
875
bug in boolq dataset loading
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just opened a PR to fix this.\r\nThanks for reporting !" ]
2020-11-22T08:18:34
2020-11-24T10:12:33
2020-11-24T10:12:33
CONTRIBUTOR
null
null
null
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> datasets.load_dataset("boolq") cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets Using custom data configuration default Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11... cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/875/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
2020-11-22T08:09:36
2020-11-27T13:56:42
2020-11-27T13:56:42
CONTRIBUTOR
null
null
null
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dailymail', '3.0.0') gives a 'Not a directory' error
{ "login": "vishal-burman", "id": 19861874, "node_id": "MDQ6VXNlcjE5ODYxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishal-burman", "html_url": "https://github.com/vishal-burman", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions", "organizations_url": "https://api.github.com/users/vishal-burman/orgs", "repos_url": "https://api.github.com/users/vishal-burman/repos", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "received_events_url": "https://api.github.com/users/vishal-burman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?", "Sometimes happens. Try in a while", "It is working now, thank you. ", "Has anyone solved this ? I still get this error ", "> atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> \r\n> NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> \r\n> Can someone please take a look ?\r\n\r\n2 short-term workarounds:\r\n\r\n1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n\r\nEither method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.", "experience the same problem, ccdv/cnn_dailymail not working either. \r\n\r\nSolve this problem by installing datasets library from the master branch:\r\npython -m pip install git+https://github.com/huggingface/datasets.git@master", "Seem to be getting this again even with 1.18.4. 
I believe it worked yesterday.", "Hitting this one as well.", ">Hitting this one as well.\r\n\r\nHas anyone solved this ? I still get this error", "@yoheimiyamoto The solution provided by @davidshinn (i.e. `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`) worked for me." ]
2020-11-21T06:30:45
2022-05-05T07:19:59
2020-11-22T12:18:05
NONE
null
null
null
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
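A hedged sketch of the two short-term workarounds collected in the comments above: load the community mirror, or retry the original source with a clean cache. The `ccdv/cnn_dailymail` dataset id comes from that discussion; its continued availability is an assumption.

```python
from datasets import load_dataset

# Option 1: community mirror mentioned in the thread
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0")

# Option 2: retry the original source, ignoring any partially downloaded cache
# dataset = load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload")
```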
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/872/comments
https://api.github.com/repos/huggingface/datasets/issues/872/events
https://github.com/huggingface/datasets/pull/872
747,653,697
MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx
872
Add IndicGLUE dataset and Metrics
{ "login": "sumanthd17", "id": 28291870, "node_id": "MDQ6VXNlcjI4MjkxODcw", "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumanthd17", "html_url": "https://github.com/sumanthd17", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "repos_url": "https://api.github.com/users/sumanthd17/repos", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "thanks ! merging now" ]
2020-11-20T17:09:34
2020-11-25T17:01:11
2020-11-25T15:26:07
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/872", "html_url": "https://github.com/huggingface/datasets/pull/872", "diff_url": "https://github.com/huggingface/datasets/pull/872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/872.patch", "merged_at": "2020-11-25T15:26:07" }
Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/871/comments
https://api.github.com/repos/huggingface/datasets/issues/871/events
https://github.com/huggingface/datasets/issues/871
747,470,136
MDU6SXNzdWU3NDc0NzAxMzY=
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)", "closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks " ]
2020-11-20T12:56:24
2020-12-12T21:16:32
2020-12-12T21:16:32
CONTRIBUTOR
null
null
null
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/871/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/870/comments
https://api.github.com/repos/huggingface/datasets/issues/870/events
https://github.com/huggingface/datasets/issues/870
747,021,996
MDU6SXNzdWU3NDcwMjE5OTY=
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
{ "login": "jncasey", "id": 31020859, "node_id": "MDQ6VXNlcjMxMDIwODU5", "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jncasey", "html_url": "https://github.com/jncasey", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}", "starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jncasey/subscriptions", "organizations_url": "https://api.github.com/users/jncasey/orgs", "repos_url": "https://api.github.com/users/jncasey/repos", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "received_events_url": "https://api.github.com/users/jncasey/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)", "Resolved via #1913." ]
2020-11-19T23:51:31
2022-06-01T15:25:53
2022-06-01T15:25:52
NONE
null
null
null
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
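A minimal sketch of what the proposed flag could look like; the name `keep_linebreaks` and the helper shape are assumptions for illustration only, not the implementation that eventually landed via #1913.

```python
def split_batch(batch: str, keep_linebreaks: bool = False) -> list:
    # splitlines(True) keeps the trailing "\n" / "\r\n" on each line,
    # which is the one-character change described above
    return batch.splitlines(keep_linebreaks)

assert split_batch("a\nb\n") == ["a", "b"]
assert split_batch("a\nb\n", keep_linebreaks=True) == ["a\n", "b\n"]
```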
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/870/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/869/comments
https://api.github.com/repos/huggingface/datasets/issues/869/events
https://github.com/huggingface/datasets/pull/869
746,495,711
MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw
869
Update ner datasets infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ ":+1: Thanks for fixing it!" ]
2020-11-19T11:28:03
2020-11-19T14:14:18
2020-11-19T14:14:17
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/869", "html_url": "https://github.com/huggingface/datasets/pull/869", "diff_url": "https://github.com/huggingface/datasets/pull/869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/869.patch", "merged_at": "2020-11-19T14:14:17" }
Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel). I also fixed the ner types of conll2003.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/869/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/868/comments
https://api.github.com/repos/huggingface/datasets/issues/868/events
https://github.com/huggingface/datasets/pull/868
745,889,882
MDExOlB1bGxSZXF1ZXN0NTIzMzc2MzQ3
868
Consistent metric outputs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 4190228726, "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate", "name": "transfer-to-evaluate", "color": "E3165C", "default": false, "description": "" } ]
open
false
null
[]
null
[ "I keep this PR in stand-by for next week's datasets sprint. If the next release is 2.0.0 then we can include it given that it's breaking for many metrics" ]
2020-11-18T18:05:59
2022-09-23T08:27:37
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/868", "html_url": "https://github.com/huggingface/datasets/pull/868", "diff_url": "https://github.com/huggingface/datasets/pull/868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/868.patch", "merged_at": null }
To automate the use of metrics, they should return consistent outputs. In particular I'm working on adding a conversion of metrics to keras metrics. To achieve this we need two things: - have each metric return dictionaries of string -> floats, since each keras metric should return one float - define in the metric info the different fields of the output dictionary In this PR I'm adding these two features. I also fixed a few bugs in some metrics. #867 needs to be merged first
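A small, hedged illustration of the output contract described above: every metric returns a flat dict of string -> float, so a Keras-style wrapper only has to pick one field. `load_metric` is the loading API of that era; the wrapper function is purely illustrative.

```python
from datasets import load_metric

metric = load_metric("accuracy")
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # a dict of string -> float, e.g. {"accuracy": 0.75}

def as_single_float(output: dict, field: str = "accuracy") -> float:
    # what a keras metric needs: exactly one float per computed output
    return float(output[field])
```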
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/868/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/867/comments
https://api.github.com/repos/huggingface/datasets/issues/867/events
https://github.com/huggingface/datasets/pull/867
745,773,955
MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4
867
Fix some metrics feature types
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2020-11-18T15:46:11
2020-11-19T17:35:58
2020-11-19T17:35:57
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/867", "html_url": "https://github.com/huggingface/datasets/pull/867", "diff_url": "https://github.com/huggingface/datasets/pull/867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/867.patch", "merged_at": "2020-11-19T17:35:57" }
Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics: - accuracy - precision - recall - f1 I also added the sklearn citation and used keyword arguments to remove future warnings
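For reference, a hedged sketch of how a metric declares its input features with the valid `int32` value type; the exact `_info` body of each affected metric may differ.

```python
import datasets

features = datasets.Features(
    {
        "predictions": datasets.Value("int32"),  # "int" is not a valid pyarrow dtype
        "references": datasets.Value("int32"),
    }
)
```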
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/867/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "login": "jchwenger", "id": 34098722, "node_id": "MDQ6VXNlcjM0MDk4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchwenger", "html_url": "https://github.com/jchwenger", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}", "starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions", "organizations_url": "https://api.github.com/users/jchwenger/orgs", "repos_url": "https://api.github.com/users/jchwenger/repos", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "received_events_url": "https://api.github.com/users/jchwenger/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
2020-11-18T14:40:54
2020-11-18T15:01:30
2020-11-18T15:01:30
NONE
null
null
null
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, which could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you already offer the "colossal" Common Crawl dataset, but this one has the advantage of being available as many subcorpora for different languages.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/865/comments
https://api.github.com/repos/huggingface/datasets/issues/865/events
https://github.com/huggingface/datasets/issues/865
745,430,497
MDU6SXNzdWU3NDU0MzA0OTc=
865
Have Trouble importing `datasets`
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise." ]
2020-11-18T08:04:41
2020-11-18T08:16:35
2020-11-18T08:16:35
CONTRIBUTOR
null
null
null
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
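A hedged sketch of one way to sidestep the failing `os.makedirs` call: point the Hugging Face cache at an existing, writable directory before importing `datasets`. `HF_HOME` is the umbrella cache variable; whether this particular version honours it here is an assumption, so treat the snippet as a diagnostic sketch rather than a confirmed fix.

```python
import os

# must be set before importing datasets, because the cache paths are
# resolved at import time in file_utils.py
hf_home = "/tmp/hf_home"  # hypothetical writable location
os.makedirs(hf_home, exist_ok=True)
os.environ["HF_HOME"] = hf_home

import datasets
print(datasets.__version__)
```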
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/865/timeline
null
completed
false