Column schema:

| Column | Type | Details |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.62B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-5.62k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | |
| state | string | 1 distinct value |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 2 distinct values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/395/comments
https://api.github.com/repos/huggingface/datasets/issues/395/events
https://github.com/huggingface/datasets/issues/395
657,454,983
MDU6SXNzdWU2NTc0NTQ5ODM=
395
Memory issue when doing select
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
"2020-07-15T15:43:38"
"2020-07-16T08:07:31"
"2020-07-16T08:07:31"
MEMBER
null
As noticed in #389, the following code loads the entire wikipedia in memory.

```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```

This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function together with all the wikipedia data. This is not the case with `.map` or `.filter`. However, methods that are based on `.select`, such as `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
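A minimal, self-contained sketch of the failure mode described above (it does not use the `nlp`/`datasets` internals; `keep` and `big_payload` are made-up names): pickling a callable that carries a reference to a large in-memory object serializes the whole object with it.

```python
import pickle
from functools import partial

def keep(indices, data):
    # Stand-in for a function that only needs a handful of indices.
    return [data[i] for i in indices]

big_payload = list(range(1_000_000))  # stand-in for a large in-memory table

without_data = pickle.dumps(partial(keep, [0]))
with_data = pickle.dumps(partial(keep, [0], big_payload))

# The second blob embeds the entire payload, not just a reference to the function.
print(len(without_data), len(with_data))
```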
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/395/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/394/comments
https://api.github.com/repos/huggingface/datasets/issues/394/events
https://github.com/huggingface/datasets/pull/394
657,425,548
MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0
394
Remove remaining nested dict
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-15T15:05:52"
"2020-07-16T07:39:52"
"2020-07-16T07:39:51"
CONTRIBUTOR
null
This PR deletes the remaining unnecessary nested dict #378
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/394/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/394", "html_url": "https://github.com/huggingface/datasets/pull/394", "diff_url": "https://github.com/huggingface/datasets/pull/394.diff", "patch_url": "https://github.com/huggingface/datasets/pull/394.patch", "merged_at": "2020-07-16T07:39:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/393/comments
https://api.github.com/repos/huggingface/datasets/issues/393/events
https://github.com/huggingface/datasets/pull/393
657,330,911
MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz
393
Fix extracted files directory for the DownloadManager
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-15T12:59:55"
"2020-07-17T17:02:16"
"2020-07-17T17:02:14"
MEMBER
null
The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted.
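For reference, the resulting cache layout described above would look roughly like this (placeholder names, not actual file names):

```
<cache_dir>/
└── downloads/
    ├── <downloaded files>
    └── extracted/
        └── <extracted archives>
```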
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/393/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/393", "html_url": "https://github.com/huggingface/datasets/pull/393", "diff_url": "https://github.com/huggingface/datasets/pull/393.diff", "patch_url": "https://github.com/huggingface/datasets/pull/393.patch", "merged_at": "2020-07-17T17:02:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/392
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/392/comments
https://api.github.com/repos/huggingface/datasets/issues/392/events
https://github.com/huggingface/datasets/pull/392
657,313,738
MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx
392
Style change detection
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-15T12:32:14"
"2020-07-21T13:18:36"
"2020-07-17T17:13:23"
CONTRIBUTOR
null
Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.

- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now).
- I've converted the integer 0,1 values to a boolean.
- Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/392/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/392", "html_url": "https://github.com/huggingface/datasets/pull/392", "diff_url": "https://github.com/huggingface/datasets/pull/392.diff", "patch_url": "https://github.com/huggingface/datasets/pull/392.patch", "merged_at": "2020-07-17T17:13:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/390/comments
https://api.github.com/repos/huggingface/datasets/issues/390/events
https://github.com/huggingface/datasets/pull/390
656,956,384
MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3
390
Concatenate datasets
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-14T23:24:37"
"2020-07-22T09:49:58"
"2020-07-22T09:49:58"
CONTRIBUTOR
null
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.

This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.

Usage:

```python
from nlp import Dataset, load_dataset

data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
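At the Arrow level, this kind of concatenation can be done without copying the row data; a minimal pyarrow illustration of that mechanism (an assumption about the approach, not the PR's actual code):

```python
import pyarrow as pa

t1 = pa.table({"id": [0, 1, 2]})
t2 = pa.table({"id": [3, 4, 5]})

# Tables with identical schemas concatenate into chunked columns without copying rows.
combined = pa.concat_tables([t1, t2])
print(combined.num_rows)  # 6
```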
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/390/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/390", "html_url": "https://github.com/huggingface/datasets/pull/390", "diff_url": "https://github.com/huggingface/datasets/pull/390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/390.patch", "merged_at": "2020-07-22T09:49:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/389/comments
https://api.github.com/repos/huggingface/datasets/issues/389/events
https://github.com/huggingface/datasets/pull/389
656,921,768
MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5
389
Fix pickling of SplitDict
{ "login": "mitchellgordon95", "id": 7490438, "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mitchellgordon95", "html_url": "https://github.com/mitchellgordon95", "followers_url": "https://api.github.com/users/mitchellgordon95/followers", "following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}", "gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}", "starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions", "organizations_url": "https://api.github.com/users/mitchellgordon95/orgs", "repos_url": "https://api.github.com/users/mitchellgordon95/repos", "events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}", "received_events_url": "https://api.github.com/users/mitchellgordon95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-14T21:53:39"
"2020-08-04T14:38:10"
"2020-08-04T14:38:10"
CONTRIBUTOR
null
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:

```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
    ...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```

However, upon unpickling the dataset via torch.load(...), this error is raised:

```
ValueError("Cannot add elem. Use .add() instead.")
```

On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class. The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.

Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`

I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
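A standalone sketch of the workaround pattern described above (illustrative names like `GuardedDict`, not the actual `SplitDict` code): a dict subclass that forbids `__setitem__` can give pickle an explicit `__reduce__` recipe so that unpickling never goes through item assignment.

```python
import pickle

class GuardedDict(dict):
    """Dict subclass that only allows insertion through .add()."""

    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # Rebuild from a plain dict so unpickling bypasses __setitem__.
        return (_rebuild_guarded_dict, (dict(self),))

def _rebuild_guarded_dict(items):
    obj = GuardedDict()
    for key, value in items.items():
        obj.add(key, value)
    return obj

d = GuardedDict()
d.add("train", 100)
restored = pickle.loads(pickle.dumps(d))
assert restored == {"train": 100}
```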
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/389/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/389", "html_url": "https://github.com/huggingface/datasets/pull/389", "diff_url": "https://github.com/huggingface/datasets/pull/389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/389.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/388/comments
https://api.github.com/repos/huggingface/datasets/issues/388/events
https://github.com/huggingface/datasets/issues/388
656,707,497
MDU6SXNzdWU2NTY3MDc0OTc=
388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDownloading: 2%|▉ | 40.9M/2.37G [04:48<5:03:06, 128kB/s]\r\n`\r\nCould we just download a specific subdataset in 'wmt14', such as 'newstest14'? ", "> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18\r\n\r\nThe original source for the files may provide slow download speeds.\r\nWe can probably host these files ourselves.\r\n\r\n> When trying to download wmt17 zh-en, I got the following error:\r\n> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz\r\n\r\nLooks like the file`UNv1.0.en-zh.tar.gz` is missing, or the url changed. We need to fix that\r\n\r\n> Could we just download a specific subdataset in 'wmt14', such as 'newstest14'?\r\n\r\nRight now I don't think it's possible. Maybe @patrickvonplaten knows more about it\r\n", "Yeah, the download speed is sadly always extremely slow :-/. \r\nI will try to check out the `wmt17 zh-en` bug :-) ", "Maybe this can be used - https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ", "These issues seem to be fixed now." ]
"2020-07-14T15:36:41"
"2022-10-04T18:01:28"
"2022-10-04T18:01:28"
NONE
null
1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:

```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```

The code runs but the download speed is **extremely slow**, the same behaviour is not observed on `wmt16` and `wmt18`.

2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/388/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/387/comments
https://api.github.com/repos/huggingface/datasets/issues/387/events
https://github.com/huggingface/datasets/issues/387
656,361,357
MDU6SXNzdWU2NTYzNjEzNTc=
387
Conversion through to_pandas output numpy arrays for lists instead of python objects
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe we can have to_pydict/to_pylist as the default and use to_numpy or to_pandas when the format (set by `set_format`) is 'numpy' or 'pandas'" ]
"2020-07-14T06:24:01"
"2020-07-17T11:37:00"
"2020-07-17T11:37:00"
MEMBER
null
In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of python objects. Here is an example:

```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([  101,  7277,  2180,  5303,  4806,  1117,  1711,   117,  2292,  1119,  1270,   107,  1103,  7737,   107,   117,  1104,  9938,  4267, 12223, 21811,  1117,  2554,   119,   102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]}
>>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0])
<class 'numpy.ndarray'>
>>> dataset._data.slice(key, 1).to_pydict()
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
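A minimal standalone pyarrow snippet reproducing the same behaviour outside the library (assumed set-up, not the `nlp` code):

```python
import pyarrow as pa

table = pa.table({"input_ids": [[101, 7277, 2180], [101, 2292]]})

as_pandas = table.to_pandas().to_dict("list")
as_python = table.to_pydict()

print(type(as_pandas["input_ids"][0]))  # <class 'numpy.ndarray'>
print(type(as_python["input_ids"][0]))  # <class 'list'>
```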
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/387/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/386/comments
https://api.github.com/repos/huggingface/datasets/issues/386/events
https://github.com/huggingface/datasets/pull/386
655,839,067
MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4
386
Update dataset loading and features - Add TREC dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-13T13:10:18"
"2020-07-16T08:17:58"
"2020-07-16T08:17:58"
MEMBER
null
This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is usually outdated). This makes it in particular easier to iterate when writing a new dataset loading script.
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept lists, numpy arrays and PyTorch/TensorFlow tensors (see the sketch below).
- add the TREC-6 dataset
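As an illustration of accepting mixed input types in `str2int`/`int2str`, here is a minimal sketch under assumed semantics (generic label names; not the actual `ClassLabel` implementation):

```python
import numpy as np

names = ["DESC", "ENTY", "ABBR", "HUM", "LOC", "NUM"]  # e.g. coarse TREC-6 labels

def _as_list(values):
    # numpy arrays and torch tensors expose .tolist(); bare scalars are wrapped.
    if hasattr(values, "tolist"):
        values = values.tolist()
    if isinstance(values, (list, tuple)):
        return list(values)
    return [values]

def int2str(values):
    return [names[int(i)] for i in _as_list(values)]

def str2int(values):
    return [names.index(str(s)) for s in _as_list(values)]

print(int2str(np.array([0, 3])))  # ['DESC', 'HUM']
print(str2int(["NUM", "LOC"]))    # [5, 4]
```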
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/386/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/386", "html_url": "https://github.com/huggingface/datasets/pull/386", "diff_url": "https://github.com/huggingface/datasets/pull/386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/386.patch", "merged_at": "2020-07-16T08:17:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/385/comments
https://api.github.com/repos/huggingface/datasets/issues/385/events
https://github.com/huggingface/datasets/pull/385
655,663,997
MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5
385
Remove unnecessary nested dict
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-13T08:46:23"
"2020-07-15T11:27:38"
"2020-07-15T10:03:53"
CONTRIBUTOR
null
This PR is removing the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated:
- MLQA
- RACE

Will be adding more if necessary. #378
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/385/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/385", "html_url": "https://github.com/huggingface/datasets/pull/385", "diff_url": "https://github.com/huggingface/datasets/pull/385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/385.patch", "merged_at": "2020-07-15T10:03:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/383/comments
https://api.github.com/repos/huggingface/datasets/issues/383/events
https://github.com/huggingface/datasets/pull/383
655,291,201
MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky
383
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
{ "login": "gaguilar", "id": 5833357, "node_id": "MDQ6VXNlcjU4MzMzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaguilar", "html_url": "https://github.com/gaguilar", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "repos_url": "https://api.github.com/users/gaguilar/repos", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-11T22:35:20"
"2020-07-16T16:19:46"
"2020-07-16T16:19:46"
CONTRIBUTOR
null
Hi,

First of all, this library is really cool! Thanks for putting all of this together!

This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):

> 1. Why do we need LinCE?
> LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details). Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.

The data comes from social media and here's the summary table of tasks per language pair:

| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |

The tasks are as follows:

* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis

With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.

## Usage

For Spanish-English LID, we can load the data as follows:

```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
    print(data[split])
```

Here's the output:

```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```

Here's the list of shortcut names for every dataset available in LinCE:

* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`

All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features

Here is how the features look in the case of language identification (LID) tasks:

| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |

For part-of-speech (POS) tagging:

| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |

For named entity recognition (NER):

| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |

**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.

For sentiment analysis (SA):

| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/383/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/383", "html_url": "https://github.com/huggingface/datasets/pull/383", "diff_url": "https://github.com/huggingface/datasets/pull/383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/383.patch", "merged_at": "2020-07-16T16:19:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/382/comments
https://api.github.com/repos/huggingface/datasets/issues/382/events
https://github.com/huggingface/datasets/issues/382
655,290,482
MDU6SXNzdWU2NTUyOTA0ODI=
382
1080
{ "login": "saq194", "id": 60942503, "node_id": "MDQ6VXNlcjYwOTQyNTAz", "avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saq194", "html_url": "https://github.com/saq194", "followers_url": "https://api.github.com/users/saq194/followers", "following_url": "https://api.github.com/users/saq194/following{/other_user}", "gists_url": "https://api.github.com/users/saq194/gists{/gist_id}", "starred_url": "https://api.github.com/users/saq194/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saq194/subscriptions", "organizations_url": "https://api.github.com/users/saq194/orgs", "repos_url": "https://api.github.com/users/saq194/repos", "events_url": "https://api.github.com/users/saq194/events{/privacy}", "received_events_url": "https://api.github.com/users/saq194/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-11T22:29:07"
"2020-07-11T22:49:38"
"2020-07-11T22:49:38"
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/382/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/381/comments
https://api.github.com/repos/huggingface/datasets/issues/381/events
https://github.com/huggingface/datasets/issues/381
655,277,119
MDU6SXNzdWU2NTUyNzcxMTk=
381
NLp
{ "login": "Spartanthor", "id": 68147610, "node_id": "MDQ6VXNlcjY4MTQ3NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spartanthor", "html_url": "https://github.com/Spartanthor", "followers_url": "https://api.github.com/users/Spartanthor/followers", "following_url": "https://api.github.com/users/Spartanthor/following{/other_user}", "gists_url": "https://api.github.com/users/Spartanthor/gists{/gist_id}", "starred_url": "https://api.github.com/users/Spartanthor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Spartanthor/subscriptions", "organizations_url": "https://api.github.com/users/Spartanthor/orgs", "repos_url": "https://api.github.com/users/Spartanthor/repos", "events_url": "https://api.github.com/users/Spartanthor/events{/privacy}", "received_events_url": "https://api.github.com/users/Spartanthor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-11T20:50:14"
"2020-07-11T20:50:39"
"2020-07-11T20:50:39"
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/381/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/378/comments
https://api.github.com/repos/huggingface/datasets/issues/378/events
https://github.com/huggingface/datasets/issues/378
655,226,316
MDU6SXNzdWU2NTUyMjYzMTY=
378
[dataset] Structure of MLQA seems unecessary nested
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?", "You're right, I think we don't need to use the nested dictionary. \r\n" ]
"2020-07-11T15:16:08"
"2020-07-15T16:17:20"
"2020-07-15T16:17:20"
MEMBER
null
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97

Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?

```python
features=nlp.Features(
    {
        "context": nlp.Value("string"),
        "questions": nlp.features.Sequence({"question": nlp.Value("string")}),
        "answers": nlp.features.Sequence(
            {"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),}
        ),
        "ids": nlp.features.Sequence({"idx": nlp.Value("string")})
```
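For comparison, the un-nested form being proposed would look roughly like this (a sketch of the direction discussed, not the final schema):

```python
import nlp

features = nlp.Features(
    {
        "context": nlp.Value("string"),
        # Sequence of plain values instead of a single-key nested dict:
        "questions": nlp.features.Sequence(nlp.Value("string")),
        "answers": nlp.features.Sequence(
            {"text": nlp.Value("string"), "answer_start": nlp.Value("int32")}
        ),
        "ids": nlp.features.Sequence(nlp.Value("string")),
    }
)
```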
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/378/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/377/comments
https://api.github.com/repos/huggingface/datasets/issues/377/events
https://github.com/huggingface/datasets/issues/377
655,215,790
MDU6SXNzdWU2NTUyMTU3OTA=
377
Iyy!!!
{ "login": "ajinomoh", "id": 68154535, "node_id": "MDQ6VXNlcjY4MTU0NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ajinomoh", "html_url": "https://github.com/ajinomoh", "followers_url": "https://api.github.com/users/ajinomoh/followers", "following_url": "https://api.github.com/users/ajinomoh/following{/other_user}", "gists_url": "https://api.github.com/users/ajinomoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ajinomoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajinomoh/subscriptions", "organizations_url": "https://api.github.com/users/ajinomoh/orgs", "repos_url": "https://api.github.com/users/ajinomoh/repos", "events_url": "https://api.github.com/users/ajinomoh/events{/privacy}", "received_events_url": "https://api.github.com/users/ajinomoh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-11T14:11:07"
"2020-07-11T14:30:51"
"2020-07-11T14:30:51"
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/377/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/376/comments
https://api.github.com/repos/huggingface/datasets/issues/376/events
https://github.com/huggingface/datasets/issues/376
655,047,826
MDU6SXNzdWU2NTUwNDc4MjY=
376
to_pandas conversion doesn't always work
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387", "Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets use that).\r\nIt can cause issues when using dataset transforms like `filter` for example" ]
"2020-07-10T21:33:31"
"2022-10-04T18:05:39"
"2022-10-04T18:05:39"
MEMBER
null
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373.

```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data')
>>> squad['train']
Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)
>>> squad['train'][0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__
    format_kwargs=self._format_kwargs,
  File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem
    outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list"))
  File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas
  File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas
  File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager
    blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
  File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks
    list(extension_columns.keys()))
  File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks
  File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
```

cc @lhoestq would we have a way to detect this from the schema maybe?
Here is the schema for this pretty complex JSON:

```python
>>> squad['train'].schema
title: string
paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>
  child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
      child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>
          child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>
              child 0, question: string
              child 1, id: string
              child 2, answers: list<item: struct<text: string, answer_start: int64>>
                  child 0, item: struct<text: string, answer_start: int64>
                      child 0, text: string
                      child 1, answer_start: int64
              child 3, is_impossible: bool
              child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>
                  child 0, item: struct<text: string, answer_start: int64>
                      child 0, text: string
                      child 1, answer_start: int64
      child 1, context: string
```
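As one possible answer to the detection question above, a minimal sketch of spotting `list<struct<...>>` columns in a pyarrow schema before attempting `to_pandas` (an illustrative helper, not part of the library):

```python
import pyarrow as pa

def columns_with_list_of_struct(schema):
    """Return the column names whose type is list<struct<...>>."""
    flagged = []
    for field in schema:
        if pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):
            flagged.append(field.name)
    return flagged

schema = pa.schema([
    ("title", pa.string()),
    ("paragraphs", pa.list_(pa.struct([("context", pa.string())]))),
])
print(columns_with_list_of_struct(schema))  # ['paragraphs']
```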
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/376/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/375/comments
https://api.github.com/repos/huggingface/datasets/issues/375/events
https://github.com/huggingface/datasets/issues/375
655,023,307
MDU6SXNzdWU2NTUwMjMzMDc=
375
TypeError when computing bertscore
{ "login": "willywsm1013", "id": 13269577, "node_id": "MDQ6VXNlcjEzMjY5NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willywsm1013", "html_url": "https://github.com/willywsm1013", "followers_url": "https://api.github.com/users/willywsm1013/followers", "following_url": "https://api.github.com/users/willywsm1013/following{/other_user}", "gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}", "starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions", "organizations_url": "https://api.github.com/users/willywsm1013/orgs", "repos_url": "https://api.github.com/users/willywsm1013/repos", "events_url": "https://api.github.com/users/willywsm1013/events{/privacy}", "received_events_url": "https://api.github.com/users/willywsm1013/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_layers)\r\n 371 return sorted(list(set(l)), key=lambda x: len(x.split(\" \")))\r\n 372 \r\n--> 373 sentences = dedup_and_sort(refs + hyps)\r\n 374 embs = []\r\n 375 iter_range = range(0, len(sentences), batch_size)\r\n\r\nValueError: operands could not be broadcast together with shapes (0,) (2,)\r\n```\r\nThat's because it gets numpy arrays as input and not lists. See #387 ", "The other issue was fixed by #403 \r\n\r\nDo you still get this issue @willywsm1013 ?\r\n" ]
"2020-07-10T20:37:44"
"2022-06-01T15:15:59"
"2022-06-01T15:15:59"
NONE
null
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code:

```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```

I got the following error.

```
Traceback (most recent call last):
  File "bert_score_evaluate.py", line 16, in <module>
    print (bertscore.compute(hyps, refs, lang='en'))
  File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
    output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
  File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
    hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```

It seems like there is something wrong with get_hash() function?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/375/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/374/comments
https://api.github.com/repos/huggingface/datasets/issues/374/events
https://github.com/huggingface/datasets/pull/374
654,895,066
MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy
374
Add dataset post processing for faiss indexes
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-10T16:25:59"
"2020-07-13T13:44:03"
"2020-07-13T13:44:01"
MEMBER
null
# Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp.Dataset` object, and therefore they live in a different scope from what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` do. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change) - The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (which is focused on arrow file creation), so the post processing is run inside the `as_dataset` method. - `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources` - since we know what the post processing resources are, we can download them automatically from google storage instead of computing them if they're available (as we do for arrow files) I'd be happy to discuss these choices! ## The `wiki_dpr` index It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory. This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768. I couldn't directly use the Faiss `index_factory` as I needed to set the metric to inner product. ## Example of usage ```python import nlp dset = nlp.load_dataset( "wiki_dpr", "psgs_w100_with_nq_embeddings", split="train", with_index=True ) print(len(dset), dset.list_indexes()) # (21015300, ['embeddings']) ``` (it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too) ## Demo You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers: https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/374/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/374", "html_url": "https://github.com/huggingface/datasets/pull/374", "diff_url": "https://github.com/huggingface/datasets/pull/374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/374.patch", "merged_at": "2020-07-13T13:44:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/373/comments
https://api.github.com/repos/huggingface/datasets/issues/373/events
https://github.com/huggingface/datasets/issues/373
654,845,133
MDU6SXNzdWU2NTQ4NDUxMzM=
373
Segmentation fault when loading local JSON dataset as of #372
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "organizations_url": "https://api.github.com/users/vegarab/orgs", "repos_url": "https://api.github.com/users/vegarab/repos", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "received_events_url": "https://api.github.com/users/vegarab/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.json as paj\r\n\r\nimport nlp as hf_nlp\r\n\r\nfrom nlp import DatasetInfo, BuilderConfig, SplitGenerator, Split, utils\r\nfrom nlp.arrow_writer import ArrowWriter\r\n\r\n\r\nclass JSONDatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n\r\n```", "Yes, deleting the directory solves the error whenever I try to rerun.\r\n\r\nBy replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `/home/XXX/.cache/lib/python3.7/site-packages/nlp/datasets/json/(...)/json.py` \r\n\r\nWhen I was testing this out before the #372 PR was merged I had issues installing it properly locally. Since the `json.py` script was downloaded instead of actually using the one provided in the local install. Manually updating that file seemed to solve it, but it didn't seem like a proper solution. Especially when having to run this on a remote compute cluster with no access to that directory.", "I see, diving in the JSON file for SQuAD it's a pretty complex structure.\r\n\r\nThe best solution for you, if you have a dataset really similar to SQuAD would be to copy and modify the SQuAD data processing script. 
We will probably add soon an option to be able to specify file path to use instead of the automatic URL encoded in the script but in the meantime you can:\r\n- copy the [squad script](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) in a new script for your dataset\r\n- in the new script replace [these `urls_to_download `](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py#L99-L102) by `urls_to_download=self.config.data_files`\r\n- load the dataset with `dataset = load_dataset('path/to/your/new/script', data_files={nlp.Split.TRAIN: \"./datasets/train-v2.0.json\"})`\r\n\r\nThis way you can reuse all the processing logic of the SQuAD loading script.", "This seems like a more sensible solution! Thanks, @thomwolf. It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation.\r\n\r\nAm I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from file? Meaning that essentially with a file containing another format, that is the only function that requires re-implementation? I'm working with a lot of datasets that, due to licensing and privacy, cannot be published. As this library is so neatly integrated with the transformers library and gives easy access to public sets such as SQUAD and increased performance, it is very neat to be able to load my private sets as well. As of now, I have just been working on scripts for translating all my data into the SQUAD-format before using the json script, but I see that it might not be necessary after all. ", "Yes `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary you also need to update the `features` in the `_info`.\r\n\r\nI'm currently writing the doc so it should be easier soon to use the library and know how to add your datasets.\r\n", "Could you try to update pyarrow to >=0.17.0 @vegarab ?\r\nI don't have any segmentation fault with my version of pyarrow (0.17.1)\r\n\r\nI tested with\r\n```python\r\nimport nlp\r\ns = nlp.load_dataset(\"json\", data_files=\"train-v2.0.json\", field=\"data\", split=\"train\")\r\ns[0]\r\n# {'title': 'Normans', 'paragraphs': [{'qas': [{'question': 'In what country is Normandy located?', 'id':...\r\n```", "Also if you want to have your own dataset script, we now have a new documentation !\r\nSee here:\r\nhttps://huggingface.co/nlp/add_dataset.html", "@lhoestq \r\nFor some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file.\r\n\r\nAnyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. 
Otherwise, pyarrow complains when attempting to cast the struct:\r\n```py\r\nimport nlp\r\n>>> s = nlp.load_dataset(\"json\", data_files=\"datasets/train-v2.0.json\", field=\"data\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> s[0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 558, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 498, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File \"pyarrow/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n>>> s\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 35)\r\n```\r\n\r\nUpgrading to >=0.17.0 provides the same dataset structure, but accessing the records is possible without the same exception. \r\n\r\n", "Very happy to see some extended documentation! ", "#376 seems to be reporting the same issue as mentioned above. ", "This issue helped me a lot, thanks.\r\nHope this issue will be fixed soon." ]
"2020-07-10T15:04:25"
"2022-10-04T18:05:47"
"2022-10-04T18:05:47"
CONTRIBUTOR
null
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') ``` causes ``` Using custom data configuration default Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0... 0 tables [00:00, ? tables/s]Segmentation fault (core dumped) ``` where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/. This is consistent with other SQuAD-formatted JSON files. When attempting to load the dataset again, I get the following: ``` Using custom data configuration default Traceback (most recent call last): File "dataloader.py", line 6, in <module> 'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir os.makedirs(tmp_dir) File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete' ``` (Not sure if you wanted this in the previous issue #369 or not as it was closed.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/373/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/372/comments
https://api.github.com/repos/huggingface/datasets/issues/372/events
https://github.com/huggingface/datasets/pull/372
654,774,420
MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4
372
Make the json script more flexible
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-10T13:15:15"
"2020-07-10T14:52:07"
"2020-07-10T14:52:06"
MEMBER
null
Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file). In this case, you should indicate with `field=XXX` the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts. E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do: ```python from nlp import load_dataset dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/372/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/372", "html_url": "https://github.com/huggingface/datasets/pull/372", "diff_url": "https://github.com/huggingface/datasets/pull/372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/372.patch", "merged_at": "2020-07-10T14:52:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/371/comments
https://api.github.com/repos/huggingface/datasets/issues/371/events
https://github.com/huggingface/datasets/pull/371
654,668,242
MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw
371
Fix cached file path for metrics with different config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-10T10:02:24"
"2020-07-10T13:45:22"
"2020-07-10T13:45:20"
MEMBER
null
The config name was not taken into account when building the cached file path. This should fix #368
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/371/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/371/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/371", "html_url": "https://github.com/huggingface/datasets/pull/371", "diff_url": "https://github.com/huggingface/datasets/pull/371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/371.patch", "merged_at": "2020-07-10T13:45:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/370
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/370/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/370/comments
https://api.github.com/repos/huggingface/datasets/issues/370/events
https://github.com/huggingface/datasets/pull/370
654,304,193
MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw
370
Allow indexing Dataset via np.ndarray
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-09T19:43:15"
"2020-07-10T14:05:44"
"2020-07-10T14:05:43"
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/370/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/370", "html_url": "https://github.com/huggingface/datasets/pull/370", "diff_url": "https://github.com/huggingface/datasets/pull/370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/370.patch", "merged_at": "2020-07-10T14:05:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/369/comments
https://api.github.com/repos/huggingface/datasets/issues/369/events
https://github.com/huggingface/datasets/issues/369
654,186,890
MDU6SXNzdWU2NTQxODY4OTA=
369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "organizations_url": "https://api.github.com/users/vegarab/orgs", "repos_url": "https://api.github.com/users/vegarab/repos", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "received_events_url": "https://api.github.com/users/vegarab/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/", "I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 but still getting this error. What could cause this error?" ]
"2020-07-09T16:16:53"
"2020-12-15T23:07:22"
"2020-07-10T14:52:06"
CONTRIBUTOR
null
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False): File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` I haven't been able to find any reports of this specific pyarrow error here or elsewhere.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/369/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/368/comments
https://api.github.com/repos/huggingface/datasets/issues/368/events
https://github.com/huggingface/datasets/issues/368
654,087,251
MDU6SXNzdWU2NTQwODcyNTE=
368
load_metric can't acquire lock anymore
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`." ]
"2020-07-09T14:04:09"
"2020-07-10T13:45:20"
"2020-07-10T13:45:20"
NONE
null
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__ self.filelock.acquire(timeout=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire raise Timeout(self._lock_file) filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "examples_huggingface_nlp.py", line 268, in <module> main() File "examples_huggingface_nlp.py", line 242, in main dataset, metric = get_dataset_metric(glue_task) File "examples_huggingface_nlp.py", line 77, in get_dataset_metric metric = nlp.load_metric('glue', glue_config, experiment_id=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric **metric_init_kwargs, File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__ "Cannot acquire lock, caching file might be used by another process, " ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run. I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/368/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/367/comments
https://api.github.com/repos/huggingface/datasets/issues/367/events
https://github.com/huggingface/datasets/pull/367
654,012,984
MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz
367
Update Xtreme to add PAWS-X es
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-09T12:14:37"
"2020-07-09T12:37:11"
"2020-07-09T12:37:10"
CONTRIBUTOR
null
This PR adds the `PAWS-X.es` subset to the Xtreme dataset. See #362.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/367/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/367", "html_url": "https://github.com/huggingface/datasets/pull/367", "diff_url": "https://github.com/huggingface/datasets/pull/367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/367.patch", "merged_at": "2020-07-09T12:37:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/366/comments
https://api.github.com/repos/huggingface/datasets/issues/366/events
https://github.com/huggingface/datasets/pull/366
653,954,896
MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2
366
Add quora dataset
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-09T10:34:22"
"2020-07-13T17:35:21"
"2020-07-13T17:35:21"
CONTRIBUTOR
null
Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Implementation Notes: - I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it. - I've made the questions into a list: ```python { "questions": [ {"id":0, "text": "Is this an example question?"}, {"id":1, "text": "Is this a sample question?"}, ], ... } ``` rather than: ```python { "question1": "Is this an example question?", "question2": "Is this a sample question?" "qid0": 0 "qid1": 1 ... } ``` Not sure if this was the right call. - Can't find a good citation for this dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/366/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/366", "html_url": "https://github.com/huggingface/datasets/pull/366", "diff_url": "https://github.com/huggingface/datasets/pull/366.diff", "patch_url": "https://github.com/huggingface/datasets/pull/366.patch", "merged_at": "2020-07-13T17:35:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/365/comments
https://api.github.com/repos/huggingface/datasets/issues/365/events
https://github.com/huggingface/datasets/issues/365
653,845,964
MDU6SXNzdWU2NTM4NDU5NjQ=
365
How to augment data ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?", "Some samples in the dataset are too long, I want to divide them in several samples.", "Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.", "It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...", "Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ", "As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches." ]
"2020-07-09T07:52:37"
"2020-07-10T09:12:07"
"2020-07-10T08:22:15"
NONE
null
Is there any clean way to augment data? For now my workaround is to use batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=True) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/365/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/364/comments
https://api.github.com/repos/huggingface/datasets/issues/364/events
https://github.com/huggingface/datasets/pull/364
653,821,597
MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5
364
add MS MARCO dataset
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-09T07:11:19"
"2020-08-06T06:15:49"
"2020-08-06T06:15:48"
CONTRIBUTOR
null
This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset which were released with the original paper here https://arxiv.org/pdf/1611.09268.pdf Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/364/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/364", "html_url": "https://github.com/huggingface/datasets/pull/364", "diff_url": "https://github.com/huggingface/datasets/pull/364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/364.patch", "merged_at": "2020-08-06T06:15:48" }
true
https://api.github.com/repos/huggingface/datasets/issues/363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/363/comments
https://api.github.com/repos/huggingface/datasets/issues/363/events
https://github.com/huggingface/datasets/pull/363
653,821,172
MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy
363
Adding support for generic multi-dimensional tensors and auxiliary image data for multimodal datasets
{ "login": "eltoto1219", "id": 14030663, "node_id": "MDQ6VXNlcjE0MDMwNjYz", "avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltoto1219", "html_url": "https://github.com/eltoto1219", "followers_url": "https://api.github.com/users/eltoto1219/followers", "following_url": "https://api.github.com/users/eltoto1219/following{/other_user}", "gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}", "starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions", "organizations_url": "https://api.github.com/users/eltoto1219/orgs", "repos_url": "https://api.github.com/users/eltoto1219/repos", "events_url": "https://api.github.com/users/eltoto1219/events{/privacy}", "received_events_url": "https://api.github.com/users/eltoto1219/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-09T07:10:30"
"2020-08-24T09:59:35"
"2020-08-24T09:59:35"
CONTRIBUTOR
null
nlp/features.py: The main factory class is MultiArray; every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py src/nlp/arrow_writer.py I had to add a method for writing batches that include extension array types because despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and data to be written gets mixed up (where the pyarrow datatype in the schema only refers to it as ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look! datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py: I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ). For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"! (still working on the pretraining, just wanted to push out the new functionality sooner rather than later)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/363/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/363", "html_url": "https://github.com/huggingface/datasets/pull/363", "diff_url": "https://github.com/huggingface/datasets/pull/363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/363.patch", "merged_at": "2020-08-24T09:59:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/362/comments
https://api.github.com/repos/huggingface/datasets/issues/362/events
https://github.com/huggingface/datasets/issues/362
653,766,245
MDU6SXNzdWU2NTM3NjYyNDU=
362
[dataset subset missing] xtreme paws-x
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You're right, thanks for pointing it out. We will update it " ]
"2020-07-09T05:04:54"
"2020-07-09T12:38:42"
"2020-07-09T12:38:42"
CONTRIBUTOR
null
I tried `nlp.load_dataset('xtreme', 'PAWS-X.es')` but got a ValueError. It turns out that the subset for Spanish is missing: https://github.com/google-research-datasets/paws/tree/master/pawsx
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/362/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/362/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/361/comments
https://api.github.com/repos/huggingface/datasets/issues/361/events
https://github.com/huggingface/datasets/issues/361
653,757,376
MDU6SXNzdWU2NTM3NTczNzY=
361
🐛 [Metrics] ROUGE is non-deterministic
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, can you give a full self-contained example to reproduce this behavior?", "> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)", "> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n> \r\n> Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.\r\n> \r\n> Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differents run :\r\n> \r\n> > ['0.3350', '0.1470', '0.2329']\r\n> > ['0.3358', '0.1451', '0.2332']\r\n> \r\n> Why ROUGE is not deterministic ?\r\n\r\nThis is because of rouge's `BootstrapAggregator` that uses sampling to get confidence intervals (low, mid, high).\r\nYou can get deterministic scores per sentence pair by using\r\n```python\r\nscore = rouge.compute(rouge_types=[\"rouge1\", \"rouge2\", \"rougeL\"], use_aggregator=False)\r\n```\r\nOr you can set numpy's random seed if you still want to use the aggregator.", "Maybe we can set all the random seeds of numpy/torch etc. while running `metric.compute` ?", "We should probably indeed!", "Now if you re-run the notebook, the two printed results are the same @colanim\r\n```\r\n['0.3356', '0.1466', '0.2318']\r\n['0.3356', '0.1466', '0.2318']\r\n```\r\nHowever across sessions, the results may change (as numpy's random seed can be different). You can prevent that by setting your seed:\r\n```python\r\nrouge = nlp.load_metric('rouge', seed=42)\r\n```", "> \r\n\r\nMinor nit: Note that \"aggregator\" is misspelled in this command. Should be `use_aggregator=False`. ", "Thanks, I fixed the code snippet" ]
"2020-07-09T04:39:37"
"2022-09-09T15:20:55"
"2020-07-20T23:48:37"
NONE
null
If I run the ROUGE metric 2 times with the same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-scores for ROUGE-1, ROUGE-2, ROUGE-L in 2 different runs: > ['0.3350', '0.1470', '0.2329'] ['0.3358', '0.1451', '0.2332'] --- Why is ROUGE not deterministic?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/361/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/360/comments
https://api.github.com/repos/huggingface/datasets/issues/360/events
https://github.com/huggingface/datasets/issues/360
653,687,176
MDU6SXNzdWU2NTM2ODcxNzY=
360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.", "You're two steps ahead of me :) In my testing, it also works if `M` < `N`.\r\n\r\nA batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved.\r\n\r\nFor example,\r\n```python\r\n# Create a dummy dataset\r\ndset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")[\"test\"]\r\ndset = dset.map(lambda ex: {\"length\": len(ex[\"text\"]), \"foo\": 1})\r\n\r\n# Do an allreduce on each batch, overwriting both keys\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])], \"foo\": [1]})\r\n# Dataset(schema: {'length': 'int64', 'foo': 'int64'}, num_rows: 5)\r\n\r\n# Now attempt an allreduce without touching the `foo` key\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])]})\r\n# This fails with the error message below\r\n```\r\n\r\n```bash\r\n File \"/path/to/nlp/src/nlp/arrow_dataset.py\", line 728, in map\r\n arrow_schema = pa.Table.from_pydict(test_output).schema\r\n File \"pyarrow/io.pxi\", line 1532, in pyarrow.lib.Codec.detect\r\n File \"pyarrow/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named foo expected length 1 but got length 2\r\n```\r\n\r\nAdding the `remove_columns=[\"length\", \"foo\"]` argument to `map()` solves the issue. Leaving the above error for future visitors. Perfect, thank you!" ]
"2020-07-09T01:04:43"
"2020-07-09T19:31:51"
"2020-07-09T19:31:51"
CONTRIBUTOR
null
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero or one example. This is helpful for removing portions from the dataset. However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]` I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of examples of length `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this. My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/360/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/359/comments
https://api.github.com/repos/huggingface/datasets/issues/359/events
https://github.com/huggingface/datasets/issues/359
653,656,279
MDU6SXNzdWU2NTM2NTYyNzk=
359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", data_files=rel_datafiles)\r\n```", "The behavior I'm seeing is from the `json` script. \r\nI hacked this together to overcome the error with the `JSON` dataloader\r\n\r\n```\r\nclass DatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n # this is where the error is coming from\r\n # def parse_schema(schema, schema_dict):\r\n # for field in schema:\r\n # if pa.types.is_struct(field.type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type, schema_dict[field.name])\r\n # elif pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type.value_type, schema_dict[field.name])\r\n # else:\r\n # schema_dict[field.name] = Value(str(field.type))\r\n # \r\n # parse_schema(writer.schema, features)\r\n # self.info.features = Features(features)\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n```\r\n\r\nSo I basically just don't populate the `self.info.features` though this doesn't seem to cause any problems in my downstream applications. \r\n\r\nThe other workaround I was doing was to just use pyarrow.json to build a table and then to create the Dataset with its constructor or from_table methods. `load_dataset` has nice split logic, so I'd prefer to use that.\r\n\r\n", "Also noticed that if you for example in a loader script\r\n\r\n```\r\nfrom nlp import ArrowBasedBuilder\r\n\r\nclass MyBuilder(ArrowBasedBuilder):\r\n...\r\n\r\n```\r\nand use that in the subclass, it will be on the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_generate_examples` method... 
In the code it checks for abstract classes, but Builder and ArrowBasedBuilder aren't abstract classes; they're regular classes with `@abstract_methods`.", "Indeed this is part of a more general limitation, which is the fact that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (this also happens when a user changes the schema using `map()`; the features should be auto-generated and guessed as much as possible to keep the `features` synced with the underlying Arrow table schema).\r\n\r\nWe will try to solve this soon." ]
"2020-07-08T23:24:05"
"2020-07-10T14:52:06"
"2020-07-10T14:52:06"
NONE
null
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <module> 55 from nlp import load_dataset 56 ---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles) 58 59 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 736 schema_dict[field.name] = Value(str(field.type)) 737 --> 738 parse_schema(writer.schema, features) 739 self.info.features = Features(features) 740 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict) 734 parse_schema(field.type.value_type, schema_dict[field.name]) 735 else: --> 736 schema_dict[field.name] = Value(str(field.type)) 737 738 parse_schema(writer.schema, features) <string> in __init__(self, dtype, id, _type) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self) 55 56 def __post_init__(self): ---> 57 self.pa_type = string_to_arrow(self.dtype) 58 59 def __call__(self): ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str) 32 if str(type_str + "_") not in pa.__dict__: 33 raise ValueError( ---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. " 35 f"Please make sure to use a correct data type, see: " 36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions" ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions ``` If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to avoid calling the validate schema, the dataset can load as well.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/359/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/358/comments
https://api.github.com/repos/huggingface/datasets/issues/358/events
https://github.com/huggingface/datasets/pull/358
653,645,121
MDExOlB1bGxSZXF1ZXN0NDQ2NTI0NjQ5
358
Starting to add some real doc
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-08T22:53:03"
"2020-07-14T09:58:17"
"2020-07-14T09:58:15"
MEMBER
null
Adding a lot of documentation for: - load a dataset - explore the dataset object - process data with the dataset - add a new dataset script - share a dataset script - full package reference This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html Also: - fix a bug in `train_test_split` - update the `csv` script - add a verbose argument to the dataset processing methods Still missing: - doc for the metrics - how to directly upload a community provided dataset with the CLI - clean up more docstrings - add the `features` argument to `load_dataset` (should be another PR)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/358/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/358", "html_url": "https://github.com/huggingface/datasets/pull/358", "diff_url": "https://github.com/huggingface/datasets/pull/358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/358.patch", "merged_at": "2020-07-14T09:58:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/357/comments
https://api.github.com/repos/huggingface/datasets/issues/357/events
https://github.com/huggingface/datasets/pull/357
653,642,292
MDExOlB1bGxSZXF1ZXN0NDQ2NTIyMzU2
357
Add hashes to cnn_dailymail
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "organizations_url": "https://api.github.com/users/jbragg/orgs", "repos_url": "https://api.github.com/users/jbragg/repos", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "received_events_url": "https://api.github.com/users/jbragg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-08T22:45:21"
"2020-07-13T14:16:38"
"2020-07-13T14:16:38"
CONTRIBUTOR
null
The URL hashes are helpful for comparing results from other sources.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/357/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/357", "html_url": "https://github.com/huggingface/datasets/pull/357", "diff_url": "https://github.com/huggingface/datasets/pull/357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/357.patch", "merged_at": "2020-07-13T14:16:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/356/comments
https://api.github.com/repos/huggingface/datasets/issues/356/events
https://github.com/huggingface/datasets/pull/356
653,537,388
MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5
356
Add text dataset
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-08T19:21:53"
"2020-07-10T14:19:03"
"2020-07-10T14:19:03"
CONTRIBUTOR
null
Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text ``` but I would like a second set of eyes to ensure I did it right.
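For reference, here is a hedged sketch of pointing the same script at separate split files; the dict form of `data_files` mirrors how the builder discussed elsewhere in this thread handles string, list, and dict inputs, and the file names here are hypothetical.

```python
from nlp import load_dataset

# Hypothetical file paths; dict keys become split names.
dsets = load_dataset(
    "text",
    data_files={"train": "train.txt", "validation": "dev.txt", "test": "test.txt"},
)
print(dsets["train"][0])  # {'text': '...'} -- the script exposes a single 'text' column
```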
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/356/reactions", "total_count": 6, "+1": 2, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/356/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/356", "html_url": "https://github.com/huggingface/datasets/pull/356", "diff_url": "https://github.com/huggingface/datasets/pull/356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/356.patch", "merged_at": "2020-07-10T14:19:03" }
true
https://api.github.com/repos/huggingface/datasets/issues/355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/355/comments
https://api.github.com/repos/huggingface/datasets/issues/355/events
https://github.com/huggingface/datasets/issues/355
653,451,013
MDU6SXNzdWU2NTM0NTEwMTM=
355
can't load SNLI dataset
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or the download speed is too slow, or sometimes the files take time to be processed.", "Closing this one. Feel free to re-open if you have other questions :)", "Thank you!" ]
"2020-07-08T16:54:14"
"2020-07-18T05:15:57"
"2020-07-15T07:59:01"
CONTRIBUTOR
null
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract return self.extract(self.download(url_or_urls)) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested return function(data_struct) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda> lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/355/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/354/comments
https://api.github.com/repos/huggingface/datasets/issues/354/events
https://github.com/huggingface/datasets/pull/354
653,357,617
MDExOlB1bGxSZXF1ZXN0NDQ2MjkyMTc4
354
More faiss control
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-08T14:45:20"
"2020-07-09T09:54:54"
"2020-07-09T09:54:51"
MEMBER
null
Allow users to specify a faiss index they created themselves, as indexes can sometimes be composite, for example
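A hedged sketch of the intended usage, assuming the `custom_index` parameter introduced by this PR and building the dataset with `Dataset.from_dict` (added in #350 below); the column name and index factory string are illustrative.

```python
import faiss
import numpy as np
from nlp import Dataset

d = 8  # embedding dimension for the sketch
vectors = np.random.rand(100, d).astype("float32")
dset = Dataset.from_dict({"embeddings": vectors.tolist()})

# A composite index built by hand (HNSW over flat storage, no training needed),
# handed to the dataset instead of letting it build a default index.
custom_index = faiss.index_factory(d, "HNSW32,Flat")
dset.add_faiss_index(column="embeddings", custom_index=custom_index)

scores, examples = dset.get_nearest_examples(
    "embeddings", np.random.rand(d).astype("float32"), k=5
)
```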
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/354/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/354", "html_url": "https://github.com/huggingface/datasets/pull/354", "diff_url": "https://github.com/huggingface/datasets/pull/354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/354.patch", "merged_at": "2020-07-09T09:54:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/352/comments
https://api.github.com/repos/huggingface/datasets/issues/352/events
https://github.com/huggingface/datasets/pull/352
653,128,883
MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky
352
🐛[BugFix]fix seqeval
{ "login": "AlongWY", "id": 20281571, "node_id": "MDQ6VXNlcjIwMjgxNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/20281571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlongWY", "html_url": "https://github.com/AlongWY", "followers_url": "https://api.github.com/users/AlongWY/followers", "following_url": "https://api.github.com/users/AlongWY/following{/other_user}", "gists_url": "https://api.github.com/users/AlongWY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlongWY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlongWY/subscriptions", "organizations_url": "https://api.github.com/users/AlongWY/orgs", "repos_url": "https://api.github.com/users/AlongWY/repos", "events_url": "https://api.github.com/users/AlongWY/events{/privacy}", "received_events_url": "https://api.github.com/users/AlongWY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-08T09:12:12"
"2020-07-16T08:26:46"
"2020-07-16T08:26:46"
CONTRIBUTOR
null
Fix seqeval so it can process labels such as 'B' and 'B-ARGM-LOC'
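For context, a hedged sketch of how the fixed metric might be exercised on such labels, assuming the standard `load_metric`/`compute` interface:

```python
from nlp import load_metric

seqeval = load_metric("seqeval")

# Labels without a type suffix ('B') and with multi-part types ('B-ARGM-LOC')
# are exactly the cases this fix targets.
predictions = [["B-ARGM-LOC", "B", "O", "B-ARG0"]]
references = [["B-ARGM-LOC", "B", "O", "B-ARG0"]]

results = seqeval.compute(predictions=predictions, references=references)
```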
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/352/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/352", "html_url": "https://github.com/huggingface/datasets/pull/352", "diff_url": "https://github.com/huggingface/datasets/pull/352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/352.patch", "merged_at": "2020-07-16T08:26:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/351/comments
https://api.github.com/repos/huggingface/datasets/issues/351/events
https://github.com/huggingface/datasets/pull/351
652,424,048
MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NTE4
351
add pandas dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-07T15:38:07"
"2020-07-08T14:15:16"
"2020-07-08T14:15:15"
MEMBER
null
Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ```
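A hedged end-to-end version of the snippet above, writing a hypothetical dataframe to `df.pkl` first; the column names are illustrative.

```python
import pandas as pd
from nlp import load_dataset

# Serialize a dataframe the way the new script expects.
pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]}).to_pickle("df.pkl")

dset = load_dataset("pandas", data_files="df.pkl")["train"]
print(dset.column_names)  # ['text', 'label']
```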
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/351/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/351/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/351", "html_url": "https://github.com/huggingface/datasets/pull/351", "diff_url": "https://github.com/huggingface/datasets/pull/351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/351.patch", "merged_at": "2020-07-08T14:15:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/350/comments
https://api.github.com/repos/huggingface/datasets/issues/350/events
https://github.com/huggingface/datasets/pull/350
652,398,691
MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz
350
add from_pandas and from_dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-07T15:03:53"
"2020-07-08T14:14:33"
"2020-07-08T14:14:32"
MEMBER
null
I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so. It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values), otherwise the arrow schema is inferred from the data automatically by pyarrow. One question that I have right now: + Should we also add a `save()` method that would write the dataset to disk? Right now, if we create a `Dataset` using those two new methods, the data is kept in RAM. Then to reload it we can call the `from_file()` method.
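A minimal sketch of the two new methods described above; the `Features`/`Value` spelling for the optional `features=...` argument is an assumption for illustration.

```python
import pandas as pd
from nlp import Dataset, Features, Value

# From a dictionary (keys = columns).
dset = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# From a pandas dataframe, optionally forcing the feature types
# when pyarrow cannot infer them unambiguously.
df = pd.DataFrame({"text": ["a", "b"], "label": [0, 1]})
dset = Dataset.from_pandas(
    df, features=Features({"text": Value("string"), "label": Value("int64")})
)
```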
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/350/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/350", "html_url": "https://github.com/huggingface/datasets/pull/350", "diff_url": "https://github.com/huggingface/datasets/pull/350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/350.patch", "merged_at": "2020-07-08T14:14:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/349/comments
https://api.github.com/repos/huggingface/datasets/issues/349/events
https://github.com/huggingface/datasets/pull/349
652,231,571
MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1
349
Hyperpartisan news detection
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-07T11:06:37"
"2020-07-07T20:47:27"
"2020-07-07T14:57:11"
CONTRIBUTOR
null
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to? - The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data? - Should we always subclass `nlp.BuilderConfig`?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/349/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/349", "html_url": "https://github.com/huggingface/datasets/pull/349", "diff_url": "https://github.com/huggingface/datasets/pull/349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/349.patch", "merged_at": "2020-07-07T14:57:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/348/comments
https://api.github.com/repos/huggingface/datasets/issues/348/events
https://github.com/huggingface/datasets/pull/348
652,158,308
MDExOlB1bGxSZXF1ZXN0NDQ1MjgwNjk3
348
Add OSCAR dataset
{ "login": "pjox", "id": 635220, "node_id": "MDQ6VXNlcjYzNTIyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjox", "html_url": "https://github.com/pjox", "followers_url": "https://api.github.com/users/pjox/followers", "following_url": "https://api.github.com/users/pjox/following{/other_user}", "gists_url": "https://api.github.com/users/pjox/gists{/gist_id}", "starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjox/subscriptions", "organizations_url": "https://api.github.com/users/pjox/orgs", "repos_url": "https://api.github.com/users/pjox/repos", "events_url": "https://api.github.com/users/pjox/events{/privacy}", "received_events_url": "https://api.github.com/users/pjox/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-07T09:22:07"
"2021-05-03T22:07:08"
"2021-02-09T10:19:19"
CONTRIBUTOR
null
I don't know if the tests pass: when I run them, they try to download the whole corpus, which is around 3.5 TB compressed, and I don't have that kind of space. I'll really need some help with it 😅 Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/348/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/348/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/348", "html_url": "https://github.com/huggingface/datasets/pull/348", "diff_url": "https://github.com/huggingface/datasets/pull/348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/348.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/347/comments
https://api.github.com/repos/huggingface/datasets/issues/347/events
https://github.com/huggingface/datasets/issues/347
652,106,567
MDU6SXNzdWU2NTIxMDY1Njc=
347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ", "It should be in `xtreme.py:L755`:\r\n```python\r\n if self.config.name == \"tydiqa\" or self.config.name.startswith(\"MLQA\") or self.config.name == \"SQuAD\":\r\n with open(filepath) as f:\r\n data = json.load(f)\r\n```\r\n\r\nCould you try to add the encoding parameter:\r\n```python\r\nopen(filepath, encoding='utf-8')\r\n```", "Hello @jerryIsHere :) Did it work ?\r\nIf so we may change the dataset script to force the utf-8 encoding", "@lhoestq sorry for being that late, I found 4 copy of xtreme.py. I did the changes as what has been told to all of them.\r\nThe problem is not solved", "Could you provide a better error message so that we can make sure it comes from the opening of the `tydiqa`'s json files ?\r\n", "@lhoestq \r\nThe error message is same as before:\r\nException has occurred: UnicodeDecodeError\r\n'cp950' codec can't decode byte 0xe2 in position 111: illegal multibyte sequence\r\n File \"D:\\python\\test\\test.py\", line 3, in <module>\r\n dataset = load_dataset('xtreme', 'tydiqa')\r\n\r\n![image](https://user-images.githubusercontent.com/50871412/87748794-7c216880-c829-11ea-94f0-7caeacb4d865.png)\r\n\r\nI said that I found 4 copy of xtreme.py and add the 「, encoding='utf-8'」 parameter to the open() function\r\nthese python script was found under this directory\r\nC:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages\\nlp\\datasets\\xtreme\r\n", "Hi there !\r\nI encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\nI added ```encoding='UTF-8'``` to both lines that have ```open``` in ```imdb.py``` (108 and 114) and it worked for me.\r\nThank you !", "> Hi there !\r\n> I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\n> I added `encoding='UTF-8'` to both lines that have `open` in `imdb.py` (108 and 114) and it worked for me.\r\n> Thank you !\r\n\r\nHello !\r\nGlad you managed to fix this issue on your side.\r\nDo you mind opening a PR for IMDB ?", "> This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\n> Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\n> See issues #242 and #307\r\n\r\nSorry for not responding for about a month.\r\nI have just found that it is necessary to change / add the environment variable as what was told in #242.\r\nEverything works after I add the new environment variable and restart my PC.\r\n\r\nI think the encoding issue for windows isn't limited to the open() function call specific to few dataset, but actually in the entire library, depends on the machine / os you use.", "Since #481 we shouldn't have other issues with encodings as they need to be set to \"utf-8\" be default.\r\n\r\nClosing this one, but feel free to re-open if you gave other questions" ]
"2020-07-07T08:14:23"
"2020-09-07T14:51:45"
"2020-09-07T14:51:45"
CONTRIBUTOR
null
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to a Python source encoding issue: my PC is trying to decode the source code with the wrong encoding/decoding tools, perhaps: https://www.python.org/dev/peps/pep-0263/ I guess the error was triggered by the line `module = importlib.import_module(module_path)` at line 57 of nlp/src/nlp/load.py (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51) Any ideas? P.S. I tried the same code on Colab and it runs perfectly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/347/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/346/comments
https://api.github.com/repos/huggingface/datasets/issues/346/events
https://github.com/huggingface/datasets/pull/346
652,044,151
MDExOlB1bGxSZXF1ZXN0NDQ1MTg4MTUz
346
Add emotion dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-07T06:35:41"
"2022-05-30T15:16:44"
"2020-07-13T14:39:38"
MEMBER
null
Hello 🤗 team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)). With the current implementation, running ```bash python nlp-cli test datasets/emotion --save_infos --all_configs ``` throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace). Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`. Any pointers on what I'm doing wrong would be greatly appreciated! **Stack trace** ``` INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports. INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0... INFO:nlp.builder:Generating split train 0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490 Traceback (most recent call last): File "nlp-cli", line 37, in <module> service.run() File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run builder.download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples data = pickle.load(f) _pickle.UnpicklingError: invalid load key, '<'. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/346/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/346", "html_url": "https://github.com/huggingface/datasets/pull/346", "diff_url": "https://github.com/huggingface/datasets/pull/346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/346.patch", "merged_at": "2020-07-13T14:39:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/345/comments
https://api.github.com/repos/huggingface/datasets/issues/345/events
https://github.com/huggingface/datasets/issues/345
651,761,201
MDU6SXNzdWU2NTE3NjEyMDE=
345
Supporting documents in ELI5
{ "login": "saverymax", "id": 29262273, "node_id": "MDQ6VXNlcjI5MjYyMjcz", "avatar_url": "https://avatars.githubusercontent.com/u/29262273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saverymax", "html_url": "https://github.com/saverymax", "followers_url": "https://api.github.com/users/saverymax/followers", "following_url": "https://api.github.com/users/saverymax/following{/other_user}", "gists_url": "https://api.github.com/users/saverymax/gists{/gist_id}", "starred_url": "https://api.github.com/users/saverymax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saverymax/subscriptions", "organizations_url": "https://api.github.com/users/saverymax/orgs", "repos_url": "https://api.github.com/users/saverymax/repos", "events_url": "https://api.github.com/users/saverymax/events{/privacy}", "received_events_url": "https://api.github.com/users/saverymax/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading-support-documents-from-the-commoncrawl\r\n\r\nIn order to make the task accessible to people who may not have access to this kind of infrastructure, we suggest to use Wikipedia as a knowledge source rather than the full CommonCrawl. The following blog post shows how you can create Wikipedia support documents and get a performance that is on par with a system that uses CommonCrawl pages.\r\nhttps://yjernite.github.io/lfqa.html#task_description\r\n\r\nHope that helps, using ElasticSearch to index Wiki40b and create the documents should take about 4 hours. Let us know if you have any trouble with the blog post though!", "Hi, thanks for the quick response. The blog post is quite an interesting working example, thanks for sharing it.\r\nTwo follow-up points/questions about my original question:\r\n\r\n1. Yes, I read that the facebook team could not share the CommonCrawl b/c of licensing reasons. They state \"No, we are not allowed to host processed Reddit or CommonCrawl data,\" which indicates they could also not share the Reddit data for licensing reasons. But it seems that HuggingFace is able to share the Reddit data, so why not a subset of CommonCrawl?\r\n\r\n2. Thanks for the suggestion about ElasticSearch and Wiki40b. This is good to know about performance. I definitely could do the indexing and querying myself. What I like about the ELI5 dataset though, at least what is suggested by the paper, is that to create the dataset they had already selected the top 100 web sources and made a single support document from those. Though it doesn't appear to be too sophisticated an approach, having a single support document pre-computed (without having to run the facebook code or a replacement with another dataset) is super useful for my work, especially since I'm not working on developing the latest and greatest retrieval model. Of course, I don't expect HF NLP datasets to be perfectly tailored to my use-case. I know there is overhead to any project, I'm just illustrating a use-case of ELI5 which is not possible with the data provided as-is. If it's for licensing reasons, that is perfectly acceptable a reason, and I appreciate your response." ]
"2020-07-06T19:14:13"
"2020-10-27T15:38:45"
"2020-10-27T15:38:45"
NONE
null
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/345/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/344/comments
https://api.github.com/repos/huggingface/datasets/issues/344/events
https://github.com/huggingface/datasets/pull/344
651,495,246
MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw
344
Search qa
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-06T12:23:16"
"2020-07-16T08:58:16"
"2020-07-16T08:58:16"
CONTRIBUTOR
null
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names: - raw_jeopardy: the raw data - train_test_val: the split version #336
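A hedged usage sketch for the two configurations listed above; the `search_qa` script name is an assumption based on this PR's title.

```python
from nlp import load_dataset

# Raw Jeopardy! data, unsplit.
raw = load_dataset("search_qa", "raw_jeopardy")

# The pre-split train/validation/test version (see #336).
splits = load_dataset("search_qa", "train_test_val")
```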
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/344/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/344", "html_url": "https://github.com/huggingface/datasets/pull/344", "diff_url": "https://github.com/huggingface/datasets/pull/344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/344.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/343/comments
https://api.github.com/repos/huggingface/datasets/issues/343/events
https://github.com/huggingface/datasets/pull/343
651,419,630
MDExOlB1bGxSZXF1ZXN0NDQ0Njc4NDEw
343
Fix nested tensorflow format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-06T10:13:45"
"2020-07-06T13:11:52"
"2020-07-06T13:11:51"
MEMBER
null
In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`. I also added tests for the `set_format` function.
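A short sketch of the behaviour this PR enables, using the `squad` script as the nested-features example mentioned above; treat it as illustrative rather than a test.

```python
from nlp import load_dataset

dset = load_dataset("squad")["validation"]

# Previously this raised on nested features such as `answers`;
# with this fix nested columns are converted via tf.ragged.constant.
dset.set_format("tensorflow")
example = dset[0]
```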
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/343/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/343", "html_url": "https://github.com/huggingface/datasets/pull/343", "diff_url": "https://github.com/huggingface/datasets/pull/343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/343.patch", "merged_at": "2020-07-06T13:11:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/342/comments
https://api.github.com/repos/huggingface/datasets/issues/342/events
https://github.com/huggingface/datasets/issues/342
651,333,194
MDU6SXNzdWU2NTEzMzMxOTQ=
342
Features should be updated when `map()` changes schema
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`dataset.column_names` are being updated but `dataset.features` aren't indeed..." ]
"2020-07-06T08:03:23"
"2020-07-23T10:15:16"
"2020-07-23T10:15:16"
MEMBER
null
`dataset.map()` can change the schema and column names. We should update the features in this case (with whatever can be inferred).
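A short reproduction sketch of the reported behavior, assuming the `nlp` API of that period; the added column is hypothetical.

```python
import nlp

ds = nlp.load_dataset("squad", split="validation")

# map() extends the schema with a new column...
ds2 = ds.map(lambda ex: {"question_len": len(ex["question"])})

print(ds2.column_names)  # includes "question_len"
print(ds2.features)      # issue: features may not reflect the new column
```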
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/342/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/341/comments
https://api.github.com/repos/huggingface/datasets/issues/341/events
https://github.com/huggingface/datasets/pull/341
650,611,969
MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx
341
add fever dataset
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-03T13:53:07"
"2020-07-06T13:03:48"
"2020-07-06T13:03:47"
CONTRIBUTOR
null
This PR adds the FEVER dataset https://fever.ai/ used with the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/341/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/341", "html_url": "https://github.com/huggingface/datasets/pull/341", "diff_url": "https://github.com/huggingface/datasets/pull/341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/341.patch", "merged_at": "2020-07-06T13:03:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/340/comments
https://api.github.com/repos/huggingface/datasets/issues/340/events
https://github.com/huggingface/datasets/pull/340
650,533,920
MDExOlB1bGxSZXF1ZXN0NDQ0MDA2Nzcy
340
Update cfq.py
{ "login": "brainshawn", "id": 4437290, "node_id": "MDQ6VXNlcjQ0MzcyOTA=", "avatar_url": "https://avatars.githubusercontent.com/u/4437290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brainshawn", "html_url": "https://github.com/brainshawn", "followers_url": "https://api.github.com/users/brainshawn/followers", "following_url": "https://api.github.com/users/brainshawn/following{/other_user}", "gists_url": "https://api.github.com/users/brainshawn/gists{/gist_id}", "starred_url": "https://api.github.com/users/brainshawn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainshawn/subscriptions", "organizations_url": "https://api.github.com/users/brainshawn/orgs", "repos_url": "https://api.github.com/users/brainshawn/repos", "events_url": "https://api.github.com/users/brainshawn/events{/privacy}", "received_events_url": "https://api.github.com/users/brainshawn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-03T11:23:19"
"2020-07-03T12:33:50"
"2020-07-03T12:33:50"
CONTRIBUTOR
null
Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/340/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/340", "html_url": "https://github.com/huggingface/datasets/pull/340", "diff_url": "https://github.com/huggingface/datasets/pull/340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/340.patch", "merged_at": "2020-07-03T12:33:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/339/comments
https://api.github.com/repos/huggingface/datasets/issues/339/events
https://github.com/huggingface/datasets/pull/339
650,156,468
MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw
339
Add dataset.export() to TFRecords
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-02T19:26:27"
"2020-07-22T09:16:12"
"2020-07-22T09:16:12"
CONTRIBUTOR
null
Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting. - Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193. - Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know. - There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know. Also, I noticed that ```python dataset = dataset.select(indices) dataset.set_format("tensorflow") # dataset._format_type is "tensorflow" ``` gives a different output than ```python dataset.set_format("tensorflow") dataset = dataset.select(indices) # dataset._format_type is None ``` The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
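Since the simplified `export()` writes a single file, manual sharding can be combined with `Dataset.shard()` as suggested above; this is a sketch and the exact `export()` signature is assumed, not taken from the merged code.

```python
import nlp

ds = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds.set_format("tensorflow", columns=["text"])  # export() introspects dtypes from the format

num_shards = 8
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index)
    # Assumed call: one TFRecord file per shard.
    shard.export(f"/my/tfrecords/myrecord_{index}.tfrecord")
```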
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/339/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/339/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/339", "html_url": "https://github.com/huggingface/datasets/pull/339", "diff_url": "https://github.com/huggingface/datasets/pull/339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/339.patch", "merged_at": "2020-07-22T09:16:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/338/comments
https://api.github.com/repos/huggingface/datasets/issues/338/events
https://github.com/huggingface/datasets/pull/338
650,057,253
MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx
338
Run `make style`
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-02T16:19:47"
"2020-07-02T18:03:10"
"2020-07-02T18:03:10"
CONTRIBUTOR
null
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/338/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/338", "html_url": "https://github.com/huggingface/datasets/pull/338", "diff_url": "https://github.com/huggingface/datasets/pull/338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/338.patch", "merged_at": "2020-07-02T18:03:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/337/comments
https://api.github.com/repos/huggingface/datasets/issues/337/events
https://github.com/huggingface/datasets/issues/337
650,035,887
MDU6SXNzdWU2NTAwMzU4ODc=
337
[Feature request] Export Arrow dataset to TFRecords
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-02T15:47:12"
"2020-07-22T09:16:12"
"2020-07-22T09:16:12"
CONTRIBUTOR
null
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train") ds = ds.map(lambda ex: tokenizer(ex)) ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"]) # then add this method ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord") ``` which would create files like so: ```bash /my/tfrecords/myrecord_1.tfrecord /my/tfrecords/myrecord_2.tfrecord ... ``` I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/337/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/337/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/336/comments
https://api.github.com/repos/huggingface/datasets/issues/336/events
https://github.com/huggingface/datasets/issues/336
649,914,203
MDU6SXNzdWU2NDk5MTQyMDM=
336
[Dataset requests] New datasets for Open Question Answering
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[ { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false } ]
null
[]
"2020-07-02T13:03:03"
"2020-07-16T09:04:22"
"2020-07-16T09:04:22"
MEMBER
null
We are still missing a few datasets for Open-Question Answering, which is currently a field in strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (Nguyen et al. 2016) [done] - SearchQA (Dunn et al. 2017) [done] - FEVER (Thorne et al. 2018) [done] All these datasets are cited in http://arxiv.org/abs/2005.11401
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/336/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/336/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/335/comments
https://api.github.com/repos/huggingface/datasets/issues/335/events
https://github.com/huggingface/datasets/pull/335
649,765,179
MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1
335
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
{ "login": "PetrosStav", "id": 15162021, "node_id": "MDQ6VXNlcjE1MTYyMDIx", "avatar_url": "https://avatars.githubusercontent.com/u/15162021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PetrosStav", "html_url": "https://github.com/PetrosStav", "followers_url": "https://api.github.com/users/PetrosStav/followers", "following_url": "https://api.github.com/users/PetrosStav/following{/other_user}", "gists_url": "https://api.github.com/users/PetrosStav/gists{/gist_id}", "starred_url": "https://api.github.com/users/PetrosStav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PetrosStav/subscriptions", "organizations_url": "https://api.github.com/users/PetrosStav/orgs", "repos_url": "https://api.github.com/users/PetrosStav/repos", "events_url": "https://api.github.com/users/PetrosStav/events{/privacy}", "received_events_url": "https://api.github.com/users/PetrosStav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-02T09:03:41"
"2020-07-15T08:02:07"
"2020-07-15T08:02:07"
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/335/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/335", "html_url": "https://github.com/huggingface/datasets/pull/335", "diff_url": "https://github.com/huggingface/datasets/pull/335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/335.patch", "merged_at": "2020-07-15T08:02:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/334/comments
https://api.github.com/repos/huggingface/datasets/issues/334/events
https://github.com/huggingface/datasets/pull/334
649,661,791
MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0
334
Add dataset.shard() method
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-02T06:05:19"
"2020-07-06T12:35:36"
"2020-07-06T12:35:36"
CONTRIBUTOR
null
Fixes https://github.com/huggingface/nlp/issues/312
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/334/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/334", "html_url": "https://github.com/huggingface/datasets/pull/334", "diff_url": "https://github.com/huggingface/datasets/pull/334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/334.patch", "merged_at": "2020-07-06T12:35:36" }
true
https://api.github.com/repos/huggingface/datasets/issues/333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/333/comments
https://api.github.com/repos/huggingface/datasets/issues/333/events
https://github.com/huggingface/datasets/pull/333
649,236,516
MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0
333
fix variable name typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-01T19:13:50"
"2020-07-24T15:43:31"
"2020-07-24T08:32:16"
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/333/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/333", "html_url": "https://github.com/huggingface/datasets/pull/333", "diff_url": "https://github.com/huggingface/datasets/pull/333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/333.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/332/comments
https://api.github.com/repos/huggingface/datasets/issues/332/events
https://github.com/huggingface/datasets/pull/332
649,140,135
MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz
332
Add wiki_dpr
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-07-01T17:12:00"
"2020-07-06T12:21:17"
"2020-07-06T12:21:16"
MEMBER
null
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Notes on the implementation: - There are two configs: with and without the embeddings (73GB vs 14GB) - I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing) - I added the case for lists of urls as input to the download_manager
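A hedged sketch of the feature layout described above, using a non-fixed-size float sequence for the 768-dim embeddings; the exact field names are assumptions.

```python
import nlp

# Variable-length sequence of floats for the embeddings, as described above,
# rather than a fixed-size array (which caused Arrow read issues at the time).
features = nlp.Features({
    "id": nlp.Value("string"),
    "text": nlp.Value("string"),
    "title": nlp.Value("string"),
    "embeddings": nlp.Sequence(nlp.Value("float32")),
})
```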
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/332/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/332", "html_url": "https://github.com/huggingface/datasets/pull/332", "diff_url": "https://github.com/huggingface/datasets/pull/332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/332.patch", "merged_at": "2020-07-06T12:21:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/331/comments
https://api.github.com/repos/huggingface/datasets/issues/331/events
https://github.com/huggingface/datasets/issues/331
648,533,199
MDU6SXNzdWU2NDg1MzMxOTk=
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```", "here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```", "> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> 
INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': 
SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k/8.90k [00:18<00:00, 486B/s]\r\n\r\nDownloading: 100%\r\n9.37k/9.37k [00:00<00:00, 234kB/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nDownloading:\r\n159M/? [00:09<00:00, 16.7MB/s]\r\n\r\nDownloading:\r\n376M/? [00:06<00:00, 62.6MB/s]\r\n\r\nDownloading:\r\n2.11M/? [00:06<00:00, 333kB/s]\r\n\r\nDownloading:\r\n46.4M/? [00:02<00:00, 18.4MB/s]\r\n\r\nDownloading:\r\n2.43M/? [00:00<00:00, 2.62MB/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0. Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```", "In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)", "Yes thanks for the support! I cleared out my cache folder and everything works fine now" ]
"2020-06-30T22:21:33"
"2020-07-09T13:03:40"
"2020-07-09T13:03:40"
CONTRIBUTOR
null
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset builder_instance.download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/331/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/330/comments
https://api.github.com/repos/huggingface/datasets/issues/330/events
https://github.com/huggingface/datasets/pull/330
648,525,720
MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw
330
Doc red
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-30T22:05:31"
"2020-07-06T12:10:39"
"2020-07-05T12:27:29"
CONTRIBUTOR
null
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this. - As well as the relation id, the full relation name is mapped from `rel_info.json` - I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable. - Used the fix from #319 to allow nested sequences of dicts.
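A possible way to load the two training variants mentioned above; the dataset identifier `docred` is assumed and the calls are a sketch, not verified against the merged script.

```python
import nlp

# The two training sets are exposed as separate named splits rather than nlp.Split.TRAIN.
train_annotated = nlp.load_dataset("docred", split="train_annotated")
train_distant = nlp.load_dataset("docred", split="train_distant")

print(train_annotated.column_names)
```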
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/330/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/330", "html_url": "https://github.com/huggingface/datasets/pull/330", "diff_url": "https://github.com/huggingface/datasets/pull/330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/330.patch", "merged_at": "2020-07-05T12:27:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/329/comments
https://api.github.com/repos/huggingface/datasets/issues/329/events
https://github.com/huggingface/datasets/issues/329
648,446,979
MDU6SXNzdWU2NDg0NDY5Nzk=
329
[Bug] FileLock dependency incompatible with filesystem
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, can you give details on your environment/os/packages versions/etc?", "Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. But Python is able to create new files and write to them outside of the FileLock package.\r\n\r\nWhen I attempt to use FileLock within a Docker container by writing to `/root/.cache/hello.txt`, it succeeds. So there's some permissions issue. But it's not a Docker configuration issue; I've replicated it without Docker.\r\n```bash\r\necho \"hello world\" >> hello.txt\r\nls -l\r\n\r\n-rw-rw-r-- 1 ubuntu ubuntu 10 Jun 30 19:52 hello.txt\r\n```", "Looks like the `flock` syscall does not work on Lustre filesystems by default: https://github.com/benediktschmitt/py-filelock/issues/67.\r\n\r\nI added the `-o flock` option when mounting the filesystem, as [described here](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step2.html), which fixed the issue.", "Awesome, thanks a lot for sharing your fix!", "I'm wondering if this can be revisited. In some managed environments the same person using HF cannot change the file-system mount flags, (and the organization may be unwilling to change these flags due to other concerns) but can ensure that there won't be concurrent writes, for example because HF is offline and the models/datasets were downloaded earlier. \r\n\r\nThe real fix would be to FileLock itself, which does not seem very active and seems to not deal with failed system flock calls , which would be one way to fix this, as they mention in the issue below also raised by @jarednielsen \r\n\r\nhttps://github.com/tox-dev/py-filelock/issues/67", "> I'm wondering if this can be revisited. In some managed environments the same person using HF cannot change the file-system mount flags, (and the organization may be unwilling to change these flags due to other concerns) but can ensure that there won't be concurrent writes, for example because HF is offline and the models/datasets were downloaded earlier.\r\n\r\nI am one of those users. Is there a work around for this?\r\n", "The machines I use have a shared FS which has the filelock problem as well as a local one that does not. Using some env vars (HF_HOME, which controls both models and datasets, and HF_DATASETS_OFFLINE) for both transformers and datasets library one can influence where these downloads happen, and whether the locks get taken. I think some of the relevant documentation is here https://huggingface.co/docs/transformers/installation#cache-setup. I do end up using different settings when I download the models and when I use them, and have to rsync the models to the local file system using a separate script. ", "Thanks @orm011 . These filesystems are such a pain. I'll dig around, looks like setting `cache_dir` to a non-lustre filesystem works for `transformers` but not `datasets`.", "Note I `export HF_HOME=` in the shell prior to running python (I do not use the `cache_dir` argument, I think I ran into similar issues with it, nor `HF_DATASETS_CACHE` , though maybe that works, or maybe you can set it in python prior to importing the library ), and I change no other variables. 
Then `datasets.load_dataset()` works without any additional flags, and they go into `HF_HOME/datasets/` and the models go into `HF_HOME/transformers/` (and the lock files are all there as well). " ]
"2020-06-30T19:45:31"
"2022-09-08T20:58:37"
"2020-06-30T21:33:06"
CONTRIBUTOR
null
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")`. But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like this: ```bash /fsx ----downloads ----94be...73.lock ----wikitext ----wikitext-2-raw ----wikitext-2-raw-1.0.0.incomplete ``` It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency: ```python open("/fsx/hello.txt", "w").write("hello") # succeeds from filelock import FileLock with FileLock("/fsx/hello.lock"): open("/fsx/hello.txt", "w").write("hello") # hangs indefinitely ``` Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
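A small diagnostic sketch related to the fix discussed in the comments above (the `flock` syscall is not enabled by default on Lustre mounts); this helper is an illustration, not part of `filelock` or `nlp`.

```python
import fcntl
import os

def supports_flock(directory):
    # Try to take a non-blocking exclusive flock on a scratch file in `directory`.
    # On a Lustre mount without the "-o flock" option this typically raises OSError.
    path = os.path.join(directory, ".flock_test")
    with open(path, "w") as f:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            fcntl.flock(f, fcntl.LOCK_UN)
            return True
        except OSError:
            return False

print(supports_flock("/fsx"))
```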
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/329/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/328/comments
https://api.github.com/repos/huggingface/datasets/issues/328/events
https://github.com/huggingface/datasets/issues/328
648,326,841
MDU6SXNzdWU2NDgzMjY4NDE=
328
Fork dataset
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for example). Custom dataset scripts can be called locally with `nlp.load_dataset(path_to_my_script_directory)`.\r\n\r\nThis should help you get what you call \"Dataset1\".\r\n\r\nThen using some dataset transforms like `.map` for example you can get to \"DatasetNER\" and \"DatasetREL\".\r\n", "Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - \r\n\r\n```\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 60 \r\n 61 def __init__(self, source):\r\n---> 62 self._open(source)\r\n 63 \r\n 64 \r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\nBut I'm going to give the generator_dataset_builder a try.\r\n\r\n1 more quick question -- can .map be used to output different length mappings -- could I skip one, or yield 2, can you map_batch ", "You can use `.map(my_func, batched=True)` and return less examples, or more examples if you want", "Thanks this answers my question. I think the issue I was having using the json loader were due to using gzipped jsonl files.\r\n\r\nThe error I get now is :\r\n\r\n```\r\n\r\nUsing custom data configuration test\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-38-29082a31e5b2> in <module>\r\n 5 print(ner_datafiles)\r\n 6 \r\n----> 7 ds = nlp.load_dataset(\"json\", \"test\", data_files=ner_datafiles[0])\r\n 8 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: int64> nor list<item: int64>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions.\r\n```\r\n\r\nIf I just create a pa- table manually like is done in the jsonloader -- it seems to work fine. Ths JSON I'm trying to load isn't overly complex - 1 integer field, the rest text fields with a nested list of objects with text fields .", "I'll close this -- It's still unclear how to go about troubleshooting the json example as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for a better support for using nlp for making custom data-loaders." ]
"2020-06-30T16:42:53"
"2020-07-06T21:43:59"
"2020-07-06T21:43:59"
NONE
null
We have a multi-task learning model training pipeline that I'm trying to convert to the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training NER and Relations prediction heads. Is there some good way to "fork" a dataset? E.g. 1. text + json -> Dataset1 2. Dataset1 -> DatasetNER 3. Dataset1 -> DatasetREL or 1. text + json -> Dataset1 2. Dataset1 -> DatasetNER 3. Dataset1 + DatasetNER -> DatasetREL
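A sketch of the "fork" pattern asked about above, assuming Dataset1 has already been built (e.g. with the json loader or a custom script); the column names below are placeholders, not a real schema:

```python
import nlp

# Hypothetical Dataset1: raw text plus entity/relation annotations.
dataset1 = nlp.Dataset.from_dict({
    "text": ["Alice met Bob."],
    "entity_types": [["PER", "PER"]],
    "relation_types": [["met"]],
})

# Fork 1: derive what the NER head needs.
dataset_ner = dataset1.map(lambda ex: {"labels": ex["entity_types"]})

# Fork 2: derive what the relation head needs.
dataset_rel = dataset1.map(lambda ex: {"labels": ex["relation_types"]})
```

Each `.map` call produces a new memory-mapped dataset, so both forks can coexist without duplicating the raw text in RAM.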
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/328/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/327/comments
https://api.github.com/repos/huggingface/datasets/issues/327/events
https://github.com/huggingface/datasets/pull/327
648,312,858
MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw
327
set seed for shuffling tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-30T16:21:34"
"2020-07-02T08:34:05"
"2020-07-02T08:34:04"
MEMBER
null
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
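For context, a minimal sketch of a deterministic shuffled split; the `seed` keyword is assumed here and the exact argument used in the test may differ:

```python
import nlp

dataset = nlp.Dataset.from_dict({"idx": list(range(10))})

# A fixed seed makes the shuffled split deterministic, which is what
# the fixed test relies on.
split = dataset.train_test_split(test_size=0.2, shuffle=True, seed=42)
print(split["train"].num_rows, split["test"].num_rows)  # 8 2
```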
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/327/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/327", "html_url": "https://github.com/huggingface/datasets/pull/327", "diff_url": "https://github.com/huggingface/datasets/pull/327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/327.patch", "merged_at": "2020-07-02T08:34:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/326/comments
https://api.github.com/repos/huggingface/datasets/issues/326/events
https://github.com/huggingface/datasets/issues/326
648,126,103
MDU6SXNzdWU2NDgxMjYxMDM=
326
Large dataset in Squad2-format
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to let the users do their training/evaluations with the exact same version of the dataset.\r\nWe allow for each dataset to specify a version (ex: 1.0.0) and increment this number every time there are new samples in the dataset for example. Does it look like a good solution for you ? Or would you rather have one final version with the full dataset ?", "It would also be good if there is any possibility for versioning, I think this way is much better than the dynamic way.\nIf you mean that part to put the tiles into one is the generation it would take up to 15-20 minutes on home computer hardware.\nAre there any compression or optimization algorithms while generating the dataset ?\nOtherwise the hardware limit is around 32 GB ram at the moment.\nIf everything works well we will add some more gigabytes of data in future what would make it pretty memory costly.", "15-20 minutes is fine !\r\nAlso there's no RAM limitations as we save to disk every 1000 elements while generating the dataset by default.\r\nAfter generation, the dataset is ready to use with (again) no RAM limitations as we do memory-mapping.", "Wow, that sounds pretty cool.\nActually I have the problem of running out of memory while tokenization on our local machine.\nThat wouldn't happen again, would it ?", "You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)` that writes the tokenized texts on disk as well. And then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :)", "Does it have an affect to the trainings speed ?", "In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.", "Closing this one. Feel free to re-open if you have other questions" ]
"2020-06-30T12:18:59"
"2020-07-09T09:01:50"
"2020-07-09T09:01:50"
CONTRIBUTOR
null
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677.732 - Answers: 6.742.406 - unanswerable: 377.398 It is already cleaned <pre><code> train_data = [ { 'context': "this is the context", 'qas': [ { 'id': "00002", 'is_impossible': False, 'question': "whats is this", 'answers': [ { 'text': "answer", 'answer_start': 0 } ] }, { 'id': "00003", 'is_impossible': False, 'question': "question2", 'answers': [ { 'text': "answer2", 'answer_start': 1 } ] } ] } ] </code></pre> Because it is growing every day we are thinking about a structure like this: we host a JSON file containing all the download links, and the script loads them dynamically. At the moment it is around ~20GB. Any advice on how to handle this, or a ready-to-use template?
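A rough sketch of what such a dataset script could look like, following the versioning suggestion in the comments above; the class name, URLs, and download layout are placeholders, not a definitive implementation:

```python
import json
import nlp

_TILE_URLS = ["https://example.com/tiles/tile_000.json"]  # placeholder URLs

class LargeQA(nlp.GeneratorBasedBuilder):
    # Bump the version whenever new tiles are added, so users can pin a snapshot.
    VERSION = nlp.Version("1.0.0")

    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features({
                "id": nlp.Value("string"),
                "context": nlp.Value("string"),
                "question": nlp.Value("string"),
                "is_impossible": nlp.Value("bool"),
                "answers": nlp.features.Sequence({
                    "text": nlp.Value("string"),
                    "answer_start": nlp.Value("int32"),
                }),
            }),
        )

    def _split_generators(self, dl_manager):
        paths = dl_manager.download_and_extract(_TILE_URLS)
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"paths": paths})]

    def _generate_examples(self, paths):
        for path in paths:
            with open(path, encoding="utf-8") as f:
                for article in json.load(f):
                    for qa in article["qas"]:
                        yield qa["id"], {
                            "id": qa["id"],
                            "context": article["context"],
                            "question": qa["question"],
                            "is_impossible": qa["is_impossible"],
                            "answers": {
                                "text": [a["text"] for a in qa["answers"]],
                                "answer_start": [a["answer_start"] for a in qa["answers"]],
                            },
                        }
```

Examples are written to disk incrementally during generation, which is why the ~20GB size is not a RAM constraint.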
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/326/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/325/comments
https://api.github.com/repos/huggingface/datasets/issues/325/events
https://github.com/huggingface/datasets/pull/325
647,601,592
MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw
325
Add SQuADShifts dataset
{ "login": "millerjohnp", "id": 8953195, "node_id": "MDQ6VXNlcjg5NTMxOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/millerjohnp", "html_url": "https://github.com/millerjohnp", "followers_url": "https://api.github.com/users/millerjohnp/followers", "following_url": "https://api.github.com/users/millerjohnp/following{/other_user}", "gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}", "starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions", "organizations_url": "https://api.github.com/users/millerjohnp/orgs", "repos_url": "https://api.github.com/users/millerjohnp/repos", "events_url": "https://api.github.com/users/millerjohnp/events{/privacy}", "received_events_url": "https://api.github.com/users/millerjohnp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-29T19:11:16"
"2020-06-30T17:07:31"
"2020-06-30T17:07:31"
CONTRIBUTOR
null
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/325/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/325", "html_url": "https://github.com/huggingface/datasets/pull/325", "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "merged_at": "2020-06-30T17:07:31" }
true
https://api.github.com/repos/huggingface/datasets/issues/324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/324/comments
https://api.github.com/repos/huggingface/datasets/issues/324/events
https://github.com/huggingface/datasets/issues/324
647,525,725
MDU6SXNzdWU2NDc1MjU3MjU=
324
Error when calculating glue score
{ "login": "D-i-l-r-u-k-s-h-i", "id": 47185867, "node_id": "MDQ6VXNlcjQ3MTg1ODY3", "avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i", "html_url": "https://github.com/D-i-l-r-u-k-s-h-i", "followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers", "following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}", "gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}", "starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions", "organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs", "repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos", "events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}", "received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.", "I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. 
Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```", "MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).", "Closing this one. Feel free to re-open if you have other questions :)" ]
"2020-06-29T16:53:48"
"2020-07-09T09:13:34"
"2020-07-09T09:13:34"
NONE
null
I was trying glue score along with other metrics here. But glue gives me this error; ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-b9210a524504> in <module>() ----> 1 glue_score = glue_metric.compute(predictions, references) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs) 191 """ 192 if predictions is not None: --> 193 self.add_batch(predictions=predictions, references=references) 194 self.finalize(timeout=timeout) 195 /usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs) 207 if self.writer is None: 208 self._init_writer() --> 209 self.writer.write_batch(batch) 210 211 def add(self, prediction=None, reference=None, **kwargs): /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 155 if self.pa_writer is None: 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples)) --> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) 158 if writer_batch_size is None: 159 writer_batch_size = self.writer_batch_size /usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() TypeError: an integer is required (got type str) ``` I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/324/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/323/comments
https://api.github.com/repos/huggingface/datasets/issues/323/events
https://github.com/huggingface/datasets/pull/323
647,521,308
MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3
323
Add package path to sys when downloading package as github archive
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-29T16:46:01"
"2020-07-30T14:00:23"
"2020-07-30T14:00:23"
MEMBER
null
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method. This PR fixes https://github.com/huggingface/nlp/issues/305
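The underlying trick is standard Python: once an archive has been downloaded and extracted, its directory has to be on `sys.path` before `importlib` can resolve the package's internal imports. A generic sketch of the idea (paths and module names are placeholders, not the actual code in this PR):

```python
import importlib
import sys

def import_downloaded_package(package_dir: str, module_name: str):
    # Make the extracted archive importable, then load the module so that
    # its own internal imports resolve against the archive directory.
    if package_dir not in sys.path:
        sys.path.insert(0, package_dir)
    return importlib.import_module(module_name)

# e.g. reader = import_downloaded_package("/path/to/coval-master", "coval.conll.reader")
```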
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/323/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/323", "html_url": "https://github.com/huggingface/datasets/pull/323", "diff_url": "https://github.com/huggingface/datasets/pull/323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/323.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/322/comments
https://api.github.com/repos/huggingface/datasets/issues/322/events
https://github.com/huggingface/datasets/pull/322
647,483,850
MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2
322
output nested dict in get_nearest_examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-29T15:47:47"
"2020-07-02T08:33:33"
"2020-07-02T08:33:32"
MEMBER
null
As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example: ```python my_examples = dataset[0:10] print(type(my_examples)) # >>> dict print(my_examples["my_column"][0] # >>> this is the first element of the column 'my_column' ``` Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples: ```python dataset.add_faiss_index(column="embeddings") scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding) print(type(examples)) # >>> dict ``` Previously it was returning a list[dict]. It was the only place that was using this output format. To make it work I had to implement `__getitem__(key)` where `key` is a list. This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries).
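A usage sketch of the new output format; the embedding column, its dimension, and the random query are placeholders:

```python
import numpy as np

# `dataset` is assumed to already be loaded and to have an "embeddings"
# column of float32 vectors (e.g. produced by a sentence encoder).
dataset.add_faiss_index(column="embeddings")

query = np.random.rand(768).astype("float32")  # placeholder query embedding
scores, examples = dataset.get_nearest_examples("embeddings", query, k=10)

# `examples` is now a dict of columns, consistent with dataset[0:10]
print(type(examples))             # dict
print(examples["embeddings"][0])  # embedding of the closest example
```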
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/322/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/322", "html_url": "https://github.com/huggingface/datasets/pull/322", "diff_url": "https://github.com/huggingface/datasets/pull/322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/322.patch", "merged_at": "2020-07-02T08:33:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/321/comments
https://api.github.com/repos/huggingface/datasets/issues/321/events
https://github.com/huggingface/datasets/issues/321
647,271,526
MDU6SXNzdWU2NDcyNzE1MjY=
321
ERROR:root:mwparserfromhell
{ "login": "Shiro-LK", "id": 26505641, "node_id": "MDQ6VXNlcjI2NTA1NjQx", "avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shiro-LK", "html_url": "https://github.com/Shiro-LK", "followers_url": "https://api.github.com/users/Shiro-LK/followers", "following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}", "gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions", "organizations_url": "https://api.github.com/users/Shiro-LK/orgs", "repos_url": "https://api.github.com/users/Shiro-LK/repos", "events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}", "received_events_url": "https://api.github.com/users/Shiro-LK/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashes.\r\n\r\nIt will help us know if we have to fix it on our side or if it is a `mwparserfromhell` issue.", "Hi, \r\n\r\nThank you for you answer.\r\nI have try to print the bad section using `try` and `except`, but it is a bit weird as the error seems to appear 3 times for instance, but the two first error does not print anything (as if the function did not go in the `except` part).\r\nFor the third one, I got that (I haven't display the entire text) :\r\n\r\n> error : ==== Parque nacional Cajas ====\r\n> {{AP|Parque nacional Cajas}}\r\n> [[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\n> El parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n> [[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\n> leturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n> Para acceder al parque desde la costa, la vía Molleturo-Cuenca es también la mejor opción.\r\n\r\nHow can I display the link instead of the text ? I suppose it will help you more ", "The error appears several times as Apache Beam retries to process examples up to 4 times irc.\r\n\r\nI just tried to run this text into `mwparserfromhell` but it worked without the issue.\r\n\r\nI used this code (from the `wikipedia.py` script):\r\n```python\r\nimport mwparserfromhell as parser\r\nimport re\r\nimport six\r\n\r\nraw_content = r\"\"\"==== Parque nacional Cajas ====\r\n{{AP|Parque nacional Cajas}}\r\n[[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\nEl parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n[[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\nleturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. 
Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n\"\"\"\r\n\r\nwikicode = parser.parse(raw_content)\r\n\r\n# Filters for references, tables, and file/image links.\r\nre_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\r\n\r\ndef rm_wikilink(obj):\r\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\r\n\r\ndef rm_tag(obj):\r\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\r\n\r\ndef rm_template(obj):\r\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\r\n\r\ndef try_remove_obj(obj, section):\r\n try:\r\n section.remove(obj)\r\n except ValueError:\r\n # For unknown reasons, objects are sometimes not found.\r\n pass\r\n\r\nsection_text = []\r\nfor section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\r\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\r\n try_remove_obj(obj, section)\r\n\r\n section_text.append(section.strip_code().strip())\r\n```", "Not sure why we're having this issue. Maybe could you get also the file that's causing that ?", "thanks for your answer.\r\nHow can I know which file is causing the issue ? \r\nI am trying to load the spanish wikipedia data. ", "Because of the way Apache Beam works we indeed don't have access to the file name at this point in the code.\r\nWe'll have to use some tricks I think :p \r\n\r\nYou can append `filepath` to `title` in `wikipedia.py:L512` for example. [[EDIT: it's L494 my bad]]\r\nThen just do `try:...except:` on the call of `_parse_and_clean_wikicode` L500 I guess.\r\n\r\nThanks for diving into this ! I tried it myself but I run out of memory on my laptop\r\nAs soon as we have the name of the file it should be easier to find what's wrong.", "Thanks for your help.\r\n\r\nI tried to print the \"title\" of the document inside the` except (mwparserfromhell.parser.ParserError) as e`,the title displayed was : \"Campeonato Mundial de futsal de la AMF 2015\". (Wikipedia ES) Is it what you were looking for ?", "Thanks a lot @Shiro-LK !\r\n\r\nI was able to reproduce the issue. 
It comes from [this table on wikipedia](https://es.wikipedia.org/wiki/Campeonato_Mundial_de_futsal_de_la_AMF_2015#Clasificados) that can't be parsed.\r\n\r\nThe file in which the problem occurs comes from the wikipedia dumps, and it can be downloaded [here](https://dumps.wikimedia.org/eswiki/20200501/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2)\r\n\r\nParsing the file this way raises the parsing issue:\r\n\r\n```python\r\nimport mwparserfromhell as parser\r\nfrom tqdm.auto import tqdm\r\nimport bz2\r\nimport six\r\nimport logging\r\nimport codecs\r\nimport xml.etree.cElementTree as etree\r\n\r\nfilepath = \"path/to/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2\"\r\n\r\ndef _extract_content(filepath):\r\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, \"rb\") as f:\r\n f = bz2.BZ2File(filename=f)\r\n if six.PY3:\r\n # Workaround due to:\r\n # https://github.com/tensorflow/tensorflow/issues/33563\r\n utf_f = codecs.getreader(\"utf-8\")(f)\r\n else:\r\n utf_f = f\r\n # To clear root, to free-up more memory than just `elem.clear()`.\r\n context = etree.iterparse(utf_f, events=(\"end\",))\r\n context = iter(context)\r\n unused_event, root = next(context)\r\n for unused_event, elem in tqdm(context, total=949087):\r\n if not elem.tag.endswith(\"page\"):\r\n continue\r\n namespace = elem.tag[:-4]\r\n title = elem.find(\"./{0}title\".format(namespace)).text\r\n ns = elem.find(\"./{0}ns\".format(namespace)).text\r\n id_ = elem.find(\"./{0}id\".format(namespace)).text\r\n # Filter pages that are not in the \"main\" namespace.\r\n if ns != \"0\":\r\n root.clear()\r\n continue\r\n raw_content = elem.find(\"./{0}revision/{0}text\".format(namespace)).text\r\n root.clear()\r\n\r\n if \"Campeonato Mundial de futsal de la AMF 2015\" in title:\r\n yield (id_, title, raw_content)\r\n\r\nfor id_, title, raw_content in _extract_content(filepath):\r\n wikicode = parser.parse(raw_content)\r\n```\r\n\r\nThe copied the raw content that can't be parsed [here](https://pastebin.com/raw/ZbmevLyH).\r\n\r\nThe minimal code to reproduce is:\r\n```python\r\nimport mwparserfromhell as parser\r\nimport requests\r\n\r\nraw_content = requests.get(\"https://pastebin.com/raw/ZbmevLyH\").content.decode(\"utf-8\")\r\nwikicode = parser.parse(raw_content)\r\n\r\n```\r\n\r\nI will create an issue on mwparserfromhell's repo to see if we can fix that\r\n", "This going to be fixed in the next `mwparserfromhell` release :)", "Fixed in `mwparserfromhell` version 0.6." ]
"2020-06-29T11:10:43"
"2022-02-14T15:21:46"
"2022-02-14T15:21:46"
NONE
null
Hi, I am trying to download some wikipedia data but I got this error for Spanish "es" (other languages may have the same error, but I haven't tried all of them). `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.` The code I used was: `dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/321/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/320/comments
https://api.github.com/repos/huggingface/datasets/issues/320/events
https://github.com/huggingface/datasets/issues/320
647,188,167
MDU6SXNzdWU2NDcxODgxNjc=
320
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "I wonder if this means downloading failed? That corpus has a really slow server.", "This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `." ]
"2020-06-29T07:36:35"
"2020-06-29T14:44:42"
"2020-06-29T14:44:42"
CONTRIBUTOR
null
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 172, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 132, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) ``` @srush @lhoestq
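Until the recorded split sizes are fixed, one possible local workaround is to skip the verification step when loading; this uses the `ignore_verifications` argument visible in the `load_dataset` signature and bypasses a safety check, so treat it as a stopgap:

```python
import nlp

# Skips the split-size/checksum verification that raises
# NonMatchingSplitsSizesError; the data itself still loads.
dataset = nlp.load_dataset("blog_authorship_corpus", ignore_verifications=True)
```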
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/320/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/319/comments
https://api.github.com/repos/huggingface/datasets/issues/319/events
https://github.com/huggingface/datasets/issues/319
646,792,487
MDU6SXNzdWU2NDY3OTI0ODc=
319
Nested sequences with dicts
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```" ]
"2020-06-27T23:45:17"
"2020-07-03T10:22:00"
"2020-07-03T10:22:00"
CONTRIBUTOR
null
Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. The original data is in this format: ```python { 'title': "Title of wiki page", 'vertexSet': [ [ { 'name': "mention_name", 'sent_id': "mention in which sentence", 'pos': ["postion of mention in a sentence"], 'type': "NER_type"}, {another mention} ], [another entity] ] ... } ``` So to represent this I've attempted to write: ``` ... features=nlp.Features({ "title": nlp.Value("string"), "vertexSet": nlp.features.Sequence(nlp.features.Sequence({ "name": nlp.Value("string"), "sent_id": nlp.Value("int32"), "pos": nlp.features.Sequence(nlp.Value("int32")), "type": nlp.Value("string"), })), ... }), ... ``` This is giving me the error: ``` pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict. If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/319/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/318/comments
https://api.github.com/repos/huggingface/datasets/issues/318/events
https://github.com/huggingface/datasets/pull/318
646,682,840
MDExOlB1bGxSZXF1ZXN0NDQwOTExOTYy
318
Multitask
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-27T13:27:29"
"2022-07-06T15:19:57"
"2022-07-06T15:19:57"
CONTRIBUTOR
null
Following our discussion in #217, I've implemented a first working version of `MultiDataset`. There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage. I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment. This will need some tests which I haven't written yet. There's definitely room for improvements but I think the general approach is sound.
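A hypothetical usage sketch based on the description above; the exact signature lives in this PR and its notebook, so the import path and argument style here are assumptions:

```python
import nlp
# MultiDataset / build_multitask come from this PR, not from a released
# version of the library; the import below is an assumption.
from nlp import build_multitask

squad = nlp.load_dataset("squad", split="train")
imdb = nlp.load_dataset("imdb", split="train")

# Combine individual datasets (or dicts of splits) into one multitask dataset.
multitask = build_multitask(squad, imdb)
print(multitask.num_rows, multitask.column_names)
```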
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/318/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/318/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/318", "html_url": "https://github.com/huggingface/datasets/pull/318", "diff_url": "https://github.com/huggingface/datasets/pull/318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/318.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/317/comments
https://api.github.com/repos/huggingface/datasets/issues/317/events
https://github.com/huggingface/datasets/issues/317
646,555,384
MDU6SXNzdWU2NDY1NTUzODQ=
317
Adding a dataset with multiple subtasks
{ "login": "erickrf", "id": 294483, "node_id": "MDQ6VXNlcjI5NDQ4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erickrf", "html_url": "https://github.com/erickrf", "followers_url": "https://api.github.com/users/erickrf/followers", "following_url": "https://api.github.com/users/erickrf/following{/other_user}", "gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}", "starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erickrf/subscriptions", "organizations_url": "https://api.github.com/users/erickrf/orgs", "repos_url": "https://api.github.com/users/erickrf/repos", "events_url": "https://api.github.com/users/erickrf/events{/privacy}", "received_events_url": "https://api.github.com/users/erickrf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit different from your case though because each configuration is a dataset by itself (sst2, mnli).\r\nAnother example is `wikipedia` that has one configuration per language." ]
"2020-06-26T23:14:19"
"2020-10-27T15:36:52"
"2020-10-27T15:36:52"
NONE
null
I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has different language pairs, and some of the data is reused across subtasks. For example, in [QE 2019,](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE. I suppose these datasets could have both their word and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether? I read the discussion on #217 but the case of QE seems a lot simpler.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/317/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/316/comments
https://api.github.com/repos/huggingface/datasets/issues/316/events
https://github.com/huggingface/datasets/pull/316
646,366,450
MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5
316
add AG News dataset
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-26T16:11:58"
"2020-06-30T09:58:08"
"2020-06-30T08:31:55"
CONTRIBUTOR
null
adds support for the AG-News topic classification dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/316/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/316", "html_url": "https://github.com/huggingface/datasets/pull/316", "diff_url": "https://github.com/huggingface/datasets/pull/316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/316.patch", "merged_at": "2020-06-30T08:31:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/314/comments
https://api.github.com/repos/huggingface/datasets/issues/314/events
https://github.com/huggingface/datasets/pull/314
645,461,174
MDExOlB1bGxSZXF1ZXN0NDM5OTM4MTMw
314
Fixed a singular, very minor spelling error
{ "login": "SchizoidBat", "id": 40696362, "node_id": "MDQ6VXNlcjQwNjk2MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/40696362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SchizoidBat", "html_url": "https://github.com/SchizoidBat", "followers_url": "https://api.github.com/users/SchizoidBat/followers", "following_url": "https://api.github.com/users/SchizoidBat/following{/other_user}", "gists_url": "https://api.github.com/users/SchizoidBat/gists{/gist_id}", "starred_url": "https://api.github.com/users/SchizoidBat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SchizoidBat/subscriptions", "organizations_url": "https://api.github.com/users/SchizoidBat/orgs", "repos_url": "https://api.github.com/users/SchizoidBat/repos", "events_url": "https://api.github.com/users/SchizoidBat/events{/privacy}", "received_events_url": "https://api.github.com/users/SchizoidBat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-25T10:45:59"
"2020-06-26T08:46:41"
"2020-06-25T12:43:59"
CONTRIBUTOR
null
An instance of "independantly" was changed to "independently". That's all.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/314/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/314", "html_url": "https://github.com/huggingface/datasets/pull/314", "diff_url": "https://github.com/huggingface/datasets/pull/314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/314.patch", "merged_at": "2020-06-25T12:43:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/313/comments
https://api.github.com/repos/huggingface/datasets/issues/313/events
https://github.com/huggingface/datasets/pull/313
645,390,088
MDExOlB1bGxSZXF1ZXN0NDM5ODc4MDg5
313
Add MWSC
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
"2020-06-25T09:22:02"
"2020-06-30T08:28:11"
"2020-06-30T08:28:11"
CONTRIBUTOR
null
Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset, which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose. Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877). There are a few (possibly overly opinionated) design choices I made: - I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855) - I split out each example into the 2 alternatives. Originally the data uses the format: ``` The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. Who [feared/advocated] violence? councilmen/demonstrators ``` I split into the 2 variants: ``` The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence? councilmen/demonstrators The city councilmen refused the demonstrators a permit because they advocated violence. Who advocated violence? councilmen/demonstrators ``` I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes them](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850)). You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way? - I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence? -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options":["councilmen","demonstrators"]` This should be an easy thing to change using `map` if needed by a specific application. Dataset is working as-is, but if anyone has any thoughts/preferences on the design decisions here I'm definitely open to different choices.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/313/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/313/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/313", "html_url": "https://github.com/huggingface/datasets/pull/313", "diff_url": "https://github.com/huggingface/datasets/pull/313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/313.patch", "merged_at": "2020-06-30T08:28:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/312/comments
https://api.github.com/repos/huggingface/datasets/issues/312/events
https://github.com/huggingface/datasets/issues/312
645,025,561
MDU6SXNzdWU2NDUwMjU1NjE=
312
[Feature request] Add `shard()` method to dataset
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?", "Thanks for the pointer to those functions! It's still a little more verbose since you have to manually calculate which ids each rank would keep, but definitely works.\r\n\r\nMy use case is multi-node, multi-GPU training and avoiding global batches of duplicate elements. I'm using horovod. You can shuffle indices, or set random seeds, but explicitly sharding the dataset up front is the safest and clearest way I've found to do so." ]
"2020-06-24T22:48:33"
"2020-07-06T12:35:36"
"2020-07-06T12:35:36"
CONTRIBUTOR
null
Currently, to shard a dataset into 10 pieces on different ranks, you can run ```python rank = 3 # for example size = 10 dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]") ``` However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this? ```python rank = 3 size = 64 dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size) ``` TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/312/timeline
null
completed
null
null
false
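As a side note to #312 above, here is a minimal sketch of the `select()`-based workaround mentioned in its discussion; the dataset name and the rank/size values are only illustrative, and the snippet assumes the `nlp` library as used throughout these issues.

```python
import nlp

rank = 3   # index of this worker (illustrative)
size = 64  # total number of workers (illustrative)

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Keep every `size`-th example starting at `rank`, emulating a shard() call.
shard = dataset.select(list(range(rank, len(dataset), size)))
```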
https://api.github.com/repos/huggingface/datasets/issues/311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/311/comments
https://api.github.com/repos/huggingface/datasets/issues/311/events
https://github.com/huggingface/datasets/pull/311
645,013,131
MDExOlB1bGxSZXF1ZXN0NDM5NTQ3OTg0
311
Add qa_zre
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-24T22:17:22"
"2020-06-29T16:37:38"
"2020-06-29T16:37:38"
CONTRIBUTOR
null
Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/). A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/311/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/311", "html_url": "https://github.com/huggingface/datasets/pull/311", "diff_url": "https://github.com/huggingface/datasets/pull/311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/311.patch", "merged_at": "2020-06-29T16:37:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/310/comments
https://api.github.com/repos/huggingface/datasets/issues/310/events
https://github.com/huggingface/datasets/pull/310
644,806,720
MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5
310
add wikisql
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-24T18:00:35"
"2020-06-25T12:32:25"
"2020-06-25T12:32:25"
CONTRIBUTOR
null
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. Would be nice to add the logical_form metrics too at some point.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/310/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/310", "html_url": "https://github.com/huggingface/datasets/pull/310", "diff_url": "https://github.com/huggingface/datasets/pull/310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/310.patch", "merged_at": "2020-06-25T12:32:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/309/comments
https://api.github.com/repos/huggingface/datasets/issues/309/events
https://github.com/huggingface/datasets/pull/309
644,783,822
MDExOlB1bGxSZXF1ZXN0NDM5MzQ1NzYz
309
Add narrative qa
{ "login": "Varal7", "id": 8019486, "node_id": "MDQ6VXNlcjgwMTk0ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8019486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Varal7", "html_url": "https://github.com/Varal7", "followers_url": "https://api.github.com/users/Varal7/followers", "following_url": "https://api.github.com/users/Varal7/following{/other_user}", "gists_url": "https://api.github.com/users/Varal7/gists{/gist_id}", "starred_url": "https://api.github.com/users/Varal7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Varal7/subscriptions", "organizations_url": "https://api.github.com/users/Varal7/orgs", "repos_url": "https://api.github.com/users/Varal7/repos", "events_url": "https://api.github.com/users/Varal7/events{/privacy}", "received_events_url": "https://api.github.com/users/Varal7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-24T17:26:18"
"2020-09-03T09:02:10"
"2020-09-03T09:02:09"
NONE
null
Test cases for dummy data don't pass. Only contains data for summaries (not the whole story).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/309/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/309", "html_url": "https://github.com/huggingface/datasets/pull/309", "diff_url": "https://github.com/huggingface/datasets/pull/309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/309.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/308/comments
https://api.github.com/repos/huggingface/datasets/issues/308/events
https://github.com/huggingface/datasets/pull/308
644,195,251
MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy
308
Specify utf-8 encoding for MRPC files
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T22:44:36"
"2020-06-25T12:52:21"
"2020-06-25T12:16:10"
CONTRIBUTOR
null
Fixes #307, again probably a Windows-related issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/308/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/308", "html_url": "https://github.com/huggingface/datasets/pull/308", "diff_url": "https://github.com/huggingface/datasets/pull/308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/308.patch", "merged_at": "2020-06-25T12:16:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/307/comments
https://api.github.com/repos/huggingface/datasets/issues/307/events
https://github.com/huggingface/datasets/issues/307
644,187,262
MDU6SXNzdWU2NDQxODcyNjI=
307
Specify encoding for MRPC
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T22:24:49"
"2020-06-25T12:16:09"
"2020-06-25T12:16:09"
CONTRIBUTOR
null
Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset: ```python dataset = nlp.load_dataset('glue', 'mrpc') ``` ```python Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0... --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname) 369 try: --> 370 yield tmp_dir 371 if os.path.isdir(dirname): ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications --> 431 self._download_and_prepare( 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator) 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files) 514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split) --> 515 for example in examples: 516 yield example["idx"], example ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split) 576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE) --> 577 for n, row in enumerate(reader): 578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined> ``` The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. I am going to propose a new PR :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/307/timeline
null
completed
null
null
false
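A minimal sketch of the fix described in #307/#308 above: pass an explicit encoding when opening the MRPC TSV file before handing it to `csv.DictReader`, so the Windows default codepage (cp1252) is never used. The file path here is a placeholder.

```python
import csv

# Opening with an explicit utf-8 encoding avoids the cp1252 default on Windows.
with open("msr_paraphrase_train.txt", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        # Process each row as the glue.py script does (placeholder).
        pass
```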
https://api.github.com/repos/huggingface/datasets/issues/306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/306/comments
https://api.github.com/repos/huggingface/datasets/issues/306/events
https://github.com/huggingface/datasets/pull/306
644,176,078
MDExOlB1bGxSZXF1ZXN0NDM4ODQ2MTI3
306
add pg19 dataset
{ "login": "lucidrains", "id": 108653, "node_id": "MDQ6VXNlcjEwODY1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucidrains", "html_url": "https://github.com/lucidrains", "followers_url": "https://api.github.com/users/lucidrains/followers", "following_url": "https://api.github.com/users/lucidrains/following{/other_user}", "gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions", "organizations_url": "https://api.github.com/users/lucidrains/orgs", "repos_url": "https://api.github.com/users/lucidrains/repos", "events_url": "https://api.github.com/users/lucidrains/events{/privacy}", "received_events_url": "https://api.github.com/users/lucidrains/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T22:03:52"
"2020-07-06T07:55:59"
"2020-07-06T07:55:59"
CONTRIBUTOR
null
https://github.com/huggingface/nlp/issues/274 Add a functioning PG19 dataset with dummy data. `cos_e.py` was just auto-linted by `make style`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/306/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/306", "html_url": "https://github.com/huggingface/datasets/pull/306", "diff_url": "https://github.com/huggingface/datasets/pull/306.diff", "patch_url": "https://github.com/huggingface/datasets/pull/306.patch", "merged_at": "2020-07-06T07:55:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/305/comments
https://api.github.com/repos/huggingface/datasets/issues/305/events
https://github.com/huggingface/datasets/issues/305
644,148,149
MDU6SXNzdWU2NDQxNDgxNDk=
305
Importing downloaded package repository fails
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[]
"2020-06-23T21:09:05"
"2020-07-30T16:44:23"
"2020-07-30T16:44:23"
MEMBER
null
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh). Currently however, the code seems to have trouble with imports within the package. For example: ``` import nlp coval = nlp.load_metric('coval') ``` yields: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module> from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module> from conll import mention ModuleNotFoundError: No module named 'conll' ``` Not sure what the fix would be there.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/305/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/304/comments
https://api.github.com/repos/huggingface/datasets/issues/304/events
https://github.com/huggingface/datasets/issues/304
644,091,970
MDU6SXNzdWU2NDQwOTE5NzA=
304
Problem while printing doc string when instantiating multiple metrics.
{ "login": "codehunk628", "id": 51091425, "node_id": "MDQ6VXNlcjUxMDkxNDI1", "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codehunk628", "html_url": "https://github.com/codehunk628", "followers_url": "https://api.github.com/users/codehunk628/followers", "following_url": "https://api.github.com/users/codehunk628/following{/other_user}", "gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}", "starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions", "organizations_url": "https://api.github.com/users/codehunk628/orgs", "repos_url": "https://api.github.com/users/codehunk628/repos", "events_url": "https://api.github.com/users/codehunk628/events{/privacy}", "received_events_url": "https://api.github.com/users/codehunk628/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[]
"2020-06-23T19:32:05"
"2020-07-22T09:50:58"
"2020-07-22T09:50:58"
CONTRIBUTOR
null
When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy. Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook for problem clarification.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/304/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/303/comments
https://api.github.com/repos/huggingface/datasets/issues/303/events
https://github.com/huggingface/datasets/pull/303
643,912,464
MDExOlB1bGxSZXF1ZXN0NDM4NjI3Nzcw
303
allow to move files across file systems
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T14:56:08"
"2020-06-23T15:08:44"
"2020-06-23T15:08:43"
MEMBER
null
Users are allowed to use whatever `cache_dir` they want. Therefore it can happen that we try to move files across filesystems. We were using `os.rename`, which doesn't allow that, so I changed some of those calls to `shutil.move`. This should fix #301
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/303/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/303", "html_url": "https://github.com/huggingface/datasets/pull/303", "diff_url": "https://github.com/huggingface/datasets/pull/303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/303.patch", "merged_at": "2020-06-23T15:08:43" }
true
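A minimal sketch of the change described in #303 above: `os.rename` fails with `OSError: [Errno 18] Invalid cross-device link` when source and destination live on different filesystems, while `shutil.move` falls back to copy-and-delete. Both paths are placeholders.

```python
import shutil

src = "/home/user/.cache/huggingface/datasets/tmp_dataset_info.json"  # placeholder
dst = "/data/other_drive/wikipedia/dataset_info.json"                 # placeholder

# os.rename(src, dst) would raise EXDEV across filesystems;
# shutil.move copies and then deletes the source in that case.
shutil.move(src, dst)
```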
https://api.github.com/repos/huggingface/datasets/issues/302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/302/comments
https://api.github.com/repos/huggingface/datasets/issues/302/events
https://github.com/huggingface/datasets/issues/302
643,910,418
MDU6SXNzdWU2NDM5MTA0MTg=
302
Question - Sign Language Datasets
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
null
[]
null
[ "Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"plans\" addon.\r\n\r\nSame for sign language - if there is a dataset of videos, one addon can be to run OpenPose, another to run ARKit4 pose estimation, and another to run PoseNet, or even just a video embedding addon. (which are expensive to run individually for everyone who wants to use these data)\r\n\r\nThis is something I dabbled with my own implementation to a [research datasets library](https://github.com/AmitMY/meta-scholar/) and I love to get the discussion going on these topics.", "This is a really cool idea !\r\nThe example for data objects you gave for the RWTH-PHOENIX-Weather 2014 T dataset can totally fit inside the library.\r\n\r\nFor your point about formats like `ilex`, `eaf`, or `srt`, it is possible to use any library in your dataset script.\r\nHowever most user probably won't need these libraries, as most datasets don't need them, and therefore it's unlikely that we will have them in the minimum requirements to use `nlp` (we want to keep it as light-weight as possible). If a user wants to load your dataset and doesn't have the libraries you need, an error is raised asking the user to install them.\r\n\r\nMore generally, we plan to have something like a `requirements.txt` per dataset. This could also be a place for addons as you said. What do you think ?", "Thanks, Quentin, I think a `requirements.txt` per dataset will be a good thing.\r\nI will work on adding this dataset next week, and once we sort all of the kinks, I'll add more." ]
"2020-06-23T14:53:40"
"2020-11-25T11:25:33"
"2020-11-25T11:25:33"
CONTRIBUTOR
null
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable. The metrics for sign language to text translation are the same. So, what do you think about (me, or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/). For every item in the dataset, the data object includes: 1. video_path - path to mp4 file 2. pose_path - a path to `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? For example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/302/timeline
null
completed
null
null
false
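Purely to illustrate the record structure listed in #302 above, a hedged sketch of how such a schema might be declared with `nlp.Features`; the field names follow the issue text and do not correspond to any actual dataset script.

```python
import nlp

# Hypothetical feature schema mirroring the fields described in the issue.
features = nlp.Features(
    {
        "video_path": nlp.Value("string"),
        "pose_path": nlp.Value("string"),
        "openpose_path": nlp.Value("string"),
        "gloss": nlp.Value("string"),
        "text": nlp.Value("string"),
        "video_metadata": {
            "height": nlp.Value("int32"),
            "width": nlp.Value("int32"),
            "frames": nlp.Value("int32"),
            "framerate": nlp.Value("float32"),
        },
    }
)
```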
https://api.github.com/repos/huggingface/datasets/issues/301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/301/comments
https://api.github.com/repos/huggingface/datasets/issues/301/events
https://github.com/huggingface/datasets/issues/301
643,763,525
MDU6SXNzdWU2NDM3NjM1MjU=
301
Setting cache_dir gives error on wikipedia download
{ "login": "hallvagi", "id": 33862536, "node_id": "MDQ6VXNlcjMzODYyNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/33862536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hallvagi", "html_url": "https://github.com/hallvagi", "followers_url": "https://api.github.com/users/hallvagi/followers", "following_url": "https://api.github.com/users/hallvagi/following{/other_user}", "gists_url": "https://api.github.com/users/hallvagi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hallvagi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hallvagi/subscriptions", "organizations_url": "https://api.github.com/users/hallvagi/orgs", "repos_url": "https://api.github.com/users/hallvagi/repos", "events_url": "https://api.github.com/users/hallvagi/events{/privacy}", "received_events_url": "https://api.github.com/users/hallvagi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?", "Now it works, thanks!" ]
"2020-06-23T11:31:44"
"2020-06-24T07:05:07"
"2020-06-24T07:05:07"
NONE
null
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/301/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/300/comments
https://api.github.com/repos/huggingface/datasets/issues/300/events
https://github.com/huggingface/datasets/pull/300
643,688,304
MDExOlB1bGxSZXF1ZXN0NDM4NDQ4Mjk1
300
Fix bertscore references
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T09:38:59"
"2020-06-23T14:47:38"
"2020-06-23T14:47:37"
MEMBER
null
I added some type checking for metrics. There was an issue where a metric could interpret a string as a list. A `ValueError` is raised if a string is given instead of a list. Moreover, I added support for both strings and lists of strings for `references` in `bertscore`, as is the case in the original code. Both ways work: ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, [lg]) score = scorer.compute(lang="en") ``` ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` This should fix #295 and #238
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/300/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/300", "html_url": "https://github.com/huggingface/datasets/pull/300", "diff_url": "https://github.com/huggingface/datasets/pull/300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/300.patch", "merged_at": "2020-06-23T14:47:36" }
true
https://api.github.com/repos/huggingface/datasets/issues/299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/299/comments
https://api.github.com/repos/huggingface/datasets/issues/299/events
https://github.com/huggingface/datasets/pull/299
643,611,557
MDExOlB1bGxSZXF1ZXN0NDM4Mzg0NDgw
299
remove some print in snli file
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T07:46:06"
"2020-06-23T08:10:46"
"2020-06-23T08:10:44"
CONTRIBUTOR
null
This PR removes unwanted `print` statements in some files such as `snli.py`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/299/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/299", "html_url": "https://github.com/huggingface/datasets/pull/299", "diff_url": "https://github.com/huggingface/datasets/pull/299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/299.patch", "merged_at": "2020-06-23T08:10:44" }
true
https://api.github.com/repos/huggingface/datasets/issues/298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/298/comments
https://api.github.com/repos/huggingface/datasets/issues/298/events
https://github.com/huggingface/datasets/pull/298
643,603,804
MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4
298
Add searchable datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-23T07:33:03"
"2020-06-26T07:50:44"
"2020-06-26T07:50:43"
MEMBER
null
# Better support for Numpy format + Add Indexed Datasets I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib. ## Better support for Numpy format New features: - New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up) using Pandas. - Allow outputting Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays. Pandas offers fast zero-copy Numpy array conversion from Arrow structures. Using it we can speed up the reading of memory-mapped Numpy arrays stored in Arrow format. With these changes you can easily compute embeddings of texts using `.map()`. For example: ```python def embed(text): tokenized_example = tokenizer.encode(text, return_tensors="pt") embeddings = bert_encoder(tokenized_example).numpy() return embeddings dset_with_embeddings = dset.map(lambda example: {"embeddings": embed(example["text"])}) ``` And then reading the embeddings from the arrow format is very fast. PS1: Note that right now only 1d arrays are supported. PS2: It seems possible to do without pandas but it will require more _trickery_. PS3: I did a simple benchmark with google colab that you can view here: https://colab.research.google.com/drive/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing ## Add Indexed Datasets For many retrieval tasks it is convenient to index a dataset to be able to run fast queries. For example, for models like DPR, REALM, RAG etc. that are models for Open Domain QA, the retrieval step is very important. Therefore I added two ways to add an index to a column of a dataset: 1) You can index it using a Dense Index like Faiss. It is used to index vectors. Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. 2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity. Example of usage: ```python ds = nlp.load_dataset('crime_and_punish', split='train') ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])}) # `embed` outputs a `np.array` ds_with_embeddings.add_vector_index(column='embeddings') scores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10) ``` ```python ds = nlp.load_dataset('crime_and_punish', split='train') es_client = elasticsearch.Elasticsearch() ds.add_text_index(column='line', es_client=es_client, index_name="my_es_index") scores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10) ``` PS4: Faiss allows specifying many options for the [index](https://github.com/facebookresearch/faiss/wiki/The-index-factory) and for [GPU settings](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU). I made sure that the user has full control over those settings. ## Tests I added tests for Faiss, Elasticsearch and indexed datasets. I had to edit the CI config because all the test scripts were not being run by CircleCI. ------------------ I'd be really happy to have some feedback :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/298/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/298/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/298", "html_url": "https://github.com/huggingface/datasets/pull/298", "diff_url": "https://github.com/huggingface/datasets/pull/298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/298.patch", "merged_at": "2020-06-26T07:50:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/297/comments
https://api.github.com/repos/huggingface/datasets/issues/297/events
https://github.com/huggingface/datasets/issues/297
643,444,625
MDU6SXNzdWU2NDM0NDQ2MjU=
297
Error in Demo for Specific Datasets
{ "login": "s-jse", "id": 60150701, "node_id": "MDQ6VXNlcjYwMTUwNzAx", "avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-jse", "html_url": "https://github.com/s-jse", "followers_url": "https://api.github.com/users/s-jse/followers", "following_url": "https://api.github.com/users/s-jse/following{/other_user}", "gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-jse/subscriptions", "organizations_url": "https://api.github.com/users/s-jse/orgs", "repos_url": "https://api.github.com/users/s-jse/repos", "events_url": "https://api.github.com/users/s-jse/events{/privacy}", "received_events_url": "https://api.github.com/users/s-jse/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually have the resources to process NQ right now so we'll have to wait until we have a version that we've already processed on our google storage (that's what we've done for wikipedia for example).\r\n\r\nSecond, datasets like `newsroom` require manual downloads as we're not allowed to redistribute the data ourselves (if I'm not wrong). An error message should be displayed saying that we're not allowed to show the dataset.\r\n\r\nI can fix the first issue with the imports but for the second one I think we'll have to see with @srush to show a message for datasets that require manual downloads (it can be checked whether a dataset requires manual downloads if `dataset_builder_instance.manual_download_instructions is not None`).\r\n\r\n", "I added apache-beam to the viewer. We can think about how to add newsroom. ", "We don't plan to host the source files of newsroom ourselves for now.\r\nYou can still get the dataset if you follow the download instructions given by `dataset = load_dataset('newsroom')` though.\r\nThe viewer also shows the instructions now.\r\n\r\nClosing this one. If you have other questions, feel free to re-open :)" ]
"2020-06-23T00:38:42"
"2020-07-17T17:43:06"
"2020-07-17T17:43:06"
NONE
null
Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/297/timeline
null
completed
null
null
false
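An illustrative aside on issue #297 above (not part of the dataset record): the comments note that a dataset requiring a manual download can be detected via `manual_download_instructions`, so the viewer can show instructions instead of failing. A minimal sketch with the modern `datasets` API follows; `load_dataset_builder` and the `newsroom` name are assumptions here, since the issue predates the current library and recent releases may no longer ship that dataset script.

```python
# Minimal sketch, assuming the modern `datasets` API; the issue itself used the
# older `nlp` package, so treat this as an approximation of the viewer-side check.
from datasets import load_dataset_builder

builder = load_dataset_builder("newsroom")  # dataset name taken from the issue

if builder.manual_download_instructions is not None:
    # A viewer could display this message instead of trying to process the data.
    print("Manual download required:")
    print(builder.manual_download_instructions)
else:
    print("No manual download needed.")
```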
https://api.github.com/repos/huggingface/datasets/issues/296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/296/comments
https://api.github.com/repos/huggingface/datasets/issues/296/events
https://github.com/huggingface/datasets/issues/296
643,423,717
MDU6SXNzdWU2NDM0MjM3MTc=
296
snli -1 labels
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ", "Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?", "Yes the original dataset is missing some labels maybe @sleepinyourhat , @gangeli can correct me if I'm wrong \r\nFor my personal opinion at least if you want your model to learn to predict no answer (-1) you can leave it their but otherwise you can discard them. ", "thanks @mariamabarham :)" ]
"2020-06-22T23:33:30"
"2020-06-23T14:41:59"
"2020-06-23T14:41:58"
CONTRIBUTOR
null
I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/296/timeline
null
completed
null
null
false
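An illustrative aside on issue #296 above (not part of the dataset record): the comments explain that `-1` marks examples whose gold label is missing and suggest discarding them, so a minimal sketch of that filtering step is shown below. It assumes the current `datasets` package rather than the old `nlp` API used in the issue; the count of 785 unlabeled examples is taken from the issue body.

```python
# Minimal sketch, assuming the current `datasets` package: count the labels,
# then drop the unlabeled (-1) rows before training, as suggested in the comments.
from collections import Counter
from datasets import load_dataset

snli_train = load_dataset("snli", split="train")
print(Counter(snli_train["label"]))  # the issue reports 785 examples labeled -1

filtered = snli_train.filter(lambda example: example["label"] != -1)
print(len(snli_train), "->", len(filtered))  # only the labeled examples remain
```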
https://api.github.com/repos/huggingface/datasets/issues/295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/295/comments
https://api.github.com/repos/huggingface/datasets/issues/295/events
https://github.com/huggingface/datasets/issues/295
643,245,412
MDU6SXNzdWU2NDMyNDU0MTI=
295
Improve input warning for evaluation metrics
{ "login": "Tiiiger", "id": 19514537, "node_id": "MDQ6VXNlcjE5NTE0NTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tiiiger", "html_url": "https://github.com/Tiiiger", "followers_url": "https://api.github.com/users/Tiiiger/followers", "following_url": "https://api.github.com/users/Tiiiger/following{/other_user}", "gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions", "organizations_url": "https://api.github.com/users/Tiiiger/orgs", "repos_url": "https://api.github.com/users/Tiiiger/repos", "events_url": "https://api.github.com/users/Tiiiger/events{/privacy}", "received_events_url": "https://api.github.com/users/Tiiiger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-22T17:28:57"
"2020-06-23T14:47:37"
"2020-06-23T14:47:37"
NONE
null
Hi, I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes input. Here is a minimal example: ```python import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling ```python scorer.add(lp, [lg]) ``` I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/295/timeline
null
completed
null
null
false
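An illustrative aside on issue #295 above (not part of the dataset record): the author asks for simple type checking so that a bare string is not silently treated as a list of single-character references. A minimal sketch of such a guard is shown below; `check_references` is a hypothetical helper, not an actual method of the `nlp`/`datasets` metric API.

```python
# Minimal sketch of the input check proposed in the issue; `check_references`
# is a hypothetical helper, not part of the real nlp/datasets metric API.
def check_references(references):
    if isinstance(references, str):
        raise TypeError(
            "references must be a list of strings; wrap a single reference as [reference]"
        )
    if not all(isinstance(ref, str) for ref in references):
        raise TypeError("every reference must be a string")
    return references


check_references(["the quick brown fox"])   # passes
# check_references("the quick brown fox")   # raises TypeError instead of
#                                           # silently iterating over characters
```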
https://api.github.com/repos/huggingface/datasets/issues/294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/294/comments
https://api.github.com/repos/huggingface/datasets/issues/294/events
https://github.com/huggingface/datasets/issues/294
643,181,179
MDU6SXNzdWU2NDMxODExNzk=
294
Cannot load arxiv dataset on MacOS?
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?", "I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `/Users/johngiorgi/.cache/huggingface/datasets/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259/arxiv-dataset/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( 3.2@xmath595.5 mev/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.", "I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?", "Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) " ]
"2020-06-22T15:46:55"
"2020-06-30T15:25:10"
"2020-06-30T15:25:10"
CONTRIBUTOR
null
I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recent call last) <ipython-input-2-8e00c55d5a59> in <module> ----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv") ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 662 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) 666 writer.write(example) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1107 -> 1108 for obj in iterable: 1109 yield obj 1110 # Update and possibly print the progressbar. ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path) 114 # "section_names": list[str], list of section names. 115 # "sections": list[list[str]], list of sections (list of paragraphs) --> 116 d = json.loads(line) 117 summary = "\n".join(d["abstract_text"]) 118 # In original paper, <S> and </S> are not used in vocab during training ~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 346 parse_int is None and parse_float is None and 347 parse_constant is None and object_pairs_hook is None and not kw): --> 348 return _default_decoder.decode(s) 349 if cls is None: 350 cls = JSONDecoder ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w) 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx) 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982) 163502 examples [02:10, 2710.68 examples/s] ``` I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below: - Platform: Darwin-19.5.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) Any ideas?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/294/timeline
null
completed
null
null
false
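An illustrative aside on issue #294 above (not part of the dataset record): the comments track the failure down by catching `JSONDecodeError` line by line. A minimal, generic sketch of that debugging loop is shown below; the file path is a placeholder, and this is not the actual `scientific_papers.py` code.

```python
# Minimal sketch of the per-line debugging approach described in the comments;
# the path is a placeholder for the cached arxiv-dataset/train.txt file.
import json

def find_bad_json_lines(path):
    bad_lines = []
    with open(path, encoding="utf-8") as f:
        for line_number, line in enumerate(f, start=1):
            try:
                json.loads(line)
            except json.JSONDecodeError as err:
                bad_lines.append((line_number, str(err)))
    return bad_lines

# Example call (hypothetical path):
# print(find_bad_json_lines("arxiv-dataset/train.txt"))
```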
https://api.github.com/repos/huggingface/datasets/issues/293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/293/comments
https://api.github.com/repos/huggingface/datasets/issues/293/events
https://github.com/huggingface/datasets/pull/293
642,942,182
MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4
293
Don't test community datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-22T10:15:33"
"2020-06-22T11:07:00"
"2020-06-22T11:06:59"
MEMBER
null
This PR disables testing for community datasets on aws. It should fix the CI that is currently failing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/293/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/293", "html_url": "https://github.com/huggingface/datasets/pull/293", "diff_url": "https://github.com/huggingface/datasets/pull/293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/293.patch", "merged_at": "2020-06-22T11:06:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/292/comments
https://api.github.com/repos/huggingface/datasets/issues/292/events
https://github.com/huggingface/datasets/pull/292
642,897,797
MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2
292
Update metadata for x_stance dataset
{ "login": "jvamvas", "id": 5830820, "node_id": "MDQ6VXNlcjU4MzA4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jvamvas", "html_url": "https://github.com/jvamvas", "followers_url": "https://api.github.com/users/jvamvas/followers", "following_url": "https://api.github.com/users/jvamvas/following{/other_user}", "gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}", "starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions", "organizations_url": "https://api.github.com/users/jvamvas/orgs", "repos_url": "https://api.github.com/users/jvamvas/repos", "events_url": "https://api.github.com/users/jvamvas/events{/privacy}", "received_events_url": "https://api.github.com/users/jvamvas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-22T09:13:26"
"2020-06-23T08:07:24"
"2020-06-23T08:07:24"
CONTRIBUTOR
null
Thank you for featuring the x_stance dataset in your library. This PR updates some metadata: - Citation: Replace preprint with proceedings - URL: Use a URL with long-term availability
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/292/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/292", "html_url": "https://github.com/huggingface/datasets/pull/292", "diff_url": "https://github.com/huggingface/datasets/pull/292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/292.patch", "merged_at": "2020-06-23T08:07:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/291/comments
https://api.github.com/repos/huggingface/datasets/issues/291/events
https://github.com/huggingface/datasets/pull/291
642,688,450
MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy
291
break statement not required
{ "login": "mayurnewase", "id": 12967587, "node_id": "MDQ6VXNlcjEyOTY3NTg3", "avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayurnewase", "html_url": "https://github.com/mayurnewase", "followers_url": "https://api.github.com/users/mayurnewase/followers", "following_url": "https://api.github.com/users/mayurnewase/following{/other_user}", "gists_url": "https://api.github.com/users/mayurnewase/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayurnewase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayurnewase/subscriptions", "organizations_url": "https://api.github.com/users/mayurnewase/orgs", "repos_url": "https://api.github.com/users/mayurnewase/repos", "events_url": "https://api.github.com/users/mayurnewase/events{/privacy}", "received_events_url": "https://api.github.com/users/mayurnewase/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2020-06-22T01:40:55"
"2020-06-23T17:57:58"
"2020-06-23T09:37:02"
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/291/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/291", "html_url": "https://github.com/huggingface/datasets/pull/291", "diff_url": "https://github.com/huggingface/datasets/pull/291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/291.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/290/comments
https://api.github.com/repos/huggingface/datasets/issues/290/events
https://github.com/huggingface/datasets/issues/290
641,978,286
MDU6SXNzdWU2NDE5NzgyODY=
290
ConnectionError - Eli5 dataset download
{ "login": "JovanNj", "id": 8490096, "node_id": "MDQ6VXNlcjg0OTAwOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JovanNj", "html_url": "https://github.com/JovanNj", "followers_url": "https://api.github.com/users/JovanNj/followers", "following_url": "https://api.github.com/users/JovanNj/following{/other_user}", "gists_url": "https://api.github.com/users/JovanNj/gists{/gist_id}", "starred_url": "https://api.github.com/users/JovanNj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JovanNj/subscriptions", "organizations_url": "https://api.github.com/users/JovanNj/orgs", "repos_url": "https://api.github.com/users/JovanNj/repos", "events_url": "https://api.github.com/users/JovanNj/events{/privacy}", "received_events_url": "https://api.github.com/users/JovanNj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.", "It works now, thanks for prompt help!" ]
"2020-06-19T13:40:33"
"2020-06-20T13:22:24"
"2020-06-20T13:22:24"
NONE
null
Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate if you could help me with this issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/290/timeline
null
completed
null
null
false