| Column | Dtype | Values / range |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.05B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.27k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,637B |
| updated_at | int64 | 1,587B–1,637B |
| closed_at | int64 | 1,587B–1,637B βŒ€ |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| pull_request | dict | |
| body | stringlengths | 0–228k βŒ€ |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/129/comments
https://api.github.com/repos/huggingface/datasets/issues/129/events
https://github.com/huggingface/datasets/issues/129
618,997,725
MDU6SXNzdWU2MTg5OTc3MjU=
129
[Feature request] Add Google Natural Question dataset
{ "login": "elyase", "id": 1175888, "node_id": "MDQ6VXNlcjExNzU4ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elyase", "html_url": "https://github.com/elyase", "followers_url": "https://api.github.com/users/elyase/followers", "following_url": "https://api.github.com/users/elyase/following{/other_user}", "gists_url": "https://api.github.com/users/elyase/gists{/gist_id}", "starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elyase/subscriptions", "organizations_url": "https://api.github.com/users/elyase/orgs", "repos_url": "https://api.github.com/users/elyase/repos", "events_url": "https://api.github.com/users/elyase/events{/privacy}", "received_events_url": "https://api.github.com/users/elyase/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Indeed, I think this one is almost ready cc @lhoestq ", "I'm doing the latest adjustments to make the processing of the dataset run on Dataflow", "Is there an update to this? It will be very beneficial for the QA community!", "Still work in progress :)\r\nThe idea is to have the dataset already processed somewhere so that the user only have to download the processed files. I'm also doing it for wikipedia.", "Super appreciate your hard work !!\r\nI'll cross my fingers and hope easily loadable wikipedia dataset will come soon. ", "Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.\r\nHowever we had planned to change this conversion step anyways so we'll make just sure that it enables to process and convert the NQ dataset to arrow.", "NQ was added in #427 πŸŽ‰" ]
1,589,552,060,000
1,595,510,489,000
1,595,510,489,000
NONE
null
null
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/129/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/129/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/128/comments
https://api.github.com/repos/huggingface/datasets/issues/128/events
https://github.com/huggingface/datasets/issues/128
618,951,117
MDU6SXNzdWU2MTg5NTExMTc=
128
Some error inside nlp.load_dataset()
{ "login": "polkaYK", "id": 18486287, "node_id": "MDQ6VXNlcjE4NDg2Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/18486287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polkaYK", "html_url": "https://github.com/polkaYK", "followers_url": "https://api.github.com/users/polkaYK/followers", "following_url": "https://api.github.com/users/polkaYK/following{/other_user}", "gists_url": "https://api.github.com/users/polkaYK/gists{/gist_id}", "starred_url": "https://api.github.com/users/polkaYK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polkaYK/subscriptions", "organizations_url": "https://api.github.com/users/polkaYK/orgs", "repos_url": "https://api.github.com/users/polkaYK/repos", "events_url": "https://api.github.com/users/polkaYK/events{/privacy}", "received_events_url": "https://api.github.com/users/polkaYK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.", "Thanks for reply, worked fine!\r\n" ]
1,589,547,689,000
1,589,548,240,000
1,589,548,240,000
NONE
null
null
First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb) In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-d848d3a99b8c> in <module>() 1 # Downloading and loading a dataset 2 ----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]') 8 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 414 try: 415 # Prepare split will record examples associated to the split --> 416 self._prepare_split(split_generator, **prepare_split_kwargs) 417 except OSError: 418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 585 fname = "{}-{}.arrow".format(self.name, split_generator.name) 586 fpath = os.path.join(self._cache_dir, fname) --> 587 examples_type = self.info.features.type 588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size) 589 /usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self) 460 @property 461 def type(self): --> 462 return get_nested_type(self) 463 464 @classmethod /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 /usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 TypeError: list_() takes exactly one argument (2 given) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/128/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/128/timeline
null
false
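The two comments above give the fix (re-run the pip install cell, then restart the runtime). A minimal sketch that makes the failure mode explicit; the 0.16 minimum is an assumption, since the thread only says Colab's built-in pyarrow is too old:

```python
import pyarrow

# The `list_() takes exactly one argument (2 given)` traceback above is what
# happens when nlp calls pa.list_(value_type, size) against the pyarrow 0.14.x
# that Colab preinstalls; the two-argument form only exists in newer releases.
required = (0, 16)  # assumed minimum, not stated explicitly in the thread
installed = tuple(int(part) for part in pyarrow.__version__.split(".")[:2])
if installed < required:
    raise RuntimeError(
        f"pyarrow {pyarrow.__version__} is too old for nlp; "
        "re-run the pip install cell and restart the Colab runtime."
    )
```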
https://api.github.com/repos/huggingface/datasets/issues/127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/127/comments
https://api.github.com/repos/huggingface/datasets/issues/127/events
https://github.com/huggingface/datasets/pull/127
618,909,042
MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy
127
Update Overview.ipynb
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,543,208,000
1,589,543,247,000
1,589,543,245,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/127", "html_url": "https://github.com/huggingface/datasets/pull/127", "diff_url": "https://github.com/huggingface/datasets/pull/127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/127.patch" }
update notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/127/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/126/comments
https://api.github.com/repos/huggingface/datasets/issues/126/events
https://github.com/huggingface/datasets/pull/126
618,897,499
MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5
126
remove webis
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,541,920,000
1,589,542,284,000
1,589,542,226,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/126", "html_url": "https://github.com/huggingface/datasets/pull/126", "diff_url": "https://github.com/huggingface/datasets/pull/126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/126.patch" }
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/126/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/125/comments
https://api.github.com/repos/huggingface/datasets/issues/125/events
https://github.com/huggingface/datasets/pull/125
618,869,048
MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0
125
[Newsroom] add newsroom
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,538,874,000
1,589,539,027,000
1,589,539,022,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/125", "html_url": "https://github.com/huggingface/datasets/pull/125", "diff_url": "https://github.com/huggingface/datasets/pull/125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/125.patch" }
I checked it with the data link of the mail you forwarded @thomwolf => works well!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/125/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/124/comments
https://api.github.com/repos/huggingface/datasets/issues/124/events
https://github.com/huggingface/datasets/pull/124
618,864,284
MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx
124
Xsum, require manual download of some files
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,538,373,000
1,589,540,688,000
1,589,540,686,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/124", "html_url": "https://github.com/huggingface/datasets/pull/124", "diff_url": "https://github.com/huggingface/datasets/pull/124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/124.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/124/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/123/comments
https://api.github.com/repos/huggingface/datasets/issues/123/events
https://github.com/huggingface/datasets/pull/123
618,820,140
MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5
123
[Tests] Local => aws
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n\r\nNote: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.", "> For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> \r\n> Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n\r\nDoes it have to download the whole data to check if the checksums are correct? I guess so no? ", "> > For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> > Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n> \r\n> Does it have to download the whole data to check if the checksums are correct? I guess so no?\r\n\r\nYes it has to download them all (unless they were already downloaded in which case it just uses the cached downloaded files)." ]
1,589,533,945,000
1,589,537,172,000
1,589,537,006,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/123", "html_url": "https://github.com/huggingface/datasets/pull/123", "diff_url": "https://github.com/huggingface/datasets/pull/123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/123.patch" }
## Change default Test from local => aws As a default we set` aws=True`, `Local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset configs? d) _Most importantly_: Can we load the dataset? Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s. ### 2. RUN_LOCAL=1 RUN_AWS=0 ***This should be done when debugging dataset scripts of the ./datasets folder*** This only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory? ### 3. RUN_SLOW=1 We should set up to run these tests maybe 1 time per week ? @thomwolf The `slow` tests include two more important tests. e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work. f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/123/timeline
null
true
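A sketch of how env-var-gated test selection like the RUN_AWS / RUN_LOCAL / RUN_SLOW flags described in the PR body can be wired up with pytest; the helper and marker names are illustrative, not necessarily what the repository uses:

```python
import os
import pytest

def _flag(name: str, default: str = "0") -> bool:
    """True when the environment variable is set to '1'."""
    return os.environ.get(name, default) == "1"

# Defaults mirror the PR body: aws=True, local=False, slow=False.
aws = pytest.mark.skipif(not _flag("RUN_AWS", "1"), reason="RUN_AWS is not set")
local = pytest.mark.skipif(not _flag("RUN_LOCAL"), reason="RUN_LOCAL is not set")
slow = pytest.mark.skipif(not _flag("RUN_SLOW"), reason="RUN_SLOW is not set")

@aws
def test_load_first_config_from_aws():
    ...  # d) can the dataset (first config only) be loaded from AWS?
```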
https://api.github.com/repos/huggingface/datasets/issues/122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/122/comments
https://api.github.com/repos/huggingface/datasets/issues/122/events
https://github.com/huggingface/datasets/pull/122
618,813,182
MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3
122
Final cleanup of readme and metrics
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,533,252,000
1,630,698,009,000
1,589,533,342,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/122", "html_url": "https://github.com/huggingface/datasets/pull/122", "diff_url": "https://github.com/huggingface/datasets/pull/122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/122.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/122/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/121/comments
https://api.github.com/repos/huggingface/datasets/issues/121/events
https://github.com/huggingface/datasets/pull/121
618,790,040
MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx
121
make style
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,531,016,000
1,589,531,139,000
1,589,531,138,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/121", "html_url": "https://github.com/huggingface/datasets/pull/121", "diff_url": "https://github.com/huggingface/datasets/pull/121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/121.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/121/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/120/comments
https://api.github.com/repos/huggingface/datasets/issues/120/events
https://github.com/huggingface/datasets/issues/120
618,737,783
MDU6SXNzdWU2MTg3Mzc3ODM=
120
πŸ› `map` not working
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I didn't assign the output πŸ€¦β€β™‚οΈ\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```" ]
1,589,524,988,000
1,589,526,158,000
1,589,526,158,000
NONE
null
null
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): sample['title'] = "test prefix @@@ " + sample["title"] return sample print(dataset[0]['title']) dataset.map(test) print(dataset[0]['title']) ``` Output : > Super_Bowl_50 Super_Bowl_50 Expected output : > Super_Bowl_50 test prefix @@@ Super_Bowl_50
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/120/timeline
null
false
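Putting the issue body and the fix from the comment together, the corrected snippet reads:

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

def test(sample):
    sample['title'] = "test prefix @@@ " + sample["title"]
    return sample

# `map` is not in-place: it returns a new dataset that must be re-assigned.
dataset = dataset.map(test)
print(dataset[0]['title'])  # -> test prefix @@@ Super_Bowl_50
```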
https://api.github.com/repos/huggingface/datasets/issues/119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/119/comments
https://api.github.com/repos/huggingface/datasets/issues/119/events
https://github.com/huggingface/datasets/issues/119
618,652,145
MDU6SXNzdWU2MTg2NTIxNDU=
119
πŸ› Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: /usr/local/lib/python3.6/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1", "Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine." ]
1,589,509,646,000
1,589,519,482,000
1,589,510,728,000
NONE
null
null
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/119/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/118/comments
https://api.github.com/repos/huggingface/datasets/issues/118/events
https://github.com/huggingface/datasets/issues/118
618,643,088
MDU6SXNzdWU2MTg2NDMwODg=
118
❓ How to apply a map to all subsets ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "That's the way!" ]
1,589,507,932,000
1,589,526,349,000
1,589,526,265,000
NONE
null
null
I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`. Should I apply my map function on the subsets one by one ? ```python import nlp cnn_dm = nlp.load_dataset('cnn_dailymail') for corpus in ['train', 'test', 'validation']: cnn_dm[corpus] = cnn_dm[corpus].map(my_func) ``` Or is there a better way to do this ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/118/timeline
null
false
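The loop from the question, confirmed above as the intended pattern, can also be driven by whatever splits the loaded object actually contains; this assumes the returned value is dict-like, which the original snippet already relies on:

```python
import nlp

def my_func(sample):
    return sample  # placeholder for the mapping function in the question

cnn_dm = nlp.load_dataset('cnn_dailymail')
cnn_dm = {split: ds.map(my_func) for split, ds in cnn_dm.items()}
```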
https://api.github.com/repos/huggingface/datasets/issues/117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/117/comments
https://api.github.com/repos/huggingface/datasets/issues/117/events
https://github.com/huggingface/datasets/issues/117
618,632,573
MDU6SXNzdWU2MTg2MzI1NzM=
117
❓ How to remove specific rows of a dataset ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, you can't do that at the moment." ]
1,589,505,906,000
1,620,964,939,000
1,589,526,272,000
NONE
null
null
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column : ```python dataset.drop('id') ``` But I didn't find how to remove a specific row. **For example, how can I remove all sample with `id` < 10 ?**
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/117/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/117/timeline
null
false
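The reply above was accurate at the time; later releases of the library added `Dataset.filter`, which covers row removal. A sketch assuming such a version (SQuAD ids are strings, so the predicate filters on question length rather than the numeric `id < 10` from the question):

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')
# Keep only the rows for which the predicate returns True.
dataset = dataset.filter(lambda sample: len(sample['question']) > 10)
```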
https://api.github.com/repos/huggingface/datasets/issues/116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/116/comments
https://api.github.com/repos/huggingface/datasets/issues/116/events
https://github.com/huggingface/datasets/issues/116
618,628,264
MDU6SXNzdWU2MTg2MjgyNjQ=
116
πŸ› Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[ "Can you share your data files or a minimally reproducible example?", "Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56", "This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change", "Thanks for noticing though. I was mainly used to do `.compute` directly ^^", "Thanks @lhoestq it works :)" ]
1,589,505,126,000
1,590,709,387,000
1,590,709,387,000
NONE
null
null
I'm trying to use rouge metric. I have to files : `test.pred.tokenized` and `test.gold.tokenized` with each line containing a sentence. I tried : ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I meet following error : > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack-trace : ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/116/timeline
null
false
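Following the maintainer's explanation, `add` takes a single prediction/reference pair and `add_batch` takes batches. A sketch of the fixed loop; the keyword argument names are assumptions based on the thread, not verified against this exact version:

```python
import nlp

rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        rouge.add(prediction=lp, reference=lg)  # one pair per call
score = rouge.compute()
```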
https://api.github.com/repos/huggingface/datasets/issues/115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/115/comments
https://api.github.com/repos/huggingface/datasets/issues/115/events
https://github.com/huggingface/datasets/issues/115
618,615,855
MDU6SXNzdWU2MTg2MTU4NTU=
115
AttributeError: 'dict' object has no attribute 'info'
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\"].info == cnn_dm[\"validation\"].info)\r\n# True\r\n```\r\n\r\nIs it expected ?", "Good point @Colanim ! What happens under the hood when running:\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\nis that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable in a training setup. \r\nAlso note that you can load e.g. only the `train` split of the dataset via:\r\n\r\n```python\r\ncnn_dm_train = nlp.load_dataset('cnn_dailymail', split=\"train\")\r\nprint(cnn_dm_train.info)\r\n```\r\n\r\nI think we should make the `info` object slightly different when creating the dataset for each split - at the moment it contains for example the variable `splits` which should maybe be renamed to `split` and contain only one `SplitInfo` object ...\r\n" ]
1,589,502,587,000
1,589,721,060,000
1,589,721,060,000
NONE
null
null
I'm trying to access the information of CNN/DM dataset : ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` returns : > AttributeError: 'dict' object has no attribute 'info'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/115/timeline
null
false
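The thread's two workarounds side by side: `load_dataset` without `split` returns a dict of split datasets, so `info` lives on each split (and is currently duplicated across them):

```python
import nlp

# Access info through a split of the returned dict:
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm['train'].info)

# Or load a single split directly, as the maintainer suggests:
cnn_dm_train = nlp.load_dataset('cnn_dailymail', split='train')
print(cnn_dm_train.info)
```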
https://api.github.com/repos/huggingface/datasets/issues/114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/114/comments
https://api.github.com/repos/huggingface/datasets/issues/114/events
https://github.com/huggingface/datasets/issues/114
618,611,310
MDU6SXNzdWU2MTg2MTEzMTA=
114
Couldn't reach CNN/DM dataset
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Installing from source (instead of Pypi package) solved the problem." ]
1,589,501,777,000
1,589,501,992,000
1,589,501,991,000
NONE
null
null
I can't get CNN / DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives following error : ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/114/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/113/comments
https://api.github.com/repos/huggingface/datasets/issues/113/events
https://github.com/huggingface/datasets/pull/113
618,590,562
MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx
113
Adding docstrings and some doc
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,498,081,000
1,589,498,565,000
1,589,498,564,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/113", "html_url": "https://github.com/huggingface/datasets/pull/113", "diff_url": "https://github.com/huggingface/datasets/pull/113.diff", "patch_url": "https://github.com/huggingface/datasets/pull/113.patch" }
Some doc
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/113/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/112/comments
https://api.github.com/repos/huggingface/datasets/issues/112/events
https://github.com/huggingface/datasets/pull/112
618,569,195
MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4
112
Qa4mre - add dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,494,671,000
1,589,534,203,000
1,589,534,202,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/112", "html_url": "https://github.com/huggingface/datasets/pull/112", "diff_url": "https://github.com/huggingface/datasets/pull/112.diff", "patch_url": "https://github.com/huggingface/datasets/pull/112.patch" }
Added dummy data test only for the first config. Will do the rest later. I had to do add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/112/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/111/comments
https://api.github.com/repos/huggingface/datasets/issues/111/events
https://github.com/huggingface/datasets/pull/111
618,528,060
MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy
111
[Clean-up] remove under construction datastes
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,489,533,000
1,589,489,543,000
1,589,489,542,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/111", "html_url": "https://github.com/huggingface/datasets/pull/111", "diff_url": "https://github.com/huggingface/datasets/pull/111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/111.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/111/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/110/comments
https://api.github.com/repos/huggingface/datasets/issues/110/events
https://github.com/huggingface/datasets/pull/110
618,520,325
MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy
110
fix reddit tifu dummy data
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,488,657,000
1,589,488,814,000
1,589,488,813,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/110", "html_url": "https://github.com/huggingface/datasets/pull/110", "diff_url": "https://github.com/huggingface/datasets/pull/110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/110.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/110/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/109/comments
https://api.github.com/repos/huggingface/datasets/issues/109/events
https://github.com/huggingface/datasets/pull/109
618,508,359
MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw
109
[Reclor] fix reclor
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,487,386,000
1,589,487,549,000
1,589,487,548,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/109", "html_url": "https://github.com/huggingface/datasets/pull/109", "diff_url": "https://github.com/huggingface/datasets/pull/109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/109.patch" }
- That's probably on me. Could have made the manual data test more flexible. @mariamabarham
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/109/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/108/comments
https://api.github.com/repos/huggingface/datasets/issues/108/events
https://github.com/huggingface/datasets/pull/108
618,386,394
MDExOlB1bGxSZXF1ZXN0NDE4MTIzMzc3
108
convert can use manual dir as second argument
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,475,152,000
1,589,475,163,000
1,589,475,162,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/108", "html_url": "https://github.com/huggingface/datasets/pull/108", "diff_url": "https://github.com/huggingface/datasets/pull/108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/108.patch" }
@mariamabarham
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/108/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/107/comments
https://api.github.com/repos/huggingface/datasets/issues/107/events
https://github.com/huggingface/datasets/pull/107
618,373,045
MDExOlB1bGxSZXF1ZXN0NDE4MTEyNzcx
107
add writer_batch_size to GeneratorBasedBuilder
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome that's great!" ]
1,589,474,139,000
1,589,475,030,000
1,589,475,029,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/107", "html_url": "https://github.com/huggingface/datasets/pull/107", "diff_url": "https://github.com/huggingface/datasets/pull/107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/107.patch" }
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`.
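A minimal sketch of how this could look from the user side, assuming the `nlp` API of the time; the dataset name and batch size are illustrative:

```python
import nlp

# writer_batch_size is forwarded to the builder: it sets how many examples are
# buffered in memory before each write to the Arrow file. Smaller values lower
# peak memory usage at the cost of more frequent writes.
dataset = nlp.load_dataset("squad", writer_batch_size=1000)
```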
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/107/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/106/comments
https://api.github.com/repos/huggingface/datasets/issues/106/events
https://github.com/huggingface/datasets/pull/106
618,361,418
MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3
106
Add data dir test command
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice - I think we can merge this. I will update the checksums for `wikihow` then as well" ]
1,589,473,119,000
1,589,474,951,000
1,589,474,950,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/106", "html_url": "https://github.com/huggingface/datasets/pull/106", "diff_url": "https://github.com/huggingface/datasets/pull/106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/106.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/106/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/105/comments
https://api.github.com/repos/huggingface/datasets/issues/105/events
https://github.com/huggingface/datasets/pull/105
618,345,191
MDExOlB1bGxSZXF1ZXN0NDE4MDg5Njgz
105
[New structure on AWS] Adapt paths
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,471,757,000
1,589,471,788,000
1,589,471,787,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/105", "html_url": "https://github.com/huggingface/datasets/pull/105", "diff_url": "https://github.com/huggingface/datasets/pull/105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/105.patch" }
Some small changes so that we have the correct paths. @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/105/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/104/comments
https://api.github.com/repos/huggingface/datasets/issues/104/events
https://github.com/huggingface/datasets/pull/104
618,277,081
MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0
104
Add trivia_q
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,466,439,000
1,594,532,060,000
1,589,487,812,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/104", "html_url": "https://github.com/huggingface/datasets/pull/104", "diff_url": "https://github.com/huggingface/datasets/pull/104.diff", "patch_url": "https://github.com/huggingface/datasets/pull/104.patch" }
Currently tested only for one config to pass tests. More dummy data needs to be added later.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/104/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/103/comments
https://api.github.com/repos/huggingface/datasets/issues/103/events
https://github.com/huggingface/datasets/pull/103
618,233,637
MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy
103
[Manual downloads] add logic proposal for manual downloads and add wikihow
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> ```\r\n> \r\n> I added/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir/wikihow`? ", "> > Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. `nlp.load_dataset(\"wikihow\", data_dir=\"~/manual_dir/wikihow\")` would work as well as any other path ;-) ", "Perfect! You can merge!" ]
1,589,463,036,000
1,589,466,461,000
1,589,466,460,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/103", "html_url": "https://github.com/huggingface/datasets/pull/103", "diff_url": "https://github.com/huggingface/datasets/pull/103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/103.patch" }
Wikihow is an example of a dataset that requires manually downloading two files, as stated in https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under hard-coded names (`wikihowAll.csv` and `wikihowSep.csv` in this case) in a directory of their choice, e.g. `~/wikihow/manual_dir`. The dataset can then be loaded via: ```python import nlp nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir") ``` I added/changed the logic so that there are explicit error messages when using manually downloaded files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/103/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/102/comments
https://api.github.com/repos/huggingface/datasets/issues/102/events
https://github.com/huggingface/datasets/pull/102
618,231,216
MDExOlB1bGxSZXF1ZXN0NDE3OTk3MDQz
102
Run save infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ", "Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```" ]
1,589,462,846,000
1,589,470,984,000
1,589,470,983,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/102", "html_url": "https://github.com/huggingface/datasets/pull/102", "diff_url": "https://github.com/huggingface/datasets/pull/102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/102.patch" }
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the Cornell dialog dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/102/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/101/comments
https://api.github.com/repos/huggingface/datasets/issues/101/events
https://github.com/huggingface/datasets/pull/101
618,111,651
MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2
101
[Reddit] add reddit
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,451,902,000
1,589,452,045,000
1,589,452,044,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/101", "html_url": "https://github.com/huggingface/datasets/pull/101", "diff_url": "https://github.com/huggingface/datasets/pull/101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/101.patch" }
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/101/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/100/comments
https://api.github.com/repos/huggingface/datasets/issues/100/events
https://github.com/huggingface/datasets/pull/100
618,081,602
MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2
100
Add per type scores in seqeval metric
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "LGTM :-) Some small suggestions to shorten the code a bit :-) ", "Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)", "@thom Is-it what you meant?", "Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION" ]
1,589,449,072,000
1,589,498,495,000
1,589,498,494,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/100", "html_url": "https://github.com/huggingface/datasets/pull/100", "diff_url": "https://github.com/huggingface/datasets/pull/100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/100.patch" }
This PR adds a bit more detail to the seqeval metric. Now the usage and output are: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] met.compute(predictions, references) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ``` It is also possible to compute scores for notations other than IOB; POS tagging, for example, does not use this kind of notation. Add the `suffix` parameter: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] met.compute(predictions, references, metrics_kwargs={"suffix": True}) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/100/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/99
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/99/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/99/comments
https://api.github.com/repos/huggingface/datasets/issues/99/events
https://github.com/huggingface/datasets/pull/99
618,026,700
MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky
99
[Cmrc 2018] fix cmrc2018
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,444,523,000
1,589,446,182,000
1,589,446,181,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/99", "html_url": "https://github.com/huggingface/datasets/pull/99", "diff_url": "https://github.com/huggingface/datasets/pull/99.diff", "patch_url": "https://github.com/huggingface/datasets/pull/99.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/99/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/99/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/98
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/98/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/98/comments
https://api.github.com/repos/huggingface/datasets/issues/98/events
https://github.com/huggingface/datasets/pull/98
617,957,739
MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy
98
Webis tl-dr
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?", "> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!", "@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ", "There is dummy_data here, no ?", "Yeah I think naming it webis/tl_dr would be best @jplu if that works for you", "No problem at all!! On it^^", "> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !", "> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)", "@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/webis/tl_dr/1.0.0...\r\n```", "Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n", "Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!", "Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https://huggingface.co/datasets/webis/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https://colab.research.google.com/drive/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!" ]
1,589,437,338,000
1,599,127,221,000
1,589,489,656,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/98", "html_url": "https://github.com/huggingface/datasets/pull/98", "diff_url": "https://github.com/huggingface/datasets/pull/98.diff", "patch_url": "https://github.com/huggingface/datasets/pull/98.patch" }
Add the Webis TL;DR dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/98/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/98/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/97
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/97/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/97/comments
https://api.github.com/repos/huggingface/datasets/issues/97/events
https://github.com/huggingface/datasets/pull/97
617,809,431
MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy
97
[Csv] add tests for csv dataset script
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@thomwolf - can you check and merge if ok? " ]
1,589,411,171,000
1,589,412,196,000
1,589,412,195,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/97", "html_url": "https://github.com/huggingface/datasets/pull/97", "diff_url": "https://github.com/huggingface/datasets/pull/97.diff", "patch_url": "https://github.com/huggingface/datasets/pull/97.patch" }
Adds dummy data tests for CSV.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/97/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/97/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/96
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/96/comments
https://api.github.com/repos/huggingface/datasets/issues/96/events
https://github.com/huggingface/datasets/pull/96
617,739,521
MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4
96
lm1b
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..." ]
1,589,402,324,000
1,589,465,610,000
1,589,465,609,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/96", "html_url": "https://github.com/huggingface/datasets/pull/96", "diff_url": "https://github.com/huggingface/datasets/pull/96.diff", "patch_url": "https://github.com/huggingface/datasets/pull/96.patch" }
Add lm1b dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/96/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/95
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/95/comments
https://api.github.com/repos/huggingface/datasets/issues/95/events
https://github.com/huggingface/datasets/pull/95
617,703,037
MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4
95
Replace checksums files by Dataset infos json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Great! LGTM :-) ", "> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? " ]
1,589,398,576,000
1,589,446,723,000
1,589,446,722,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/95", "html_url": "https://github.com/huggingface/datasets/pull/95", "diff_url": "https://github.com/huggingface/datasets/pull/95.diff", "patch_url": "https://github.com/huggingface/datasets/pull/95.patch" }
### Better verifications when loading a dataset I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt` with a single file, `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, already having access to `DatasetInfo` makes it possible to check disk space before running `download_and_prepare` for a given config. The dataset infos json file is user-readable; you can take a look at the squad one that I generated in this PR. ### Renaming In line with these changes, I did some renaming: `save_checksums` -> `save_infos` `ignore_checksums` -> `ignore_verifications` For example, when you are creating a dataset, you now have to run ```nlp-cli test path/to/my/dataset --save_infos --all_configs``` instead of ```nlp-cli test path/to/my/dataset --save_checksums --all_configs``` ### And now, the fun part We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` command for all the datasets. ----------------- Feedback appreciated!
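On the loading side, a minimal sketch of the renamed flag, assuming the `nlp` API of the time; the dataset name is illustrative:

```python
import nlp

# Verifications (checksums and split sizes stored in dataset_infos.json) run by
# default; skipping them can be useful while iterating on a dataset script.
dataset = nlp.load_dataset("squad", ignore_verifications=True)
```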
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/95/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/95/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/94
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/94/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/94/comments
https://api.github.com/repos/huggingface/datasets/issues/94/events
https://github.com/huggingface/datasets/pull/94
617,571,340
MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw
94
Librispeech
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D " ]
1,589,385,854,000
1,589,405,343,000
1,589,405,342,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/94", "html_url": "https://github.com/huggingface/datasets/pull/94", "diff_url": "https://github.com/huggingface/datasets/pull/94.diff", "patch_url": "https://github.com/huggingface/datasets/pull/94.patch" }
Add librispeech dataset and remove some useless content.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/94/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/94/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/93
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/93/comments
https://api.github.com/repos/huggingface/datasets/issues/93/events
https://github.com/huggingface/datasets/pull/93
617,522,029
MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy
93
Cleanup notebooks and various fixes
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,381,938,000
1,589,382,108,000
1,589,382,107,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/93", "html_url": "https://github.com/huggingface/datasets/pull/93", "diff_url": "https://github.com/huggingface/datasets/pull/93.diff", "patch_url": "https://github.com/huggingface/datasets/pull/93.patch" }
Fixes on datasets (more flexible), metrics (fix) and general clean-ups
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/93/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/93/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/92
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/92/comments
https://api.github.com/repos/huggingface/datasets/issues/92/events
https://github.com/huggingface/datasets/pull/92
617,341,505
MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky
92
[WIP] add wmt14
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,366,523,000
1,589,627,858,000
1,589,627,857,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/92", "html_url": "https://github.com/huggingface/datasets/pull/92", "diff_url": "https://github.com/huggingface/datasets/pull/92.diff", "patch_url": "https://github.com/huggingface/datasets/pull/92.patch" }
WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/92/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/92/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/91
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/91/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/91/comments
https://api.github.com/repos/huggingface/datasets/issues/91/events
https://github.com/huggingface/datasets/pull/91
617,339,484
MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0
91
[Paracrawl] add paracrawl
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,366,340,000
1,589,366,415,000
1,589,366,414,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/91", "html_url": "https://github.com/huggingface/datasets/pull/91", "diff_url": "https://github.com/huggingface/datasets/pull/91.diff", "patch_url": "https://github.com/huggingface/datasets/pull/91.patch" }
- Huge dataset - took ~1h to download - Also this PR reformats all dataset scripts and adds `datasets` to `make style`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/91/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/91/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/90
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/90/comments
https://api.github.com/repos/huggingface/datasets/issues/90/events
https://github.com/huggingface/datasets/pull/90
617,311,877
MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0
90
Add download gg drive
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "awesome - so no manual downloaded needed here? ", "Yes exactly. It works like a standard download" ]
1,589,363,762,000
1,589,373,988,000
1,589,364,331,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/90", "html_url": "https://github.com/huggingface/datasets/pull/90", "diff_url": "https://github.com/huggingface/datasets/pull/90.diff", "patch_url": "https://github.com/huggingface/datasets/pull/90.patch" }
We can now add datasets that download from Google Drive.
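A hypothetical sketch of what a dataset script could do after this change; the builder name, file id and URL pattern are placeholders, not taken from the PR:

```python
import nlp

# FILE_ID is a placeholder for a real Google Drive file id.
_DRIVE_URL = "https://drive.google.com/uc?export=download&id=FILE_ID"

class MyDriveDataset(nlp.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # After this PR, the download manager can fetch Google Drive URLs
        # like any other remote file.
        archive_path = dl_manager.download_and_extract(_DRIVE_URL)
        return [
            nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"path": archive_path}),
        ]
```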
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/90/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/89
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/89/comments
https://api.github.com/repos/huggingface/datasets/issues/89/events
https://github.com/huggingface/datasets/pull/89
617,295,069
MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4
89
Add list and inspect methods - cleanup hf_api
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,362,215,000
1,589,378,700,000
1,589,362,390,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/89", "html_url": "https://github.com/huggingface/datasets/pull/89", "diff_url": "https://github.com/huggingface/datasets/pull/89.diff", "patch_url": "https://github.com/huggingface/datasets/pull/89.patch" }
Add a bunch of methods to easily list and inspect the processing scripts uploaded to S3: ```python nlp.list_datasets() nlp.list_metrics() # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_dataset(path, local_path) # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_metric(path, local_path) ``` Also clean up the `HfAPI` to use `dataclasses` for a better user experience.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/89/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/88
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/88/comments
https://api.github.com/repos/huggingface/datasets/issues/88/events
https://github.com/huggingface/datasets/pull/88
617,284,664
MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw
88
Add wiki40b
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) " ]
1,589,361,361,000
1,589,373,115,000
1,589,373,114,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/88", "html_url": "https://github.com/huggingface/datasets/pull/88", "diff_url": "https://github.com/huggingface/datasets/pull/88.diff", "patch_url": "https://github.com/huggingface/datasets/pull/88.patch" }
This one is a Beam dataset that downloads files using TensorFlow. I tested it on a small config and it works fine.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/88/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/87
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/87/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/87/comments
https://api.github.com/repos/huggingface/datasets/issues/87/events
https://github.com/huggingface/datasets/pull/87
617,267,118
MDExOlB1bGxSZXF1ZXN0NDE3MjE1NzA0
87
Add Flores
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,359,889,000
1,589,361,814,000
1,589,361,813,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/87", "html_url": "https://github.com/huggingface/datasets/pull/87", "diff_url": "https://github.com/huggingface/datasets/pull/87.diff", "patch_url": "https://github.com/huggingface/datasets/pull/87.patch" }
Beautiful language for sure!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/87/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/87/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/86
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/86/comments
https://api.github.com/repos/huggingface/datasets/issues/86/events
https://github.com/huggingface/datasets/pull/86
617,260,972
MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2
86
[Load => load_dataset] change naming
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,359,380,000
1,589,359,858,000
1,589,359,857,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/86", "html_url": "https://github.com/huggingface/datasets/pull/86", "diff_url": "https://github.com/huggingface/datasets/pull/86.diff", "patch_url": "https://github.com/huggingface/datasets/pull/86.patch" }
Rename leftovers @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/86/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/86/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/85
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/85/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/85/comments
https://api.github.com/repos/huggingface/datasets/issues/85/events
https://github.com/huggingface/datasets/pull/85
617,253,428
MDExOlB1bGxSZXF1ZXN0NDE3MjA0ODA4
85
Add boolq
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome :-) Thanks for adding the function to the Mock DL Manager" ]
1,589,358,747,000
1,589,360,979,000
1,589,360,978,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/85", "html_url": "https://github.com/huggingface/datasets/pull/85", "diff_url": "https://github.com/huggingface/datasets/pull/85.diff", "patch_url": "https://github.com/huggingface/datasets/pull/85.patch" }
I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, so I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for the tests.
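For context, a minimal sketch of downloading a file with `tf.io.gfile.copy`, the mechanism this script relies on; both paths are illustrative placeholders, not the script's real URLs:

```python
import tensorflow as tf

# Copy a (possibly remote, e.g. gs://) file to a local path.
# Both paths here are illustrative placeholders.
tf.io.gfile.copy(
    "gs://some-bucket/boolq/train.jsonl",
    "/tmp/boolq_train.jsonl",
    overwrite=True,
)
```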
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/85/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/85/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/84
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/84/comments
https://api.github.com/repos/huggingface/datasets/issues/84/events
https://github.com/huggingface/datasets/pull/84
617,249,815
MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz
84
[TedHrLr] add left dummy data
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,358,440,000
1,589,358,562,000
1,589,358,561,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/84", "html_url": "https://github.com/huggingface/datasets/pull/84", "diff_url": "https://github.com/huggingface/datasets/pull/84.diff", "patch_url": "https://github.com/huggingface/datasets/pull/84.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/84/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/84/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/83
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/83/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/83/comments
https://api.github.com/repos/huggingface/datasets/issues/83/events
https://github.com/huggingface/datasets/pull/83
616,863,601
MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz
83
New datasets
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,307,747,000
1,589,307,767,000
1,589,307,765,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/83", "html_url": "https://github.com/huggingface/datasets/pull/83", "diff_url": "https://github.com/huggingface/datasets/pull/83.diff", "patch_url": "https://github.com/huggingface/datasets/pull/83.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/83/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/83/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/82
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/82/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/82/comments
https://api.github.com/repos/huggingface/datasets/issues/82/events
https://github.com/huggingface/datasets/pull/82
616,805,194
MDExOlB1bGxSZXF1ZXN0NDE2ODQ1Njc5
82
[Datasets] add ted_hrlr
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,302,010,000
1,589,356,374,000
1,589,356,373,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/82", "html_url": "https://github.com/huggingface/datasets/pull/82", "diff_url": "https://github.com/huggingface/datasets/pull/82.diff", "patch_url": "https://github.com/huggingface/datasets/pull/82.patch" }
@thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework. The result looks like this: ![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png) You can see that each split has a `translation` key whose value is the nlp.features.Translation object. That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way.
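A minimal sketch of declaring such a `translation` key, assuming the `nlp.Features`/`nlp.features.Translation` API; the language pair is an illustrative choice, not from the PR:

```python
import nlp

# A features declaration with a `translation` key, as described above.
# The language pair ("pt", "en") is illustrative.
features = nlp.Features(
    {
        "translation": nlp.features.Translation(languages=["pt", "en"]),
    }
)
```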
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/82/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/82/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/81
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/81/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/81/comments
https://api.github.com/repos/huggingface/datasets/issues/81/events
https://github.com/huggingface/datasets/pull/81
616,793,010
MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1
81
add tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,300,899,000
1,589,355,837,000
1,589,355,836,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/81", "html_url": "https://github.com/huggingface/datasets/pull/81", "diff_url": "https://github.com/huggingface/datasets/pull/81.diff", "patch_url": "https://github.com/huggingface/datasets/pull/81.patch" }
Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/81/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/81/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/80
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/80/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/80/comments
https://api.github.com/repos/huggingface/datasets/issues/80/events
https://github.com/huggingface/datasets/pull/80
616,786,803
MDExOlB1bGxSZXF1ZXN0NDE2ODMwNjk3
80
Add nbytes + nexamples check
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? " ]
1,589,300,323,000
1,589,356,354,000
1,589,356,353,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/80", "html_url": "https://github.com/huggingface/datasets/pull/80", "diff_url": "https://github.com/huggingface/datasets/pull/80.diff", "patch_url": "https://github.com/huggingface/datasets/pull/80.patch" }
### Save size and number of examples

Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file. This new file stores the byte sizes and the number of examples of each split that has been prepared and stored in the cache. Example:

```
# Cached sizes: <full_config_name> <num_bytes> <num_examples>
hansards/house/1.0.0/test 22906629 122290
hansards/house/1.0.0/train 191459584 947969
hansards/senate/1.0.0/test 5711686 25553
hansards/senate/1.0.0/train 40324278 182135
```

### Check processing output

If there is a `cached_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen.

### Fill Dataset Info

All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare`.

### Check space on disk before running `download_and_prepare`

Check if the available space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_sizes.txt`. This is not ideal though, as it considers the files for all configs.

TODO: A better way to do it would be to save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It could also be the occasion to factorize all the `download_and_prepare` verifications. Maybe next PR?
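A minimal sketch of how the `cached_sizes.txt` format above could be parsed; the parsing logic is an assumption based only on the example shown, not on the PR's actual code, and the file path is a placeholder:

```python
def parse_cached_sizes(path):
    """Parse lines of `<full_config_name> <num_bytes> <num_examples>`."""
    sizes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip header/comments
                continue
            config_name, num_bytes, num_examples = line.split()
            sizes[config_name] = (int(num_bytes), int(num_examples))
    return sizes

# Example (path is illustrative):
# sizes = parse_cached_sizes("cached_sizes.txt")
```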
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/80/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/80/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/79
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/79/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/79/comments
https://api.github.com/repos/huggingface/datasets/issues/79/events
https://github.com/huggingface/datasets/pull/79
616,785,613
MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy
79
[Convert] add new pattern
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,300,211,000
1,589,300,230,000
1,589,300,229,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/79", "html_url": "https://github.com/huggingface/datasets/pull/79", "diff_url": "https://github.com/huggingface/datasets/pull/79.diff", "patch_url": "https://github.com/huggingface/datasets/pull/79.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/79/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/79/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/78
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/78/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/78/comments
https://api.github.com/repos/huggingface/datasets/issues/78/events
https://github.com/huggingface/datasets/pull/78
616,774,275
MDExOlB1bGxSZXF1ZXN0NDE2ODIwNzU5
78
[Tests] skip beam dataset tests for now
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq - I moved the wkipedia file to the \"correct\" folder. ", "Nice thanks !" ]
1,589,299,258,000
1,589,300,184,000
1,589,300,182,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/78", "html_url": "https://github.com/huggingface/datasets/pull/78", "diff_url": "https://github.com/huggingface/datasets/pull/78.diff", "patch_url": "https://github.com/huggingface/datasets/pull/78.patch" }
For now we will skip tests for Beam Datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/78/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/78/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/77
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/77/comments
https://api.github.com/repos/huggingface/datasets/issues/77/events
https://github.com/huggingface/datasets/pull/77
616,674,601
MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz
77
New datasets
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,291,519,000
1,589,292,136,000
1,589,292,135,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/77", "html_url": "https://github.com/huggingface/datasets/pull/77", "diff_url": "https://github.com/huggingface/datasets/pull/77.diff", "patch_url": "https://github.com/huggingface/datasets/pull/77.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/77/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/77/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/76
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/76/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/76/comments
https://api.github.com/repos/huggingface/datasets/issues/76/events
https://github.com/huggingface/datasets/pull/76
616,579,228
MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2
76
pin flake 8
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,282,729,000
1,589,282,855,000
1,589,282,854,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/76", "html_url": "https://github.com/huggingface/datasets/pull/76", "diff_url": "https://github.com/huggingface/datasets/pull/76.diff", "patch_url": "https://github.com/huggingface/datasets/pull/76.patch" }
flake8's new version does not like our format. Pinning the version for now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/76/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/76/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/75
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/75/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/75/comments
https://api.github.com/repos/huggingface/datasets/issues/75/events
https://github.com/huggingface/datasets/pull/75
616,520,163
MDExOlB1bGxSZXF1ZXN0NDE2NjE0MzU1
75
WIP adding metrics
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It's all about my metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt" ]
1,589,277,120,000
1,589,355,852,000
1,589,355,850,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/75", "html_url": "https://github.com/huggingface/datasets/pull/75", "diff_url": "https://github.com/huggingface/datasets/pull/75.diff", "patch_url": "https://github.com/huggingface/datasets/pull/75.patch" }
Adding the following metrics as identified by @mariamabarham:

1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual)
2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu
3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (PyPI package), https://github.com/mjpost/sacrebleu (GitHub implementation)
4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual)
5. Seqeval: https://github.com/chakki-works/seqeval (GitHub implementation), https://pypi.org/project/seqeval/0.0.12/ (PyPI package)
6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets: https://github.com/ns-moosavi/coval
7. SQuAD v1 evaluation script
8. SQuAD v2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/
9. GLUE
10. XNLI

Not now:

1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py
2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py
3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py
4. Pearson_correlation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py
5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py
6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py
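A sketch of how such metrics were meant to be consumed, assuming a `nlp.load_metric` entry point analogous to `nlp.load_dataset`; the metric name is illustrative and the exact `compute` signature is an assumption, not taken from this PR:

```python
import nlp

# Load one of the metrics listed above by name ("bleu" is illustrative).
metric = nlp.load_metric("bleu")

# The exact signature is an assumption; the idea is to pass tokenized
# predictions and references and get back a dict of scores.
score = metric.compute(
    predictions=[["hello", "world"]],
    references=[[["hello", "world"]]],
)
print(score)
```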
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/75/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/75/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/74
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/74/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/74/comments
https://api.github.com/repos/huggingface/datasets/issues/74/events
https://github.com/huggingface/datasets/pull/74
616,511,101
MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy
74
fix overflow check
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,276,281,000
1,589,277,879,000
1,589,277,878,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/74", "html_url": "https://github.com/huggingface/datasets/pull/74", "diff_url": "https://github.com/huggingface/datasets/pull/74.diff", "patch_url": "https://github.com/huggingface/datasets/pull/74.patch" }
I did some tests and unfortunately the test

```
pa_array.nbytes > MAX_BATCH_BYTES
```

doesn't work. Indeed, for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...). I don't think we can do a proper overflow test for the 2GB limit... For now I replaced it with a sanity check on the first element.
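For illustration, a small pyarrow snippet showing the kind of `nbytes` check being discussed; the threshold constant and array contents are illustrative, and this does not reproduce the overflow behavior itself:

```python
import pyarrow as pa

MAX_BATCH_BYTES = 2 << 30  # ~2GB, illustrative threshold

arr = pa.array([{"a": 1, "b": "x"}] * 1000)  # a small StructArray
# The naive check: unreliable on overflowed StructArrays, since
# `nbytes` can report a value below the limit even after wraparound.
print(arr.nbytes > MAX_BATCH_BYTES)
```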
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/74/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/74/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/73
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/73/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/73/comments
https://api.github.com/repos/huggingface/datasets/issues/73/events
https://github.com/huggingface/datasets/pull/73
616,417,845
MDExOlB1bGxSZXF1ZXN0NDE2NTMyMTg1
73
JSON script
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The tests for the Wikipedia dataset do not pass anymore with the error:\r\n```\r\nTo be able to use this dataset, you need to install the following dependencies ['mwparserfromhell'] using 'pip install mwparserfromhell' for instance'\r\n```", "This was an issue on master. You can just rebase from master.", "Perfect! Indeed, it worked^^ Thanks @lhoestq ", "Currently the dummy_data tests are always green because in a PR the dataset is not yet synchronized with aws. This PR fixes this: https://github.com/huggingface/nlp/pull/140 . \r\n\r\nCould you test `json` locally or wait until the PR: https://github.com/huggingface/nlp/pull/140 is merged ? :-) ", "Ok, I will wait #140 to be merged and then rebase :) " ]
1,589,267,482,000
1,589,784,637,000
1,589,784,636,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/73", "html_url": "https://github.com/huggingface/datasets/pull/73", "diff_url": "https://github.com/huggingface/datasets/pull/73.diff", "patch_url": "https://github.com/huggingface/datasets/pull/73.patch" }
Add a JSON script to read JSON datasets from files.
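A sketch of how such a generic JSON script would typically be consumed, assuming the `nlp.load_dataset` entry point with `data_files`; the file name is a placeholder:

```python
import nlp

# Load a local JSON file through the generic "json" script.
# "my_data.json" is an illustrative placeholder.
dataset = nlp.load_dataset("json", data_files="my_data.json")
print(dataset)
```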
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/73/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/73/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/72
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/72/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/72/comments
https://api.github.com/repos/huggingface/datasets/issues/72/events
https://github.com/huggingface/datasets/pull/72
616,225,010
MDExOlB1bGxSZXF1ZXN0NDE2Mzc4Mjg4
72
[README dummy data tests] README to better understand how the dummy data structure works
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,235,543,000
1,589,235,963,000
1,589,235,961,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/72", "html_url": "https://github.com/huggingface/datasets/pull/72", "diff_url": "https://github.com/huggingface/datasets/pull/72.diff", "patch_url": "https://github.com/huggingface/datasets/pull/72.patch" }
In this PR a README.md is added to the tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to check out the dummy data structure of the datasets I mention in the README.md, since those are the "edge cases". @mariamabarham @thomwolf @lhoestq @jplu - it would be great if you could check out the dummy data structure and give some feedback on possible improvements.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/72/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/72/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/71
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/71/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/71/comments
https://api.github.com/repos/huggingface/datasets/issues/71/events
https://github.com/huggingface/datasets/pull/71
615,942,180
MDExOlB1bGxSZXF1ZXN0NDE2MTUxODM4
71
Fix arrow writer for big datasets using writer_batch_size
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "After a quick chat with Yacine : the 2Go test may not be sufficient actually, as I'm looking at the size of the array and not the size of the current_rows. If the test doesn't do the job I think I'll remove it and lower the batch size a bit to be sure that it never exceeds 2Go. I'll do more tests later" ]
1,589,208,336,000
1,589,227,787,000
1,589,227,238,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/71", "html_url": "https://github.com/huggingface/datasets/pull/71", "diff_url": "https://github.com/huggingface/datasets/pull/71.diff", "patch_url": "https://github.com/huggingface/datasets/pull/71.patch" }
This PR fixes Yacine's bug. According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB. Therefore I set a default batch size of 100,000 examples per batch. In general it shouldn't exceed 2GB. If it does, I reduce the batch_size on the fly and notify the user with a warning.
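A minimal sketch of the writer-side batching idea described above: flush every `writer_batch_size` examples so no single pyarrow array grows past the limit. The class and method names are illustrative, not the PR's actual `ArrowWriter` API:

```python
import pyarrow as pa

class BatchedWriter:
    """Illustrative writer that flushes every `writer_batch_size` rows."""

    def __init__(self, writer_batch_size=100_000):
        self.writer_batch_size = writer_batch_size
        self.current_rows = []

    def write(self, example):
        self.current_rows.append(example)
        if len(self.current_rows) >= self.writer_batch_size:
            self.flush()

    def flush(self):
        if self.current_rows:
            batch = pa.Table.from_pylist(self.current_rows)
            # ... hand `batch` to the underlying Arrow stream writer ...
            self.current_rows = []
```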
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/71/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/70
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/70/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/70/comments
https://api.github.com/repos/huggingface/datasets/issues/70/events
https://github.com/huggingface/datasets/pull/70
615,679,102
MDExOlB1bGxSZXF1ZXN0NDE1OTM3NDgw
70
adding RACE, QASC, Super_glue and Tiny_shakespear datasets
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think rebasing to master will solve the quality test and the datasets that don't have a testing structure yet because of the manual download - maybe you can put them in `datasets under construction`? Then would also make it easier for me to see how to add tests for them :-) " ]
1,589,184,469,000
1,589,289,712,000
1,589,289,711,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/70", "html_url": "https://github.com/huggingface/datasets/pull/70", "diff_url": "https://github.com/huggingface/datasets/pull/70.diff", "patch_url": "https://github.com/huggingface/datasets/pull/70.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/70/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/69
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/69/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/69/comments
https://api.github.com/repos/huggingface/datasets/issues/69/events
https://github.com/huggingface/datasets/pull/69
615,450,534
MDExOlB1bGxSZXF1ZXN0NDE1NzYyNTQ4
69
fix cache dir in builder tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice, is that the reason one cannot rerun the tests without deleting the cache? \r\n", "Yes exactly. It was not using the temporary dir for tests." ]
1,589,135,961,000
1,589,181,570,000
1,589,181,568,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/69", "html_url": "https://github.com/huggingface/datasets/pull/69", "diff_url": "https://github.com/huggingface/datasets/pull/69.diff", "patch_url": "https://github.com/huggingface/datasets/pull/69.patch" }
minor fix
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/69/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/68
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/68/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/68/comments
https://api.github.com/repos/huggingface/datasets/issues/68/events
https://github.com/huggingface/datasets/pull/68
614,882,655
MDExOlB1bGxSZXF1ZXN0NDE1MzQ3NTgw
68
[CSV] re-add csv
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,959,509,000
1,588,959,648,000
1,588,959,646,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/68", "html_url": "https://github.com/huggingface/datasets/pull/68", "diff_url": "https://github.com/huggingface/datasets/pull/68.diff", "patch_url": "https://github.com/huggingface/datasets/pull/68.patch" }
Re-adding csv under the datasets under construction to keep Circle CI happy - will have to see how to include it in the tests. @lhoestq noticed that I accidentally deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/68/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/67
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/67/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/67/comments
https://api.github.com/repos/huggingface/datasets/issues/67/events
https://github.com/huggingface/datasets/pull/67
614,798,483
MDExOlB1bGxSZXF1ZXN0NDE1Mjc5NjI0
67
[Tests] Test files locally
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Super nice, good job @patrickvonplaten!" ]
1,588,950,163,000
1,588,967,447,000
1,588,951,020,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/67", "html_url": "https://github.com/huggingface/datasets/pull/67", "diff_url": "https://github.com/huggingface/datasets/pull/67.diff", "patch_url": "https://github.com/huggingface/datasets/pull/67.patch" }
This PR adds an `aws` and a `local` decorator to the tests so that tests now run on the local datasets. By default, `aws` is deactivated, `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on Circle CI.

**When local is activated, all folders in `./datasets` are tested.**

**Important** When adding a dataset, we should no longer upload it to AWS. The steps are:

1. Open a PR
2. Add a dataset as described in `datasets/README.md`
3. If all tests pass, push to master

Currently we have 49 functional datasets in our code base. We have 6 datasets "under construction" that don't pass the tests, so I put them in a folder "datasets_under_construction" - it would be nice to open a PR to fix them and put them in the `datasets` folder.

**Important** when running tests locally, the datasets are cached, so to rerun them delete your local cache via: `rm -r ~/.cache/huggingface/datasets/*`

@thomwolf @mariamabarham @lhoestq
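A minimal sketch of how `local`/`aws`/`slow` test decorators of this kind are commonly implemented with environment-variable gates; the variable names (`RUN_LOCAL`, `RUN_AWS`, `RUN_SLOW`) and their defaults are assumptions, not the PR's actual code:

```python
import os
import unittest

def _flag(name, default):
    # Treat "1"/"true"/"yes" (case-insensitive) as enabled.
    return os.environ.get(name, default).lower() in ("1", "true", "yes")

def local(test_case):
    """Skip unless local-dataset tests are enabled (on by default)."""
    return unittest.skipUnless(_flag("RUN_LOCAL", "1"), "test is local-only")(test_case)

def aws(test_case):
    """Skip unless AWS tests are explicitly enabled (off by default)."""
    return unittest.skipUnless(_flag("RUN_AWS", "0"), "test requires AWS")(test_case)

def slow(test_case):
    """Skip unless slow tests are explicitly enabled (off by default)."""
    return unittest.skipUnless(_flag("RUN_SLOW", "0"), "test is slow")(test_case)
```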
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/67/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/67/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/66
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/66/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/66/comments
https://api.github.com/repos/huggingface/datasets/issues/66/events
https://github.com/huggingface/datasets/pull/66
614,748,552
MDExOlB1bGxSZXF1ZXN0NDE1MjM5Njgy
66
[Datasets] ReadME
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,945,063,000
1,588,945,163,000
1,588,945,162,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/66", "html_url": "https://github.com/huggingface/datasets/pull/66", "diff_url": "https://github.com/huggingface/datasets/pull/66.diff", "patch_url": "https://github.com/huggingface/datasets/pull/66.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/66/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/66/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/65
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/65/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/65/comments
https://api.github.com/repos/huggingface/datasets/issues/65/events
https://github.com/huggingface/datasets/pull/65
614,746,516
MDExOlB1bGxSZXF1ZXN0NDE1MjM4MDEw
65
fix math dataset and xcopa
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,944,835,000
1,588,944,941,000
1,588,944,940,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/65", "html_url": "https://github.com/huggingface/datasets/pull/65", "diff_url": "https://github.com/huggingface/datasets/pull/65.diff", "patch_url": "https://github.com/huggingface/datasets/pull/65.patch" }
- fixes the math dataset and xcopa; uploaded both of them to S3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/65/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/64
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/64/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/64/comments
https://api.github.com/repos/huggingface/datasets/issues/64/events
https://github.com/huggingface/datasets/pull/64
614,737,057
MDExOlB1bGxSZXF1ZXN0NDE1MjMwMjYy
64
[Datasets] Make master ready for datasets adding
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,943,820,000
1,588,943,851,000
1,588,943,850,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/64", "html_url": "https://github.com/huggingface/datasets/pull/64", "diff_url": "https://github.com/huggingface/datasets/pull/64.diff", "patch_url": "https://github.com/huggingface/datasets/pull/64.patch" }
Add all relevant files so that datasets can now be added on master
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/64/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/64/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/63
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/63/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/63/comments
https://api.github.com/repos/huggingface/datasets/issues/63/events
https://github.com/huggingface/datasets/pull/63
614,666,365
MDExOlB1bGxSZXF1ZXN0NDE1MTczODU5
63
[Dataset scripts] add all datasets scripts
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,935,015,000
1,588,959,562,000
1,588,937,640,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/63", "html_url": "https://github.com/huggingface/datasets/pull/63", "diff_url": "https://github.com/huggingface/datasets/pull/63.diff", "patch_url": "https://github.com/huggingface/datasets/pull/63.patch" }
As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets. @mariamabarham @lhoestq @thomwolf - what do you think? If this is ok for you, I can sync up the master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/63/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/63/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/62
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/62/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/62/comments
https://api.github.com/repos/huggingface/datasets/issues/62/events
https://github.com/huggingface/datasets/pull/62
614,630,830
MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx
62
[Cached Path] Better error message
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,930,787,000
1,588,931,147,000
1,588,931,147,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/62", "html_url": "https://github.com/huggingface/datasets/pull/62", "diff_url": "https://github.com/huggingface/datasets/pull/62.diff", "patch_url": "https://github.com/huggingface/datasets/pull/62.patch" }
IMO returning `None` in this function only leads to confusion and is never helpful.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/62/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/61
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/61/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/61/comments
https://api.github.com/repos/huggingface/datasets/issues/61/events
https://github.com/huggingface/datasets/pull/61
614,607,474
MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4
61
[Load] rename setup_module to prepare_module
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,928,062,000
1,588,928,192,000
1,588,928,176,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/61", "html_url": "https://github.com/huggingface/datasets/pull/61", "diff_url": "https://github.com/huggingface/datasets/pull/61.diff", "patch_url": "https://github.com/huggingface/datasets/pull/61.patch" }
Rename `setup_module` to `prepare_module` due to issues with pytest's `setup_module` function. See: PR #59.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/61/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/61/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/60
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/60/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/60/comments
https://api.github.com/repos/huggingface/datasets/issues/60/events
https://github.com/huggingface/datasets/pull/60
614,372,553
MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy
60
Update to simplify some datasets conversion
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome! ", "Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)", "> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)\r\n\r\nWe should probably open a new PR about this", "I think it might be a good idea to both change the supervised keys to a named tuple and also handle the translation features specifically.", "Just noticed that `pyarrow` apparently does not have a `is_boolean` function. Or do I have the wrong `pyarrow` version? ", "Ah, it was a typo `pa.types.is_boolean` is the correct name. Will fix in: https://github.com/huggingface/nlp/pull/59" ]
1,588,888,944,000
1,588,934,312,000
1,588,933,104,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/60", "html_url": "https://github.com/huggingface/datasets/pull/60", "diff_url": "https://github.com/huggingface/datasets/pull/60.diff", "patch_url": "https://github.com/huggingface/datasets/pull/60.patch" }
This PR updates the encoding of `Values` like `integers`, `booleans` and `floats` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626 We could also change (not included in this PR yet): - `supervised_keys` to make them a NamedTuple instead of a dataclass, and - handle the `Translation` features specifically, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236 @patrickvonplaten @mariamabarham tell me if you want these two last changes as well.
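A minimal sketch of the python-casting idea described in the PR body above; `cast_to_python_type` is a hypothetical helper written for illustration, not the library's actual API:

```python
# Sketch (hypothetical helper): cast python values to match a declared
# feature dtype before writing to Arrow, so dataset scripts don't have
# to cast values themselves.
def cast_to_python_type(value, dtype: str):
    """Cast `value` with plain python casting based on a dtype string."""
    if dtype == "bool":
        return bool(value)
    if dtype.startswith(("int", "uint")):
        return int(value)
    if dtype.startswith("float"):
        return float(value)
    return value  # strings and other types pass through unchanged

assert cast_to_python_type("3", "int32") == 3
assert cast_to_python_type(0, "bool") is False
```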
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/60/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/60/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/59
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/59/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/59/comments
https://api.github.com/repos/huggingface/datasets/issues/59/events
https://github.com/huggingface/datasets/pull/59
614,366,045
MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx
59
Fix tests
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I can fix the tests tomorrow :-) ", "Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. I think this is the relevant code in pytest: https://github.com/pytest-dev/pytest/blob/9d2eabb397b059b75b746259daeb20ee5588f559/src/_pytest/python.py#L460.", "Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n\r\nI think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham \r\n\r\n", "> Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n> \r\n> I think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham\r\n\r\nI think if it only needs a re-uploading, we can rename it, `DatasetBuilder.config` is easier and sounds better", "Ok seems to be fine. Most tests work - merging." ]
1,588,888,089,000
1,588,935,477,000
1,588,934,811,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/59", "html_url": "https://github.com/huggingface/datasets/pull/59", "diff_url": "https://github.com/huggingface/datasets/pull/59.diff", "patch_url": "https://github.com/huggingface/datasets/pull/59.patch" }
@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look? ```bash (datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python cachedir: .pytest_cache rootdir: /Users/thomwolf/Documents/GitHub/datasets plugins: xdist-1.31.0, forked-1.1.3 collected 1 item tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR =================================================================================== ERRORS ==================================================================================== ____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________ file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'> download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True) download_kwargs = {} def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder: r""" Download/extract/cache a dataset to add to the lib from a path or url which can be: - a path to a local directory containing the dataset processing python script - an url to a S3 directory with a dataset processing python script Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks) and using cloudpickle (among other things). Return: tuple of the unique id associated to the dataset the local path to the dataset """ if download_config is None: download_config = DownloadConfig(**download_kwargs) download_config.extract_compressed_file = True download_config.force_extract = True > name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py" E AttributeError: module 'tests.test_dataset_common' has no attribute 'split' src/nlp/load.py:169: AttributeError ============================================================================== warnings summary =============================================================================== /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html =========================================================================== short test summary info =========================================================================== ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split' ========================================================================= 1 warning, 1 error in 3.63s ========================================================================= ```
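The failure above can be reproduced outside of pytest; a minimal sketch of why a module-level function named `setup_module` clashes with pytest's module-setup hook, which calls it with the test module object instead of a path string:

```python
# pytest treats any module-level `setup_module(module)` as a setup hook
# and invokes it with the test *module* object, so a helper imported
# under that name receives a module instead of a str path and crashes
# on `file_path.split("/")` -- matching the traceback above.
import types

def setup_module(file_path):
    # stands in for the imported nlp helper, which expects a str path
    return file_path.split("/")[-1]

fake_test_module = types.ModuleType("tests.test_dataset_common")
try:
    setup_module(fake_test_module)  # what pytest effectively does
except AttributeError as err:
    print(err)  # module 'tests.test_dataset_common' has no attribute 'split'
```

Renaming the helper to `prepare_module` (see PR #61) avoids the collision.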
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/59/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/59/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/58
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/58/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/58/comments
https://api.github.com/repos/huggingface/datasets/issues/58/events
https://github.com/huggingface/datasets/pull/58
614,362,308
MDExOlB1bGxSZXF1ZXN0NDE0OTM0NTY4
58
Aborted PR - Fix tests
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Wait I messed up my branch, let me clean this." ]
1,588,887,619,000
1,588,888,081,000
1,588,887,687,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/58", "html_url": "https://github.com/huggingface/datasets/pull/58", "diff_url": "https://github.com/huggingface/datasets/pull/58.diff", "patch_url": "https://github.com/huggingface/datasets/pull/58.patch" }
@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look? ```bash (datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python cachedir: .pytest_cache rootdir: /Users/thomwolf/Documents/GitHub/datasets plugins: xdist-1.31.0, forked-1.1.3 collected 1 item tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR =================================================================================== ERRORS ==================================================================================== ____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________ file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'> download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True) download_kwargs = {} def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder: r""" Download/extract/cache a dataset to add to the lib from a path or url which can be: - a path to a local directory containing the dataset processing python script - an url to a S3 directory with a dataset processing python script Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks) and using cloudpickle (among other things). Return: tuple of the unique id associated to the dataset the local path to the dataset """ if download_config is None: download_config = DownloadConfig(**download_kwargs) download_config.extract_compressed_file = True download_config.force_extract = True > name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py" E AttributeError: module 'tests.test_dataset_common' has no attribute 'split' src/nlp/load.py:169: AttributeError ============================================================================== warnings summary =============================================================================== /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html =========================================================================== short test summary info =========================================================================== ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split' ========================================================================= 1 warning, 1 error in 3.63s ========================================================================= ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/58/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/58/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/57
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/57/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/57/comments
https://api.github.com/repos/huggingface/datasets/issues/57/events
https://github.com/huggingface/datasets/pull/57
614,261,638
MDExOlB1bGxSZXF1ZXN0NDE0ODUzMDM5
57
Better cached path
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I should have read this PR before doing my own: https://github.com/huggingface/nlp/pull/62 :D \r\nwill close mine. Looks great :-) ", "> Awesome, this is really nice!\r\n> \r\n> By the way, we should improve the `cached_path` method of the `transformers` repo similarly, don't you think (@patrickvonplaten in particular).\r\n\r\nYeah, we should do the same in `transformers` I think - will note it down." ]
1,588,876,560,000
1,588,944,030,000
1,588,944,028,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/57", "html_url": "https://github.com/huggingface/datasets/pull/57", "diff_url": "https://github.com/huggingface/datasets/pull/57.diff", "patch_url": "https://github.com/huggingface/datasets/pull/57.patch" }
### Changes: - The `cached_path` no longer returns None if the file is missing/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error) - Fix requests to the firebase API, which doesn't handle HEAD requests... - Allow custom downloads in dataset scripts: this allows using `tf.io.gfile.copy`, for example, to download from google storage. I added an example: the `boolq` script
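A minimal sketch of the error behaviour this PR describes; `cached_path_sketch` and its `reachable` flag are illustrative, not the library's actual signature:

```python
import os

def cached_path_sketch(url_or_filename: str, reachable: bool = True) -> str:
    """Sketch: raise instead of returning None when resolution fails."""
    if url_or_filename.startswith(("http://", "https://")):
        if not reachable:
            # no cached copy and the url is unreachable
            raise ConnectionError(f"Couldn't reach {url_or_filename}")
        return "/path/to/cache/deadbeef"  # pretend we downloaded and cached it
    if os.path.exists(url_or_filename):
        return url_or_filename
    # local file that does not exist; a ValueError would similarly be
    # raised for inputs that parse as neither a url nor a local path
    raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
```

Callers then get an explicit exception to handle instead of silently receiving `None`.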
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/57/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/57/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/56
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/56/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/56/comments
https://api.github.com/repos/huggingface/datasets/issues/56/events
https://github.com/huggingface/datasets/pull/56
614,236,869
MDExOlB1bGxSZXF1ZXN0NDE0ODMyODY4
56
[Dataset] Tester add mock function
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,873,897,000
1,588,873,971,000
1,588,873,970,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/56", "html_url": "https://github.com/huggingface/datasets/pull/56", "diff_url": "https://github.com/huggingface/datasets/pull/56.diff", "patch_url": "https://github.com/huggingface/datasets/pull/56.patch" }
Need to add an empty `extract()` function to make the `hansard` dataset test work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/56/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/56/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/55
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/55/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/55/comments
https://api.github.com/repos/huggingface/datasets/issues/55/events
https://github.com/huggingface/datasets/pull/55
613,968,072
MDExOlB1bGxSZXF1ZXN0NDE0NjE0MjE1
55
Beam datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Right now the changes are a bit hard to read as the one from #25 are also included. You can wait until #25 is merged before looking at the implementation details", "Nice!! I tested it a bit and works quite well. I will do a my review once the #25 will be merged because there are several overlaps.\r\n\r\nAt least I can share my thoughts on your **Next** section:\r\n1) I don't think it is a good thing to rely on tfds preprocessed datasets uploaded in their online storage, because they might be updated or deleted at any moment by Google and then possibly break our own processing.\r\n2) Improves the pipeline is always a good direction, but in the meantime we might also share the preprocessed dataset in S3 storage. Which might be another way to see 1), instead of downloading Google preprocessed datasets, using our own ones.\r\n3) Apache Beam can be easily integrated in Spark, so I don't see the need to replace Beam by Spark.", "Ok I've merged #25 so you can rebase or merge if you want.\r\n\r\nI fully agree with @jplu notes for the \"next section\".\r\n\r\nDon't hesitate to use some credit on Google Dataflow if you think it would be useful to give it a try.", "Pr is ready for review !\r\n\r\nNew minor changes:\r\n- re-added the csv dataset builder (it was on my branch from #25 but disappeared from master)\r\n- move the csv script and the wikipedia script to \"under construction\" for now\r\n- some renaming in the `nlp-cli test` command" ]
1,588,849,472,000
1,589,181,602,000
1,589,181,600,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/55", "html_url": "https://github.com/huggingface/datasets/pull/55", "diff_url": "https://github.com/huggingface/datasets/pull/55.diff", "patch_url": "https://github.com/huggingface/datasets/pull/55.patch" }
# Beam datasets ## Intro Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections). The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are: - the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the french or english wikipedia). Small datasets work fine. - Google Dataflow. I didn't play with it. - Spark or Flink, two well known data processing frameworks. I tried to use the Spark/Flink local runners provided by apache beam for python and wasn't able to make them work properly though... ## From tfds beam datasets to our own beam datasets Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files. To allow users to download beam datasets and not have to preprocess them, they also allow downloading the already preprocessed datasets from their google storage (the beam pipeline doesn't run in that case). On our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a builtin apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is a mmap option !) Moreover we don't shard datasets in many many files like tfds (they were probably doing that mainly because of the limit of 2Gb per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their google storage (for now maybe ? we'll have to discuss it). ## Main changes - Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos - Created a ParquetReader and refactored the arrow_reader.py a bit \> **With this, we can now try to add beam datasets from tfds** I already added the wikipedia one, and I will also try to add the Wiki40b dataset ## Test the wikipedia script You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this: ``` >>> import nlp >>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr") ``` This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol) ## Next Should we allow downloading preprocessed datasets from the tfds google storage ? Should we try to optimize the beam pipelines to run locally without memory issues ? Should we try other data processing frameworks for big datasets, like spark ? ## About this PR It should be merged after #25 ----------------- I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :)
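The mmap option mentioned above can be exercised directly with pyarrow; a minimal sketch of reading a split saved as a single parquet file, assuming a hypothetical file name:

```python
import pyarrow.parquet as pq

# the file name below is illustrative; one parquet file per split
table = pq.read_table("wikipedia-20200501.frr-train.parquet", memory_map=True)
print(table.num_rows, table.schema)
```

Memory-mapping lets the OS page the file in lazily, which keeps RAM usage low even for large splits.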
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/55/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/55/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/54
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/54/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/54/comments
https://api.github.com/repos/huggingface/datasets/issues/54/events
https://github.com/huggingface/datasets/pull/54
613,513,348
MDExOlB1bGxSZXF1ZXN0NDE0MjUyODkw
54
[Tests] Improved Error message for dummy folder structure
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,788,708,000
1,588,788,780,000
1,588,788,779,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/54", "html_url": "https://github.com/huggingface/datasets/pull/54", "diff_url": "https://github.com/huggingface/datasets/pull/54.diff", "patch_url": "https://github.com/huggingface/datasets/pull/54.patch" }
Improved Error message
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/54/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/54/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/53
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/53/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/53/comments
https://api.github.com/repos/huggingface/datasets/issues/53/events
https://github.com/huggingface/datasets/pull/53
613,436,158
MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz
53
[Features] Typo in generate_from_dict
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,781,123,000
1,588,865,326,000
1,588,865,325,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/53", "html_url": "https://github.com/huggingface/datasets/pull/53", "diff_url": "https://github.com/huggingface/datasets/pull/53.diff", "patch_url": "https://github.com/huggingface/datasets/pull/53.patch" }
Change `isinstance` test in features when generating features from dict.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/53/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/52
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/52/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/52/comments
https://api.github.com/repos/huggingface/datasets/issues/52/events
https://github.com/huggingface/datasets/pull/52
613,339,071
MDExOlB1bGxSZXF1ZXN0NDE0MTEyMDAy
52
allow dummy folder structure to handle dict of lists
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,773,275,000
1,588,773,319,000
1,588,773,318,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/52", "html_url": "https://github.com/huggingface/datasets/pull/52", "diff_url": "https://github.com/huggingface/datasets/pull/52.diff", "patch_url": "https://github.com/huggingface/datasets/pull/52.patch" }
`esnli.py` needs that extension of the dummy data testing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/52/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/52/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/51
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/51/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/51/comments
https://api.github.com/repos/huggingface/datasets/issues/51/events
https://github.com/huggingface/datasets/pull/51
613,266,668
MDExOlB1bGxSZXF1ZXN0NDE0MDUyOTYw
51
[Testing] Improved testing structure
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome!\r\nLet's have this in the doc at the end :-)" ]
1,588,766,587,000
1,588,889,239,000
1,588,771,218,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/51", "html_url": "https://github.com/huggingface/datasets/pull/51", "diff_url": "https://github.com/huggingface/datasets/pull/51.diff", "patch_url": "https://github.com/huggingface/datasets/pull/51.patch" }
This PR refactors the test design a bit and puts the mock download manager in the `utils` file as it is just a test helper class. As @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp. This PR tries to change that to some extent. It follows the following logic for the `dummy` folder structure now: 1) The data builder has no config -> the `dummy` folder structure is: `dummy/<version>/dummy_data.zip` 2) The data builder has >= 1 configs -> the `dummy` folder structure is: `dummy/<config_name_1>/<version>/dummy_data.zip` `dummy/<config_name_2>/<version>/dummy_data.zip` Now, the difficult part is how to create the `dummy_data.zip` file. There are two cases: A) The `data_urls` parameter inserted into the `download_and_extract` fn is a **string**: -> the `dummy_data.zip` file zips the folder: `dummy_data/<relative_path_of_folder_structure_of_url>` B) The `data_urls` parameter inserted into the `download_and_extract` fn is a **dict**: -> the `dummy_data.zip` file zips the folder: `dummy_data/<relative_path_of_folder_structure_of_url_behind _key_1>` `dummy_data/<relative_path_of_folder_structure_of_url_behind _key_2>` By relative folder structure I mean `url_path.split('/')[-1]`. As an example the dataset **xquad** by deepmind has the following url path behind the key `de`: `https://github.com/deepmind/xquad/blob/master/xquad.de.json` -> This means that the relative url path should be `xquad.de.json`. @mariamabarham B) is a change from how it was before and I think it makes more sense. While before the `dummy_data.zip` file for xquad with config `de` looked like: `dummy_data/de` it would now look like `dummy_data/xquad.de.json`. I think this is better and easier to understand. Therefore there are currently 6 tests that would have to change their dummy folder structure, but which can easily be done (30min). I also added a function: `print_dummy_data_folder_structure` that prints out the expected structures when testing, which should be quite helpful.
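A small sketch of the relative-path logic described above; `expected_dummy_data_path` is a hypothetical helper (the real layout zips the `dummy_data/` folder into `dummy_data.zip`):

```python
import os

def expected_dummy_data_path(config_name: str, version: str, url: str) -> str:
    relative = url.split("/")[-1]  # keep only the last url component
    return os.path.join("dummy", config_name, version, "dummy_data", relative)

print(expected_dummy_data_path(
    "de", "1.0.0",
    "https://github.com/deepmind/xquad/blob/master/xquad.de.json",
))
# dummy/de/1.0.0/dummy_data/xquad.de.json
```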
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/51/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/51/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/50
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/50/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/50/comments
https://api.github.com/repos/huggingface/datasets/issues/50/events
https://github.com/huggingface/datasets/pull/50
612,583,126
MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0
50
[Tests] test only for fast test as a default
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Test failure is not related to change in test file.\r\n" ]
1,588,683,562,000
1,588,683,738,000
1,588,683,736,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/50", "html_url": "https://github.com/huggingface/datasets/pull/50", "diff_url": "https://github.com/huggingface/datasets/pull/50.diff", "patch_url": "https://github.com/huggingface/datasets/pull/50.patch" }
Test only one config on CircleCI to speed up testing. Add the all-configs test as a slow test. @mariamabarham @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/50/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/50/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/49
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/49/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/49/comments
https://api.github.com/repos/huggingface/datasets/issues/49/events
https://github.com/huggingface/datasets/pull/49
612,545,483
MDExOlB1bGxSZXF1ZXN0NDEzNDY5ODg0
49
fix flatten nested
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,679,713,000
1,588,687,166,000
1,588,687,165,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/49", "html_url": "https://github.com/huggingface/datasets/pull/49", "diff_url": "https://github.com/huggingface/datasets/pull/49.diff", "patch_url": "https://github.com/huggingface/datasets/pull/49.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/49/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/49/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/48
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/48/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/48/comments
https://api.github.com/repos/huggingface/datasets/issues/48/events
https://github.com/huggingface/datasets/pull/48
612,504,687
MDExOlB1bGxSZXF1ZXN0NDEzNDM2MTgz
48
[Command Convert] remove tensorflow import
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,675,260,000
1,588,677,238,000
1,588,677,236,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/48", "html_url": "https://github.com/huggingface/datasets/pull/48", "diff_url": "https://github.com/huggingface/datasets/pull/48.diff", "patch_url": "https://github.com/huggingface/datasets/pull/48.patch" }
Remove all tensorflow import statements.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/48/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/48/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/47
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/47/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/47/comments
https://api.github.com/repos/huggingface/datasets/issues/47/events
https://github.com/huggingface/datasets/pull/47
612,446,493
MDExOlB1bGxSZXF1ZXN0NDEzMzg5MDc1
47
[PyArrow Feature] fix py arrow bool
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,668,988,000
1,588,675,228,000
1,588,675,227,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/47", "html_url": "https://github.com/huggingface/datasets/pull/47", "diff_url": "https://github.com/huggingface/datasets/pull/47.diff", "patch_url": "https://github.com/huggingface/datasets/pull/47.patch" }
To me it seems that `bool` can only be accessed with `bool_` when looking at the pyarrow types: https://arrow.apache.org/docs/python/api/datatypes.html.
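A quick check of that naming, runnable with any recent pyarrow: the boolean type is exposed as `bool_` (trailing underscore, since `bool` is a python builtin), and `pa.types.is_boolean` recognizes it.

```python
import pyarrow as pa

print(pa.bool_())                        # bool
print(pa.types.is_boolean(pa.bool_()))   # True
```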
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/47/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/47/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/46
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/46/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/46/comments
https://api.github.com/repos/huggingface/datasets/issues/46/events
https://github.com/huggingface/datasets/pull/46
612,398,190
MDExOlB1bGxSZXF1ZXN0NDEzMzUxNTY0
46
[Features] Strip str key before dict look-up
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,663,905,000
1,588,667,865,000
1,588,667,864,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/46", "html_url": "https://github.com/huggingface/datasets/pull/46", "diff_url": "https://github.com/huggingface/datasets/pull/46.diff", "patch_url": "https://github.com/huggingface/datasets/pull/46.patch" }
The dataset `anli.py` currently fails because it tries to look up the key `1\n` in a dict that only has the key `1`. Added an if statement that strips the key if it cannot be found in the dict, as sketched below.
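A small illustration of that fallback (the helper name and label value are hypothetical, not the PR's actual code):

```python
def lookup_label(label_map, key):
    # Raw files can carry a trailing newline in the key (e.g. "1\n"),
    # so fall back to the stripped key when the raw one is missing.
    if key not in label_map:
        key = key.strip()
    return label_map[key]

print(lookup_label({"1": "some_label"}, "1\n"))  # "some_label"
```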
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/46/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/46/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/45
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/45/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/45/comments
https://api.github.com/repos/huggingface/datasets/issues/45/events
https://github.com/huggingface/datasets/pull/45
612,386,583
MDExOlB1bGxSZXF1ZXN0NDEzMzQzMjAy
45
[Load] Separate Module kwargs and builder kwargs.
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,662,594,000
1,588,931,482,000
1,588,931,482,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/45", "html_url": "https://github.com/huggingface/datasets/pull/45", "diff_url": "https://github.com/huggingface/datasets/pull/45.diff", "patch_url": "https://github.com/huggingface/datasets/pull/45.patch" }
Kwargs for the `load_module` fn should be passed as `module_xxxx` within the `builder_kwargs` of the `load` fn; see the sketch below. This is a follow-up PR of: https://github.com/huggingface/nlp/pull/41
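Under this convention, a call could look as follows (a sketch only; the exact kwarg plumbing is assumed from the PR description, and `module_force_reload` is a hypothetical example built on #41's `force_reload`):

```python
import nlp

# Kwargs prefixed with `module_` inside the builder kwargs are routed to
# `load_module`; here that would force a re-download of the dataset script.
dataset = nlp.load("squad", module_force_reload=True)
```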
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/45/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/45/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/44
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/44/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/44/comments
https://api.github.com/repos/huggingface/datasets/issues/44/events
https://github.com/huggingface/datasets/pull/44
611,873,486
MDExOlB1bGxSZXF1ZXN0NDEyOTUwMzU1
44
[Tests] Fix tests for datasets with no config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,598,738,000
1,588,598,884,000
1,588,598,883,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/44", "html_url": "https://github.com/huggingface/datasets/pull/44", "diff_url": "https://github.com/huggingface/datasets/pull/44.diff", "patch_url": "https://github.com/huggingface/datasets/pull/44.patch" }
Forgot to fix the `None` problem for datasets that have no config in this PR: https://github.com/huggingface/nlp/pull/42
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/44/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/44/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/43
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/43/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/43/comments
https://api.github.com/repos/huggingface/datasets/issues/43/events
https://github.com/huggingface/datasets/pull/43
611,773,279
MDExOlB1bGxSZXF1ZXN0NDEyODcxNTE5
43
[Checksums] If no configs exist prevent to run over empty list
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Whoops I fixed it directly on master before checking that you have done it in this PR. We may close it", "Yeah, I saw :-) But I think we should add this as well since some datasets have an empty list of configs and then as the code is now it would fail. \r\n\r\nIn this PR, I just make sure that the code jumps in the correct else if \"there are no configs\" as is the case for some datasets @mariamabarham ", "Sorry, I thought you meant a different commit . Just saw this one: https://github.com/huggingface/nlp/commit/7c644f284e2303b57612a6e7c904fe13906d893f\r\n.\r\n\r\nAll good then :-) " ]
1,588,588,782,000
1,588,598,283,000
1,588,598,283,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/43", "html_url": "https://github.com/huggingface/datasets/pull/43", "diff_url": "https://github.com/huggingface/datasets/pull/43.diff", "patch_url": "https://github.com/huggingface/datasets/pull/43.patch" }
`movie_rationales` e.g. has no configs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/43/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/42
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/42/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/42/comments
https://api.github.com/repos/huggingface/datasets/issues/42/events
https://github.com/huggingface/datasets/pull/42
611,754,343
MDExOlB1bGxSZXF1ZXN0NDEyODU1OTE2
42
[Tests] allow tests for builders without config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,586,782,000
1,588,597,850,000
1,588,597,848,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/42", "html_url": "https://github.com/huggingface/datasets/pull/42", "diff_url": "https://github.com/huggingface/datasets/pull/42.diff", "patch_url": "https://github.com/huggingface/datasets/pull/42.patch" }
Some dataset scripts have no configs - the tests have to be adapted for this case. In this case the dummy data will be saved as `natural_questions/dummy/1.0.0/dummy_data.zip` (i.e. dataset name, then `dummy`, then the version number as the folder name, then the zip file).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/42/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/42/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/41
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/41/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/41/comments
https://api.github.com/repos/huggingface/datasets/issues/41/events
https://github.com/huggingface/datasets/pull/41
611,739,219
MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy
41
[Load module] allow kwargs into load module
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,585,331,000
1,588,621,147,000
1,588,621,146,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/41", "html_url": "https://github.com/huggingface/datasets/pull/41", "diff_url": "https://github.com/huggingface/datasets/pull/41.diff", "patch_url": "https://github.com/huggingface/datasets/pull/41.patch" }
Currently it is not possible to force a re-download of the dataset script. This simple change makes it possible to pass ``force_reload=True`` in the ``builder_kwargs`` of the ``load`` function in ``load.py``.
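Assumed usage based on the description above (the exact signature may differ; see also #45, which later prefixes such kwargs with `module_`):

```python
import nlp

# Force a fresh download of the dataset processing script instead of
# reusing the locally cached copy.
dataset = nlp.load("crime_and_punish", force_reload=True)
```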
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/41/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/41/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/40
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/40/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/40/comments
https://api.github.com/repos/huggingface/datasets/issues/40/events
https://github.com/huggingface/datasets/pull/40
611,721,308
MDExOlB1bGxSZXF1ZXN0NDEyODI4NzU2
40
Update remote checksums instead of overwrite
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,583,594,000
1,588,593,111,000
1,588,593,109,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/40", "html_url": "https://github.com/huggingface/datasets/pull/40", "diff_url": "https://github.com/huggingface/datasets/pull/40.diff", "patch_url": "https://github.com/huggingface/datasets/pull/40.patch" }
When the user uploads a dataset to S3, checksums are also uploaded with the `--upload_checksums` parameter. Previously, if the user uploaded the dataset in several steps, the remote checksums file was overwritten at each step. Now it is updated with the new checksums instead.
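The merge behaviour boils down to something like the following (an illustrative sketch; the function name and dict shapes are assumptions, not the library's API):

```python
def merge_checksums(remote: dict, new: dict) -> dict:
    # Keep entries recorded by earlier uploads and layer the new ones on
    # top instead of replacing the whole file.
    merged = dict(remote)
    merged.update(new)
    return merged

print(merge_checksums({"url_a": "sha_a"}, {"url_b": "sha_b"}))
# {'url_a': 'sha_a', 'url_b': 'sha_b'}
```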
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/40/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/40/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/39
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/39/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/39/comments
https://api.github.com/repos/huggingface/datasets/issues/39/events
https://github.com/huggingface/datasets/pull/39
611,712,135
MDExOlB1bGxSZXF1ZXN0NDEyODIxNTA4
39
[Test] improve slow testing
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,582,713,000
1,588,582,790,000
1,588,582,789,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/39", "html_url": "https://github.com/huggingface/datasets/pull/39", "diff_url": "https://github.com/huggingface/datasets/pull/39.diff", "patch_url": "https://github.com/huggingface/datasets/pull/39.patch" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/39/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/38
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/38/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/38/comments
https://api.github.com/repos/huggingface/datasets/issues/38/events
https://github.com/huggingface/datasets/issues/38
611,677,656
MDU6SXNzdWU2MTE2Nzc2NTY=
38
[Checksums] Error for some datasets
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "@lhoestq - could you take a look? It's not very urgent though!", "Fixed with 06882b4\r\n\r\nNow your command works :)\r\nNote that you can also do\r\n```\r\nnlp-cli test datasets/nlp/xnli --save_checksums\r\n```\r\nSo that it will save the checksums directly in the right directory.", "Awesome!" ]
1,588,579,216,000
1,588,585,700,000
1,588,585,700,000
MEMBER
null
null
The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`, the same bug happens: When running: ``` python nlp-cli test xnli --save_checksums ``` leads to: ``` File "nlp-cli", line 33, in <module> service.run() File "/home/patrick/python_bin/nlp/commands/test.py", line 61, in run ignore_checksums=self._ignore_checksums, File "/home/patrick/python_bin/nlp/builder.py", line 383, in download_and_prepare self._download_and_prepare(dl_manager=dl_manager, download_config=download_config) File "/home/patrick/python_bin/nlp/builder.py", line 627, in _download_and_prepare dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split, File "/home/patrick/python_bin/nlp/builder.py", line 431, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/patrick/python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py", line 95, in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 246, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 186, in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 166, in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum(path) File "/home/patrick/python_bin/nlp/utils/checksums_utils.py", line 81, in get_size_checksum with open(path, "rb") as f: TypeError: expected str, bytes or os.PathLike object, not tuple ```
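The traceback shows `get_size_checksum` receiving a tuple where it expects a plain path string. A defensive re-sketch of that helper (illustrative only; the merged fix, commit 06882b4 per the comments above, may differ):

```python
import os
from hashlib import sha256

def get_size_checksum(path):
    """Compute the file size and the sha256 checksum of a file."""
    if isinstance(path, tuple):  # guard against (url, local_path)-style pairs
        path = path[-1]
    m = sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            m.update(chunk)
    return os.path.getsize(path), m.hexdigest()
```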
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/38/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/37
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/37/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/37/comments
https://api.github.com/repos/huggingface/datasets/issues/37/events
https://github.com/huggingface/datasets/pull/37
611,670,295
MDExOlB1bGxSZXF1ZXN0NDEyNzg5MjQ4
37
[Datasets ToDo-List] add datasets
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false } ]
null
[ "Note:\r\n```\r\nnlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs\r\n```\r\ndirectly saves the checksums in the right place, and runs for all the dataset configurations.", "@patrickvonplaten can you provide the add the link to the PR for the dummy data? ", "https://github.com/huggingface/nlp/pull/15 - But it's probably best to checkout into this branch and look how the dummy data strtucture is for `squad` for example.", "are lock files supposed to stay ?", "> are lock files supposed to stay ?\r\n\r\nNot sure! I think the checksum command creates them, so I just uploaded them as well.", "We can trash the `lock` file, they are dummy file that are only used to avoid concurrent access when the library is run.\r\nYou can read the filelock readme and code, it's a very simple single-file library: https://github.com/benediktschmitt/py-filelock", "The testing design was slightly changed as explained in https://github.com/huggingface/nlp/pull/51 . \r\nIf creating the dummy folder is too confusing it helps to upload everything else to AWS, then run the test and check the INFO when testing on how to create the dummy folder structure.", "Closing because we can now work on master" ]
1,588,578,459,000
1,588,945,703,000
1,588,945,703,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/37", "html_url": "https://github.com/huggingface/datasets/pull/37", "diff_url": "https://github.com/huggingface/datasets/pull/37.diff", "patch_url": "https://github.com/huggingface/datasets/pull/37.patch" }
## Description This PR acts as a dashboard to see which datasets are added to the library and work. Circle CI should always be green so that we can be sure that newly added datasets are functional. This PR should not be merged. ## Progress **For the following datasets the test commands**: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` **pass**. - [x] Squad - [x] Sentiment140 - [x] XNLI - [x] Crime_and_Punish - [x] movie_rationales - [x] ai2_arc - [x] anli - [x] event2Mind - [x] Fquad - [x] blimp - [x] empathetic_dialogues - [x] cosmos_qa - [x] xquad - [x] blog_authorship_corpus - [x] SNLI - [x] break_data - [x] SQuAD v2 - [x] cfq - [x] eraser_multi_rc - [x] Glue - [x] Tydiqa - [x] wiki_qa - [x] wikitext - [x] winogrande - [x] wiqa - [x] esnli - [x] civil_comments - [x] commonsense_qa - [x] com_qa - [x] coqa - [x] wiki_split - [x] cos_e - [x] xcopa - [x] quarel - [x] quartz - [x] squad_it - [x] quoref - [x] squad_pt - [x] cornell_movie_dialog - [x] SciQ - [x] Scifact - [x] hellaswag - [x] ted_multi (in translate) - [x] Aeslc (summarization) - [x] drop - [x] gap - [x] hansard - [x] opinosis - [x] MLQA - [x] math_dataset ## How-To-Add a dataset **Before adding a dataset make sure that your branch is up to date**: 1. `git checkout add_datasets` 2. `git pull` **Add a dataset via the `convert_dataset.sh` bash script:** Running `bash convert_dataset.sh <file/to/tfds/datascript.py>` (*e.g.* `bash convert_dataset.sh ../tensorflow-datasets/tensorflow_datasets/text/movie_rationales.py`) will automatically run all the steps mentioned in **Add a dataset manually** below. Make sure that you run `convert_dataset.sh` from the root folder of `nlp`. The conversion script should almost always work for step 1): "convert dataset script from tfds to nlp format", step 2): "create checksum file" and step 3): "make style". It can also sometimes automatically run step 4) "create the correct dummy data from tfds", but this will only work if a) there is either no config name or only one config name and b) the `tfds testing/test_data/fake_example` is in the correct form. Nevertheless, it is more efficient to always run the script first, until an error occurs. If the conversion script does not work or fails at some step, then you can run the steps manually as follows: **Add a dataset manually** Make sure you run all of the following commands from the root of your `nlp` git clone. Also make sure that you changed to this branch: ``` git checkout add_datasets ``` 1) the tfds datascript file should be converted to `nlp` style: ``` python nlp-cli convert --tfds_path <path/to/tensorflow_datasets/text/your_dataset_name>.py --nlp_directory datasets/nlp ``` This will convert the tfds script and create a folder with the correct name. 2) the checksum file should be added. Use the command: ``` python nlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs ``` A checksums.txt file should be created in your folder and the structure should look as follows: squad/ β”œβ”€β”€ squad.py └── urls_checksums/ ...........└── checksums.txt Delete the created `*.lock` file afterward - it should not be uploaded to AWS. 3) run black and isort on your newly added datascript files so that they look nice: ``` make style ``` 4) the dummy data should be added. 
For this it might be useful to take a look at the structure of other examples as shown in this PR and at `<path/to/tensorflow_datasets/testing/test_data/test_data/fake_examples>` to see whether the same data can be used. 5) the data can be uploaded to AWS using the command ``` aws s3 cp datasets/nlp/<your-dataset-folder> s3://datasets.huggingface.co/nlp/<your-dataset-folder> --recursive ``` 6) check whether everything works as expected using: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` 7) push to this PR and rerun the Circle CI workflow to check whether Circle CI stays green. 8) Edit this comment and tick off your newly added dataset :-) ## TODO-list Maybe we can add a TODO-list here for everybody who feels like adding new datasets so that we will not add the same datasets. Here is a link to available datasets: https://docs.google.com/spreadsheets/d/1zOtEqOrnVQwdgkC4nJrTY6d-Av02u0XFzeKAtBM2fUI/edit#gid=0 Patrick: - [ ] boolq - *weird download link* - [ ] c4 - *beam dataset*
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/37/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/37/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/36
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/36/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/36/comments
https://api.github.com/repos/huggingface/datasets/issues/36/events
https://github.com/huggingface/datasets/pull/36
611,528,349
MDExOlB1bGxSZXF1ZXN0NDEyNjgwOTk1
36
Metrics - refactoring, adding support for download and distributed metrics
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Ok, this one seems to be ready to merge.", "> Really cool, I love it! I would just raise a tiny point, the distributive version of the metrics might not work properly with TF because it is a different way to do, why not to add a \"framework\" detection and raise warning when TF is used, saying something like \"not available yet in TF switch to non distributive metric computation\".\r\n> \r\n> What do you think?\r\n\r\nGood point @jplu I'm not sure how you should do distributed metrics evaluation for TF.\r\nThere is only one python script, right?\r\nMaybe it's just the same as in the not-distributed case?", "I think non-distributed case should work in TF for both cases indeed, but this needs to be tested." ]
1,588,546,817,000
1,589,184,962,000
1,589,184,960,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/36", "html_url": "https://github.com/huggingface/datasets/pull/36", "diff_url": "https://github.com/huggingface/datasets/pull/36.diff", "patch_url": "https://github.com/huggingface/datasets/pull/36.patch" }
Refactoring metrics to have a loading API similar to the datasets and improving the import system. # Import system The import system has been upgraded. There are now three types of imports allowed: 1. `library` imports (identified as "absolute imports") ```python import seqeval ``` => we'll test all the imports before running the scripts and if one cannot be imported we'll display an error message like this one: `ImportError: To be able to use this metric/dataset, you need to install the following dependencies ['seqeval'] using 'pip install seqeval' for instance'` 2. `internal` imports (identified as "relative imports") ```python from . import c4_utils ``` => we'll assume this points to a file in the same directory/S3-directory as the main script and download this file. 3. `external` imports (identified as "relative imports" with a comment starting with `# From:`) ```python from .nmt_bleu import compute_bleu # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py ``` => we'll assume this points to the URL of a python script (if it's a link to a github file, we'll take the raw file automatically). => the script is downloaded and renamed to the import name (here above renamed from `bleu.py` to `nmt_bleu.py`). Renaming the file can be necessary if the distant file has the same name as the dataset/metric processing script. If you forgot to rename the distant script and it has the same name as the dataset/metric, you'll get an explicit error message asking to rename the import anyway. # Hosting metrics Metrics are hosted on an S3 bucket like the dataset processing scripts. # Metrics scripts Metrics scripts have a lot in common with dataset processing scripts. They also have a `metric.info` including citations, descriptions and links to relevant pages. Metrics have more documentation to supply to ensure they are used well. Four examples are already included for reference in [./metrics](./metrics): BLEU, ROUGE, SacreBLEU and SeqEVAL. # Automatic support for distributed/multi-processing metric computation We've also added support for automatic distributed/multi-processing metric computation (e.g. when using DistributedDataParallel). We leverage our own dataset format for smart caching in this case. Here is a quick gist of a standard use of metrics (the simplest usage): ```python import nlp bleu_metric = nlp.load_metric('bleu') # If you only have a single iteration, you can easily compute the score like this predictions = model(inputs) score = bleu_metric.compute(predictions, references) # If you have a loop, you can "add" your predictions and references at each iteration instead of having to save them yourself (the metric object stores them efficiently for you) for batch in dataloader: model_input, targets = batch predictions = model(model_input) bleu_metric.add(predictions, targets) score = bleu_metric.compute() # Compute the score from all the stored predictions/references ```
It's pretty much identical to the second example above: ```python import nlp import torch # You need to give the total number of parallel python processes (num_process) and the id of each process (process_id) bleu_metric = nlp.load_metric('bleu', process_id=torch.distributed.get_rank(), num_process=torch.distributed.get_world_size()) for batch in dataloader: model_input, targets = batch predictions = model(model_input) bleu_metric.add(predictions, targets) score = bleu_metric.compute() # Compute the score on the first node by default (can be set to compute on each node as well) ```
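The three-way import classification above can be pictured with a small helper (a rough sketch for illustration; this is not the library's actual detection code):

```python
import ast

def classify_import(line: str) -> str:
    # Relative imports (level > 0) are "internal" unless they carry the
    # "# From:" marker, which flags them as "external" (fetched by URL).
    node = ast.parse(line.strip()).body[0]
    if isinstance(node, ast.ImportFrom) and node.level > 0:
        return "external" if "# From:" in line else "internal"
    return "library"

print(classify_import("import seqeval"))          # library
print(classify_import("from . import c4_utils"))  # internal
print(classify_import(
    "from .nmt_bleu import compute_bleu  # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py"
))                                                # external
```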
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/36/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/36/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/35
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/35/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/35/comments
https://api.github.com/repos/huggingface/datasets/issues/35/events
https://github.com/huggingface/datasets/pull/35
611,413,731
MDExOlB1bGxSZXF1ZXN0NDEyNjAyMTc0
35
[Tests] fix typo
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,512,229,000
1,588,512,261,000
1,588,512,260,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/35", "html_url": "https://github.com/huggingface/datasets/pull/35", "diff_url": "https://github.com/huggingface/datasets/pull/35.diff", "patch_url": "https://github.com/huggingface/datasets/pull/35.patch" }
@lhoestq - currently the slow test fail with: ``` _____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________ self = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli' @slow def test_load_real_dataset(self, dataset_name): with tempfile.TemporaryDirectory() as temp_data_dir: > dataset = load(dataset_name, data_dir=temp_data_dir) tests/test_dataset_common.py:153: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../python_bin/nlp/load.py:497: in load dbuilder.download_and_prepare(**download_and_prepare_kwargs) ../../python_bin/nlp/builder.py:383: in download_and_prepare self._download_and_prepare(dl_manager=dl_manager, download_config=download_config) ../../python_bin/nlp/builder.py:627: in _download_and_prepare dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split, ../../python_bin/nlp/builder.py:431: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ../../python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py:95: in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) ../../python_bin/nlp/utils/download_manager.py:246: in download_and_extract return self.extract(self.download(url_or_urls)) ../../python_bin/nlp/utils/download_manager.py:186: in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) ../../python_bin/nlp/utils/download_manager.py:166: in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum(path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ path = ('', '/tmp/tmpkajlg9yc/downloads/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5') def get_size_checksum(path: str) -> Tuple[int, str]: """Compute the file size and the sha256 checksum of a file""" m = sha256() > with open(path, "rb") as f: E TypeError: expected str, bytes or os.PathLike object, not tuple ../../python_bin/nlp/utils/checksums_utils.py:81: TypeError ``` - the checksums probably need to be updated no? And we should also think about how to write a test for the checksums.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/35/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/35/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/34
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/34/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/34/comments
https://api.github.com/repos/huggingface/datasets/issues/34/events
https://github.com/huggingface/datasets/pull/34
611,385,516
MDExOlB1bGxSZXF1ZXN0NDEyNTg0OTM0
34
[Tests] add slow tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,503,682,000
1,588,508,310,000
1,588,508,309,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/34", "html_url": "https://github.com/huggingface/datasets/pull/34", "diff_url": "https://github.com/huggingface/datasets/pull/34.diff", "patch_url": "https://github.com/huggingface/datasets/pull/34.patch" }
This PR adds a slow test that downloads the "real" dataset. The test is decorated as "slow" so that it will not automatically run on Circle CI. Before uploading a dataset, one should manually check that this test passes by running ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-script-name> ``` This PR should be merged after PR: #33
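Such an environment gate is commonly implemented as a decorator like the following (a sketch, not necessarily this repo's exact code):

```python
import os
import unittest

def slow(test_case):
    # Skip the wrapped test unless RUN_SLOW is set to a truthy value.
    return unittest.skipUnless(
        os.environ.get("RUN_SLOW", "").lower() in ("1", "true", "yes"),
        "test is slow; set RUN_SLOW=1 to run it",
    )(test_case)
```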
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/34/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/33
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/33/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/33/comments
https://api.github.com/repos/huggingface/datasets/issues/33/events
https://github.com/huggingface/datasets/pull/33
611,052,081
MDExOlB1bGxSZXF1ZXN0NDEyMzU1ODE0
33
Big cleanup/refactoring for clean serialization
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Great! I think when this merged, we can merge sure that Circle Ci stays happy when uploading new datasets. " ]
1,588,376,757,000
1,588,508,254,000
1,588,508,253,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/33", "html_url": "https://github.com/huggingface/datasets/pull/33", "diff_url": "https://github.com/huggingface/datasets/pull/33.diff", "patch_url": "https://github.com/huggingface/datasets/pull/33.patch" }
This PR cleans many base classes to re-build them as `dataclasses`. We can thus use a simple serialization workflow for `DatasetInfo`, including its `Features` and `SplitDict`, based on `dataclasses` `asdict()`. The resulting code is a lot shorter, can be easily serialized/deserialized, dataset infos are human-readable, and we can get rid of the `dataclass_json` dependency. The scripts have breaking changes and the conversion tool is updated. Example of dataset info in SQuAD script now: ```python def _info(self): return nlp.DatasetInfo( description=_DESCRIPTION, features=nlp.Features({ "id": nlp.Value('string'), "title": nlp.Value('string'), "context": nlp.Value('string'), "question": nlp.Value('string'), "answers": nlp.Sequence({ "text": nlp.Value('string'), "answer_start": nlp.Value('int32'), }), }), # No default supervised_keys (as we have to pass both question # and context as input). supervised_keys=None, homepage="https://rajpurkar.github.io/SQuAD-explorer/", citation=_CITATION, ) ``` Example of serialized dataset info: ```json { "description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n", "citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n", "homepage": "https://rajpurkar.github.io/SQuAD-explorer/", "license": "", "features": { "id": { "dtype": "string", "_type": "Value" }, "title": { "dtype": "string", "_type": "Value" }, "context": { "dtype": "string", "_type": "Value" }, "question": { "dtype": "string", "_type": "Value" }, "answers": { "feature": { "text": { "dtype": "string", "_type": "Value" }, "answer_start": { "dtype": "int32", "_type": "Value" } }, "length": -1, "_type": "Sequence" } }, "supervised_keys": null, "name": "squad", "version": { "version_str": "1.0.0", "description": "New split API (https://tensorflow.org/datasets/splits)", "nlp_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0 }, "splits": { "train": { "name": "train", "num_bytes": 79426386, "num_examples": 87599, "dataset_name": "squad" }, "validation": { "name": "validation", "num_bytes": 10491883, "num_examples": 10570, "dataset_name": "squad" } }, "size_in_bytes": 0, "download_size": 35142551, "download_checksums": [] } ```
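A toy version of this workflow (illustrative class and fields, not the library's actual `DatasetInfo`):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ToyDatasetInfo:
    description: str = ""
    citation: str = ""
    splits: dict = field(default_factory=dict)

# Plain dataclasses round-trip to JSON via asdict(), with no extra
# dependency such as dataclass_json.
info = ToyDatasetInfo(description="demo", splits={"train": {"num_examples": 87599}})
serialized = json.dumps(asdict(info))
restored = ToyDatasetInfo(**json.loads(serialized))
assert restored == info
```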
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/33/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/32
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/32/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/32/comments
https://api.github.com/repos/huggingface/datasets/issues/32/events
https://github.com/huggingface/datasets/pull/32
610,715,580
MDExOlB1bGxSZXF1ZXN0NDEyMTAzMzIx
32
Fix map caching notebooks
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,334,126,000
1,588,508,158,000
1,588,508,157,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/32", "html_url": "https://github.com/huggingface/datasets/pull/32", "diff_url": "https://github.com/huggingface/datasets/pull/32.diff", "patch_url": "https://github.com/huggingface/datasets/pull/32.patch" }
Previously, caching results with `.map()` didn't work in notebooks. To reuse a result, `.map()` serializes the function with `dill.dumps` and then hashes the result. The problem is that when using `dill.dumps` to serialize a function, it also saves its origin (filename + line no.) and the origin of all the `globals` this function needs. However for notebooks and shells, the filename looks like \<ipython-input-13-9ed2afe61d25\> and the line no. changes often. To fix the problem, I added a new dispatch function for code objects that ignores the origin of the code if it comes from a notebook or a python shell. I tested these cases in a notebook: - lambda functions - named functions - methods - classmethods - staticmethods - classes that implement `__call__` The caching now works as expected for all of them :) I also tested the caching in the demo notebook and it works fine!
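The fingerprinting idea boils down to something like this (a minimal sketch; the hash function and the patched dill dispatch for notebook code objects are assumptions, not the exact implementation):

```python
import dill
from hashlib import sha256

def fingerprint(func) -> str:
    # Hash the dill serialization of the function; if the serialization is
    # stable across notebook re-executions, the .map() cache can be reused.
    return sha256(dill.dumps(func)).hexdigest()

def add_one(x):
    return x + 1

print(fingerprint(add_one))
```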
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/32/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/31
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/31/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/31/comments
https://api.github.com/repos/huggingface/datasets/issues/31/events
https://github.com/huggingface/datasets/pull/31
610,677,641
MDExOlB1bGxSZXF1ZXN0NDEyMDczNDE4
31
[Circle ci] Install a virtual env before running tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,327,877,000
1,588,370,776,000
1,588,370,775,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/31", "html_url": "https://github.com/huggingface/datasets/pull/31", "diff_url": "https://github.com/huggingface/datasets/pull/31.diff", "patch_url": "https://github.com/huggingface/datasets/pull/31.patch" }
Install a virtual env before running tests to avoid running into sudo issues when dynamically downloading files. The same number of tests now pass / fail as on my local machine: ![Screenshot from 2020-05-01 12-14-44](https://user-images.githubusercontent.com/23423619/80798814-8a0a0a80-8ba5-11ea-8db8-599d33bbfccd.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/31/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/31/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/30
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/30/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/30/comments
https://api.github.com/repos/huggingface/datasets/issues/30/events
https://github.com/huggingface/datasets/pull/30
610,549,072
MDExOlB1bGxSZXF1ZXN0NDExOTY4Mzk3
30
add metrics which require download files from github
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,588,306,402,000
1,589,185,194,000
1,589,185,194,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/30", "html_url": "https://github.com/huggingface/datasets/pull/30", "diff_url": "https://github.com/huggingface/datasets/pull/30.diff", "patch_url": "https://github.com/huggingface/datasets/pull/30.patch" }
To download files from GitHub, I copied `load_dataset_module` and its dependencies (without the builder) from `load.py` to `metrics/metric_utils.py`. I made the following changes:
- copy the needed files into a `metric_name` folder
- delete all other files that are not needed

For metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all the external URLs. Then I create a `<metric_name>.py` in which the external files are loaded using `<metric_name>_imports.py`.
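A minimal sketch of this two-file pattern, with hypothetical module names and a placeholder URL (the real metric scripts and download helpers may differ):

```python
# bleu_imports.py -- hypothetical <metric_name>_imports.py: it only lists
# the external files the metric needs.
EXTERNAL_URLS = {
    "tokenizer": "https://example.com/external/tokenizer.py",  # placeholder URL
}
```

```python
# bleu.py -- hypothetical <metric_name>.py: it fetches and loads the files
# listed in bleu_imports.py before defining the metric itself.
import importlib.util
import urllib.request
from pathlib import Path

from bleu_imports import EXTERNAL_URLS

def load_external_modules(cache_dir: str = "./external") -> dict:
    Path(cache_dir).mkdir(exist_ok=True)
    modules = {}
    for name, url in EXTERNAL_URLS.items():
        target = Path(cache_dir) / f"{name}.py"
        if not target.exists():
            urllib.request.urlretrieve(url, str(target))  # download once, then reuse
        spec = importlib.util.spec_from_file_location(name, str(target))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        modules[name] = module
    return modules
```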
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/30/timeline
null
true