| Column | Type | Values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 48–51 |
| id | int64 | 600M–1.08B |
| node_id | stringlengths | 18–24 |
| number | int64 | 2–3.45k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,640B |
| updated_at | int64 | 1,588B–1,640B |
| closed_at | int64 | 1,588B–1,640B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| draft | null | |
| pull_request | null | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/337/comments
https://api.github.com/repos/huggingface/datasets/issues/337/events
https://github.com/huggingface/datasets/issues/337
650,035,887
MDU6SXNzdWU2NTAwMzU4ODc=
337
[Feature request] Export Arrow dataset to TFRecords
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,593,704,832,000
1,595,409,372,000
1,595,409,372,000
CONTRIBUTOR
null
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:

```python
# use these existing methods
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex))
ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"])
# then add this method
ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord")
```

which would create files like so:

```bash
/my/tfrecords/myrecord_1.tfrecord
/my/tfrecords/myrecord_2.tfrecord
...
```

I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/337/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/337/timeline
null
null
null
false
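The `export` method requested above is a proposal, not an existing API at the time of the issue. As a rough, hypothetical sketch (using today's `datasets` package name, a BERT tokenizer, and a helper function invented purely for illustration), sharded TFRecord files could be written from an Arrow-backed dataset with `tf.io.TFRecordWriter` like this:

```python
import os
import tensorflow as tf
from datasets import load_dataset  # the library was still called "nlp" when the issue was filed
from transformers import AutoTokenizer

def export_to_tfrecords(ds, folder, prefix, num_shards=8):
    """Hypothetical helper: write integer columns of `ds` out as sharded TFRecord files."""
    os.makedirs(folder, exist_ok=True)
    columns = ["input_ids", "token_type_ids", "attention_mask"]
    for shard_id in range(num_shards):
        # Dataset.shard gives a contiguous slice of the dataset for this shard.
        shard = ds.shard(num_shards=num_shards, index=shard_id)
        path = os.path.join(folder, f"{prefix}_{shard_id + 1}.tfrecord")
        with tf.io.TFRecordWriter(path) as writer:
            for example in shard:
                feature = {
                    col: tf.train.Feature(
                        int64_list=tf.train.Int64List(value=example[col])
                    )
                    for col in columns
                }
                record = tf.train.Example(features=tf.train.Features(feature=feature))
                writer.write(record.SerializeToString())

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex["text"]))
export_to_tfrecords(ds, folder="/my/tfrecords", prefix="myrecord", num_shards=8)
```

Sharding via `Dataset.shard` keeps each output file a manageable size, which mirrors the `num_shards` argument in the proposed API.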
https://api.github.com/repos/huggingface/datasets/issues/336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/336/comments
https://api.github.com/repos/huggingface/datasets/issues/336/events
https://github.com/huggingface/datasets/issues/336
649,914,203
MDU6SXNzdWU2NDk5MTQyMDM=
336
[Dataset requests] New datasets for Open Question Answering
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[ { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false } ]
null
[]
1,593,694,983,000
1,594,890,262,000
1,594,890,262,000
MEMBER
null
We are still missing a few datasets for Open Question Answering, which is currently a field under strong development. Namely, it would be really nice to add:

- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al., 2015) [not open-source]
- MS-MARCO (Nguyen et al., 2016) [done]
- SearchQA (Dunn et al., 2017) [done]
- FEVER (Thorne et al., 2018) [done]

All these datasets are cited in http://arxiv.org/abs/2005.11401
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/336/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/336/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/331/comments
https://api.github.com/repos/huggingface/datasets/issues/331/events
https://github.com/huggingface/datasets/issues/331
648,533,199
MDU6SXNzdWU2NDg1MzMxOTk=
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```", "here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```", "> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> 
INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': 
SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k/8.90k [00:18<00:00, 486B/s]\r\n\r\nDownloading: 100%\r\n9.37k/9.37k [00:00<00:00, 234kB/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nDownloading:\r\n159M/? [00:09<00:00, 16.7MB/s]\r\n\r\nDownloading:\r\n376M/? [00:06<00:00, 62.6MB/s]\r\n\r\nDownloading:\r\n2.11M/? [00:06<00:00, 333kB/s]\r\n\r\nDownloading:\r\n46.4M/? [00:02<00:00, 18.4MB/s]\r\n\r\nDownloading:\r\n2.43M/? [00:00<00:00, 2.62MB/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0. Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```", "In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)", "Yes thanks for the support! I cleared out my cache folder and everything works fine now" ]
1,593,555,693,000
1,594,299,820,000
1,594,299,820,000
CONTRIBUTOR
null
```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
    builder_instance.download_and_prepare(
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
    self._download_and_prepare(
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/331/timeline
null
null
null
false
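The fix reported at the end of the thread was simply to clear the cache and regenerate the dataset. A minimal sketch of that workflow, assuming the default cache location shown in the logs (`~/.cache/huggingface/datasets`):

```python
import logging
import shutil
from pathlib import Path

import nlp  # the library later renamed to "datasets"

# Enable INFO logs, as suggested in the thread, to see how many examples get written.
logging.basicConfig(level=logging.INFO)

# Remove the possibly corrupted cached Arrow files for this dataset, then regenerate them.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets" / "cnn_dailymail"
if cache_dir.exists():
    shutil.rmtree(cache_dir)

dataset = nlp.load_dataset("cnn_dailymail", "3.0.0")
print({split: ds.num_rows for split, ds in dataset.items()})
```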
https://api.github.com/repos/huggingface/datasets/issues/329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/329/comments
https://api.github.com/repos/huggingface/datasets/issues/329/events
https://github.com/huggingface/datasets/issues/329
648,446,979
MDU6SXNzdWU2NDg0NDY5Nzk=
329
[Bug] FileLock dependency incompatible with filesystem
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, can you give details on your environment/os/packages versions/etc?", "Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. But Python is able to create new files and write to them outside of the FileLock package.\r\n\r\nWhen I attempt to use FileLock within a Docker container by writing to `/root/.cache/hello.txt`, it succeeds. So there's some permissions issue. But it's not a Docker configuration issue; I've replicated it without Docker.\r\n```bash\r\necho \"hello world\" >> hello.txt\r\nls -l\r\n\r\n-rw-rw-r-- 1 ubuntu ubuntu 10 Jun 30 19:52 hello.txt\r\n```", "Looks like the `flock` syscall does not work on Lustre filesystems by default: https://github.com/benediktschmitt/py-filelock/issues/67.\r\n\r\nI added the `-o flock` option when mounting the filesystem, as [described here](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step2.html), which fixed the issue.", "Awesome, thanks a lot for sharing your fix!" ]
1,593,546,331,000
1,593,586,558,000
1,593,552,786,000
CONTRIBUTOR
null
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")`

But when I attempt to cache it on an external volume, it hangs indefinitely:

`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`

The filesystem when hanging looks like this:

```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```

It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:

```python
open("/fsx/hello.txt").write("hello") # succeeds

from filelock import FileLock
with FileLock("/fsx/hello.lock"):
    open("/fsx/hello.txt").write("hello") # hangs indefinitely
```

Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/329/timeline
null
null
null
false
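The root cause identified above is that the Lustre/FSx mount did not support the `flock` syscall that `filelock` relies on. As a small diagnostic sketch (not part of `nlp` or `filelock`, Unix-only, and the helper name is made up), one can probe a mount directly with `fcntl`:

```python
import fcntl

def supports_flock(path):
    """Try a non-blocking flock on `path`; return False if the filesystem refuses the syscall."""
    with open(path, "w") as f:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            fcntl.flock(f, fcntl.LOCK_UN)
            return True
        except OSError:
            return False

# e.g. a Lustre/FSx mount typically needs the "-o flock" mount option for this to succeed
print(supports_flock("/fsx/flock_probe.lock"))
```

If the probe fails, remounting with the `-o flock` option, as described in the thread, is the fix.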
https://api.github.com/repos/huggingface/datasets/issues/328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/328/comments
https://api.github.com/repos/huggingface/datasets/issues/328/events
https://github.com/huggingface/datasets/issues/328
648,326,841
MDU6SXNzdWU2NDgzMjY4NDE=
328
Fork dataset
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for example). Custom dataset scripts can be called locally with `nlp.load_dataset(path_to_my_script_directory)`.\r\n\r\nThis should help you get what you call \"Dataset1\".\r\n\r\nThen using some dataset transforms like `.map` for example you can get to \"DatasetNER\" and \"DatasetREL\".\r\n", "Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - \r\n\r\n```\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 60 \r\n 61 def __init__(self, source):\r\n---> 62 self._open(source)\r\n 63 \r\n 64 \r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\nBut I'm going to give the generator_dataset_builder a try.\r\n\r\n1 more quick question -- can .map be used to output different length mappings -- could I skip one, or yield 2, can you map_batch ", "You can use `.map(my_func, batched=True)` and return less examples, or more examples if you want", "Thanks this answers my question. I think the issue I was having using the json loader were due to using gzipped jsonl files.\r\n\r\nThe error I get now is :\r\n\r\n```\r\n\r\nUsing custom data configuration test\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-38-29082a31e5b2> in <module>\r\n 5 print(ner_datafiles)\r\n 6 \r\n----> 7 ds = nlp.load_dataset(\"json\", \"test\", data_files=ner_datafiles[0])\r\n 8 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: int64> nor list<item: int64>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions.\r\n```\r\n\r\nIf I just create a pa- table manually like is done in the jsonloader -- it seems to work fine. Ths JSON I'm trying to load isn't overly complex - 1 integer field, the rest text fields with a nested list of objects with text fields .", "I'll close this -- It's still unclear how to go about troubleshooting the json example as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for a better support for using nlp for making custom data-loaders." ]
1,593,535,373,000
1,594,071,839,000
1,594,071,839,000
NONE
null
We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and JSON with Entity and Relation annotations and creates two datasets for training NER and Relation prediction heads. Is there some good way to "fork" a dataset, e.g.:

1. text + json -> Dataset1
2. Dataset1 -> DatasetNER
3. Dataset1 -> DatasetREL

or

1. text + json -> Dataset1
2. Dataset1 -> DatasetNER
3. Dataset1 + DatasetNER -> DatasetREL
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/328/timeline
null
null
null
false
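A compact sketch of the recipe suggested in the replies: build "Dataset1" from the raw JSON once, then derive the task-specific datasets with `map` and `filter`. The file name and column names (`text`, `entities`, `relations`) are placeholders rather than the original data, and `batched=True` is what allows a mapped batch to return a different number of rows.

```python
from datasets import load_dataset  # "nlp" at the time of the issue

# Dataset1: parse the raw text + JSON annotations once.
dataset1 = load_dataset("json", data_files={"train": "annotations.jsonl"})["train"]

# DatasetNER: one example per entity span (may yield more rows than dataset1).
def to_ner(batch):
    texts, spans = [], []
    for text, entities in zip(batch["text"], batch["entities"]):
        for ent in entities:
            texts.append(text)
            spans.append(ent)
    return {"text": texts, "entity": spans}

dataset_ner = dataset1.map(to_ner, batched=True, remove_columns=dataset1.column_names)

# DatasetREL: keep only the examples that actually carry relation annotations.
dataset_rel = dataset1.filter(lambda ex: len(ex["relations"]) > 0)
```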
https://api.github.com/repos/huggingface/datasets/issues/326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/326/comments
https://api.github.com/repos/huggingface/datasets/issues/326/events
https://github.com/huggingface/datasets/issues/326
648,126,103
MDU6SXNzdWU2NDgxMjYxMDM=
326
Large dataset in Squad2-format
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to let the users do their training/evaluations with the exact same version of the dataset.\r\nWe allow for each dataset to specify a version (ex: 1.0.0) and increment this number every time there are new samples in the dataset for example. Does it look like a good solution for you ? Or would you rather have one final version with the full dataset ?", "It would also be good if there is any possibility for versioning, I think this way is much better than the dynamic way.\nIf you mean that part to put the tiles into one is the generation it would take up to 15-20 minutes on home computer hardware.\nAre there any compression or optimization algorithms while generating the dataset ?\nOtherwise the hardware limit is around 32 GB ram at the moment.\nIf everything works well we will add some more gigabytes of data in future what would make it pretty memory costly.", "15-20 minutes is fine !\r\nAlso there's no RAM limitations as we save to disk every 1000 elements while generating the dataset by default.\r\nAfter generation, the dataset is ready to use with (again) no RAM limitations as we do memory-mapping.", "Wow, that sounds pretty cool.\nActually I have the problem of running out of memory while tokenization on our local machine.\nThat wouldn't happen again, would it ?", "You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)` that writes the tokenized texts on disk as well. And then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :)", "Does it have an affect to the trainings speed ?", "In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.", "Closing this one. Feel free to re-open if you have other questions" ]
1,593,519,539,000
1,594,285,310,000
1,594,285,310,000
NONE
null
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these:

- Contexts: 1.047.671
- Questions: 1.677.732
- Answers: 6.742.406
- Unanswerable: 377.398

It is already cleaned:

<pre><code>
train_data = [
    {
        'context': "this is the context",
        'qas': [
            {
                'id': "00002",
                'is_impossible': False,
                'question': "whats is this",
                'answers': [
                    {
                        'text': "answer",
                        'answer_start': 0
                    }
                ]
            },
            {
                'id': "00003",
                'is_impossible': False,
                'question': "question2",
                'answers': [
                    {
                        'text': "answer2",
                        'answer_start': 1
                    }
                ]
            }
        ]
    }
]
</code></pre>

Because it is growing every day, we are thinking about a structure like this: we host a JSON file containing all the download links, and the script can load it dynamically. At the moment it is around ~20 GB. Any advice on how to handle this, or a ready-to-use template?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/326/timeline
null
null
null
false
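The memory argument made in the replies (tokenization through `.map` writes its results to disk in batches and keeps the dataset memory-mapped) can be sketched as follows; the dataset identifier and tokenizer are stand-ins, since the corpus discussed in this thread was not yet published:

```python
from datasets import load_dataset  # "nlp" when the issue was written
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Placeholder for the ~20 GB SQuAD2-style corpus discussed above.
my_dataset = load_dataset("squad_v2", split="train")

def my_tokenize_function(example):
    return tokenizer(example["question"], example["context"], truncation=True)

# Results are flushed to Arrow files on disk as they are produced, so RAM usage stays flat.
my_tokenized_dataset = my_dataset.map(my_tokenize_function)
print(my_tokenized_dataset)
```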
https://api.github.com/repos/huggingface/datasets/issues/324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/324/comments
https://api.github.com/repos/huggingface/datasets/issues/324/events
https://github.com/huggingface/datasets/issues/324
647,525,725
MDU6SXNzdWU2NDc1MjU3MjU=
324
Error when calculating glue score
{ "login": "D-i-l-r-u-k-s-h-i", "id": 47185867, "node_id": "MDQ6VXNlcjQ3MTg1ODY3", "avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i", "html_url": "https://github.com/D-i-l-r-u-k-s-h-i", "followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers", "following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}", "gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}", "starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions", "organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs", "repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos", "events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}", "received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.", "I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. 
Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```", "MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).", "Closing this one. Feel free to re-open if you have other questions :)" ]
1,593,449,628,000
1,594,286,014,000
1,594,286,014,000
NONE
null
I was trying the glue score along with other metrics here, but glue gives me this error:

```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)

6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
    191         """
    192         if predictions is not None:
--> 193             self.add_batch(predictions=predictions, references=references)
    194         self.finalize(timeout=timeout)
    195

/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
    207         if self.writer is None:
    208             self._init_writer()
--> 209         self.writer.write_batch(batch)
    210
    211     def add(self, prediction=None, reference=None, **kwargs):

/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
    155         if self.pa_writer is None:
    156             self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157         pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
    158         if writer_batch_size is None:
    159             writer_batch_size = self.writer_batch_size

/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()

/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()

TypeError: an integer is required (got type str)
```

I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/324/timeline
null
null
null
false
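As the replies point out, the GLUE metrics for cola and mrpc are classification metrics and expect integer label ids rather than raw sentences. A minimal sketch with made-up labels:

```python
import nlp  # later renamed to "datasets"; metrics have since moved to the "evaluate" library

glue_metric = nlp.load_metric("glue", name="mrpc")

# Integer class ids: 1 = "semantically equivalent", 0 = "not equivalent" (illustrative values).
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

# Keyword arguments match the compute() signature visible in the traceback above.
glue_score = glue_metric.compute(predictions=predictions, references=references)
print(glue_score)  # e.g. {"accuracy": ..., "f1": ...}
```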
https://api.github.com/repos/huggingface/datasets/issues/321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/321/comments
https://api.github.com/repos/huggingface/datasets/issues/321/events
https://github.com/huggingface/datasets/issues/321
647,271,526
MDU6SXNzdWU2NDcyNzE1MjY=
321
ERROR:root:mwparserfromhell
{ "login": "Shiro-LK", "id": 26505641, "node_id": "MDQ6VXNlcjI2NTA1NjQx", "avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shiro-LK", "html_url": "https://github.com/Shiro-LK", "followers_url": "https://api.github.com/users/Shiro-LK/followers", "following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}", "gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions", "organizations_url": "https://api.github.com/users/Shiro-LK/orgs", "repos_url": "https://api.github.com/users/Shiro-LK/repos", "events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}", "received_events_url": "https://api.github.com/users/Shiro-LK/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashes.\r\n\r\nIt will help us know if we have to fix it on our side or if it is a `mwparserfromhell` issue.", "Hi, \r\n\r\nThank you for you answer.\r\nI have try to print the bad section using `try` and `except`, but it is a bit weird as the error seems to appear 3 times for instance, but the two first error does not print anything (as if the function did not go in the `except` part).\r\nFor the third one, I got that (I haven't display the entire text) :\r\n\r\n> error : ==== Parque nacional Cajas ====\r\n> {{AP|Parque nacional Cajas}}\r\n> [[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\n> El parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n> [[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\n> leturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n> Para acceder al parque desde la costa, la vía Molleturo-Cuenca es también la mejor opción.\r\n\r\nHow can I display the link instead of the text ? I suppose it will help you more ", "The error appears several times as Apache Beam retries to process examples up to 4 times irc.\r\n\r\nI just tried to run this text into `mwparserfromhell` but it worked without the issue.\r\n\r\nI used this code (from the `wikipedia.py` script):\r\n```python\r\nimport mwparserfromhell as parser\r\nimport re\r\nimport six\r\n\r\nraw_content = r\"\"\"==== Parque nacional Cajas ====\r\n{{AP|Parque nacional Cajas}}\r\n[[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\nEl parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n[[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\nleturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. 
Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n\"\"\"\r\n\r\nwikicode = parser.parse(raw_content)\r\n\r\n# Filters for references, tables, and file/image links.\r\nre_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\r\n\r\ndef rm_wikilink(obj):\r\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\r\n\r\ndef rm_tag(obj):\r\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\r\n\r\ndef rm_template(obj):\r\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\r\n\r\ndef try_remove_obj(obj, section):\r\n try:\r\n section.remove(obj)\r\n except ValueError:\r\n # For unknown reasons, objects are sometimes not found.\r\n pass\r\n\r\nsection_text = []\r\nfor section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\r\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\r\n try_remove_obj(obj, section)\r\n\r\n section_text.append(section.strip_code().strip())\r\n```", "Not sure why we're having this issue. Maybe could you get also the file that's causing that ?", "thanks for your answer.\r\nHow can I know which file is causing the issue ? \r\nI am trying to load the spanish wikipedia data. ", "Because of the way Apache Beam works we indeed don't have access to the file name at this point in the code.\r\nWe'll have to use some tricks I think :p \r\n\r\nYou can append `filepath` to `title` in `wikipedia.py:L512` for example. [[EDIT: it's L494 my bad]]\r\nThen just do `try:...except:` on the call of `_parse_and_clean_wikicode` L500 I guess.\r\n\r\nThanks for diving into this ! I tried it myself but I run out of memory on my laptop\r\nAs soon as we have the name of the file it should be easier to find what's wrong.", "Thanks for your help.\r\n\r\nI tried to print the \"title\" of the document inside the` except (mwparserfromhell.parser.ParserError) as e`,the title displayed was : \"Campeonato Mundial de futsal de la AMF 2015\". (Wikipedia ES) Is it what you were looking for ?", "Thanks a lot @Shiro-LK !\r\n\r\nI was able to reproduce the issue. 
It comes from [this table on wikipedia](https://es.wikipedia.org/wiki/Campeonato_Mundial_de_futsal_de_la_AMF_2015#Clasificados) that can't be parsed.\r\n\r\nThe file in which the problem occurs comes from the wikipedia dumps, and it can be downloaded [here](https://dumps.wikimedia.org/eswiki/20200501/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2)\r\n\r\nParsing the file this way raises the parsing issue:\r\n\r\n```python\r\nimport mwparserfromhell as parser\r\nfrom tqdm.auto import tqdm\r\nimport bz2\r\nimport six\r\nimport logging\r\nimport codecs\r\nimport xml.etree.cElementTree as etree\r\n\r\nfilepath = \"path/to/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2\"\r\n\r\ndef _extract_content(filepath):\r\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, \"rb\") as f:\r\n f = bz2.BZ2File(filename=f)\r\n if six.PY3:\r\n # Workaround due to:\r\n # https://github.com/tensorflow/tensorflow/issues/33563\r\n utf_f = codecs.getreader(\"utf-8\")(f)\r\n else:\r\n utf_f = f\r\n # To clear root, to free-up more memory than just `elem.clear()`.\r\n context = etree.iterparse(utf_f, events=(\"end\",))\r\n context = iter(context)\r\n unused_event, root = next(context)\r\n for unused_event, elem in tqdm(context, total=949087):\r\n if not elem.tag.endswith(\"page\"):\r\n continue\r\n namespace = elem.tag[:-4]\r\n title = elem.find(\"./{0}title\".format(namespace)).text\r\n ns = elem.find(\"./{0}ns\".format(namespace)).text\r\n id_ = elem.find(\"./{0}id\".format(namespace)).text\r\n # Filter pages that are not in the \"main\" namespace.\r\n if ns != \"0\":\r\n root.clear()\r\n continue\r\n raw_content = elem.find(\"./{0}revision/{0}text\".format(namespace)).text\r\n root.clear()\r\n\r\n if \"Campeonato Mundial de futsal de la AMF 2015\" in title:\r\n yield (id_, title, raw_content)\r\n\r\nfor id_, title, raw_content in _extract_content(filepath):\r\n wikicode = parser.parse(raw_content)\r\n```\r\n\r\nThe copied the raw content that can't be parsed [here](https://pastebin.com/raw/ZbmevLyH).\r\n\r\nThe minimal code to reproduce is:\r\n```python\r\nimport mwparserfromhell as parser\r\nimport requests\r\n\r\nraw_content = requests.get(\"https://pastebin.com/raw/ZbmevLyH\").content.decode(\"utf-8\")\r\nwikicode = parser.parse(raw_content)\r\n\r\n```\r\n\r\nI will create an issue on mwparserfromhell's repo to see if we can fix that\r\n", "This going to be fixed in the next `mwparserfromhell` release :)" ]
1,593,429,043,000
1,595,521,714,000
null
NONE
null
Hi, I am trying to download some wikipedia data but I got this error for Spanish ("es"); there may be other languages with the same error, I haven't tried all of them. `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.` The code I used was: `dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/321/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/320/comments
https://api.github.com/repos/huggingface/datasets/issues/320/events
https://github.com/huggingface/datasets/issues/320
647,188,167
MDU6SXNzdWU2NDcxODgxNjc=
320
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "I wonder if this means downloading failed? That corpus has a really slow server.", "This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `." ]
1,593,416,195,000
1,593,441,882,000
1,593,441,882,000
CONTRIBUTOR
null
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 172, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 132, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) ``` @srush @lhoestq
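For readers hitting the same error locally rather than in the viewer: the traceback above shows that the split check is gated on the `ignore_verifications` argument of `load_dataset` (`verify_infos = not save_infos and not ignore_verifications`). A minimal sketch of skipping the check while the recorded split sizes are being fixed (a workaround only, not a fix for the underlying decoding problem mentioned in the comments above):

```python
import nlp

# Skip the split-size verification that raises NonMatchingSplitsSizesError.
# `ignore_verifications` is the flag visible in the load_dataset signature above.
dataset = nlp.load_dataset("blog_authorship_corpus", ignore_verifications=True)
```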
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/320/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/319/comments
https://api.github.com/repos/huggingface/datasets/issues/319/events
https://github.com/huggingface/datasets/issues/319
646,792,487
MDU6SXNzdWU2NDY3OTI0ODc=
319
Nested sequences with dicts
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```" ]
1,593,301,517,000
1,593,771,720,000
1,593,771,720,000
CONTRIBUTOR
null
Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. The original data is in this format: ```python { 'title': "Title of wiki page", 'vertexSet': [ [ { 'name': "mention_name", 'sent_id': "mention in which sentence", 'pos': ["postion of mention in a sentence"], 'type': "NER_type"}, {another mention} ], [another entity] ] ... } ``` So to represent this I've attempted to write: ``` ... features=nlp.Features({ "title": nlp.Value("string"), "vertexSet": nlp.features.Sequence(nlp.features.Sequence({ "name": nlp.Value("string"), "sent_id": nlp.Value("int32"), "pos": nlp.features.Sequence(nlp.Value("int32")), "type": nlp.Value("string"), })), ... }), ... ``` This is giving me the error: ``` pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict. If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/319/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/317/comments
https://api.github.com/repos/huggingface/datasets/issues/317/events
https://github.com/huggingface/datasets/issues/317
646,555,384
MDU6SXNzdWU2NDY1NTUzODQ=
317
Adding a dataset with multiple subtasks
{ "login": "erickrf", "id": 294483, "node_id": "MDQ6VXNlcjI5NDQ4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erickrf", "html_url": "https://github.com/erickrf", "followers_url": "https://api.github.com/users/erickrf/followers", "following_url": "https://api.github.com/users/erickrf/following{/other_user}", "gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}", "starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erickrf/subscriptions", "organizations_url": "https://api.github.com/users/erickrf/orgs", "repos_url": "https://api.github.com/users/erickrf/repos", "events_url": "https://api.github.com/users/erickrf/events{/privacy}", "received_events_url": "https://api.github.com/users/erickrf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit different from your case though because each configuration is a dataset by itself (sst2, mnli).\r\nAnother example is `wikipedia` that has one configuration per language." ]
1,593,213,259,000
1,603,813,012,000
1,603,813,012,000
NONE
null
I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has its own language pairs, and some of the data is reused across subtasks. For example, in [QE 2019,](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE. I suppose these datasets could have both their word-level and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether? I read the discussion in #217, but the case of QE seems a lot simpler.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/317/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/315/comments
https://api.github.com/repos/huggingface/datasets/issues/315/events
https://github.com/huggingface/datasets/issues/315
645,888,943
MDU6SXNzdWU2NDU4ODg5NDM=
315
[Question] Best way to batch a large dataset?
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(train_dataset_gen, output_types=output_types)\r\n```\r\n\r\nloads WikiText-2 in 20 ms, and WikiText-103 in 20 ms. It appears to be lazily loading via indexing train_dataset.", "Yes this is the current best solution. We should probably show it in the tutorial notebook.\r\n\r\nNote that this solution unfortunately doesn't allow to train on TPUs (yet). See #193 ", "This approach still seems quite slow. When using TFRecords with a similar training loop, I get ~3.0-3.5 it/s on multi-node, multi-GPU training. I notice a pretty severe performance regression when scaling, with observed performance numbers. Since the allreduce step takes less than 100ms/it and I've achieved 80% scaling efficiency up to 64 GPUs, it must be the data pipeline.\r\n\r\n| Nodes | GPUs | Iterations/Second |\r\n| --- | --- | --- |\r\n| 1 | 2 | 2.01 |\r\n| 1 | 8 | 0.81 |\r\n| 2 | 16 | 0.37 |\r\n\r\nHere are performance metrics over 10k steps. The iteration speed appears to follow some sort of caching pattern. I would love to use `nlp` in my project, but a slowdown from 3.0 it/s to 0.3 it/s is too great to stomach.\r\n\r\n<img width=\"1361\" alt=\"Screen Shot 2020-07-02 at 8 29 22 AM\" src=\"https://user-images.githubusercontent.com/4564897/86378156-2f8d3900-bc3e-11ea-918b-c395c3df5377.png\">\r\n", "An interesting alternative to investigate here would be to use the tf.io library which has some support for Arrow to TF conversion: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset\r\n\r\nThere are quite a few types supported, including lists so if the unsupported columns are dropped then we could maybe have a zero-copy mapping from Arrow to TensorFlow, including tokenized inputs and 1D tensors like the ones we mostly use in NLP: https://github.com/tensorflow/io/blob/322b3170c43ecac5c6af9e39dbd18fd747913e5a/tensorflow_io/arrow/python/ops/arrow_dataset_ops.py#L44-L72\r\n\r\nHere is an introduction on Arrow to TF using tf.io: https://medium.com/tensorflow/tensorflow-with-apache-arrow-datasets-cdbcfe80a59f", "Interesting. There's no support for strings, but it does enable int and floats so that would work for tokenized inputs. \r\n\r\nArrowStreamDataset requires loading from a \"record batch iterator\", which can be instantiated from in-memory arrays as described here: https://arrow.apache.org/docs/python/ipc.html. \r\n\r\nBut the nlp.Dataset stores its data as a `pyarrow.lib.Table`, and the underlying features are `pyarrow.lib.ChunkedArray`. I can't find any documentation about lazily creating a record batch iterator from a ChunkedArray or a Table. Have you had any success?\r\n\r\nI can't find [any uses](https://grep.app/search?q=ArrowDataset&filter[lang][0]=Python) of tfio.arrow.ArrowDataset on GitHub.", "You can use `to_batches` maybe?\r\nhttps://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_batches", "Also note that since #322 it is now possible to do\r\n```python\r\nids = [1, 10, 42, 100]\r\nbatch = dataset[ids]\r\n```\r\nFrom my experience it is quite fast but it can take lots of memory for large batches (haven't played that much with it).\r\nLet me know if you think there could be a better way to implement it. 
(current code is [here](https://github.com/huggingface/nlp/blob/78628649962671b4aaa31a6b24e7275533416845/src/nlp/arrow_dataset.py#L463))", "Thanks @lhoestq! That format is much better to work with.\r\n\r\nI put together a benchmarking script. This doesn't measure the CPU-to-GPU efficiency, nor how it scales with multi-GPU multi-node training where many processes are making the same demands on the same dataset. But it does show some interesting results:\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\nimport tensorflow as tf\r\nimport time\r\n\r\ndset = nlp.load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\")\r\ndset = dset.filter(lambda ex: len(ex[\"text\"]) > 0)\r\nbsz = 1024\r\nn_batches = 100\r\n\r\ndef single_item_gen():\r\n for i in range(len(dset)):\r\n yield dset[i]\r\n\r\ndef sequential_batch_gen():\r\n for i in range(0, len(dset), bsz):\r\n yield dset[i:i+bsz]\r\n\r\ndef random_batch_gen():\r\n for i in range(len(dset)):\r\n indices = list(np.random.randint(len(dset), size=(bsz,)))\r\n yield dset[indices]\r\n\r\noutput_types = {\"text\": tf.string}\r\nsingle_item = tf.data.Dataset.from_generator(single_item_gen, output_types=output_types).batch(bsz)\r\ninterleaved = tf.data.Dataset.range(10).interleave(\r\n lambda idx: tf.data.Dataset.from_generator(single_item_gen, output_types=output_types),\r\n cycle_length=10,\r\n)\r\nsequential_batch = tf.data.Dataset.from_generator(sequential_batch_gen, output_types=output_types)\r\nrandom_batch = tf.data.Dataset.from_generator(random_batch_gen, output_types=output_types)\r\n\r\ndef iterate(tf_dset):\r\n start = time.perf_counter()\r\n for i, batch in enumerate(tf_dset.take(n_batches)):\r\n pass\r\n elapsed = time.perf_counter() - start\r\n print(f\"{tf_dset} took {elapsed:.3f} secs\")\r\n\r\niterate(single_item)\r\niterate(interleaved)\r\niterate(sequential_batch)\r\niterate(random_batch)\r\n```\r\n\r\nResults:\r\n```\r\n<BatchDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 23.005 secs\r\n<InterleaveDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.135 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.074 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.550 secs\r\n```\r\n\r\n- Batching a generator which fetches a single item is terrible.\r\n- Interleaving performs well on a single process, but doesn't scale well to multi-GPU training. I believe the bottleneck here is in Arrow dataset locking or something similar. The numbers from the table above are with interleaving.\r\n- The sequential access dominates the random access (7x faster). Is there any way to bring random access times closer to sequential access? Maybe re-indexing the dataset after shuffling each pass over the data.", "Hey @jarednielsen \r\n\r\nThanks for this very interesting analysis!! IMHO to read text data one should use `tf.data.TextLineDataset`. It would be interesting to compare what you have done with simply load with a `TextLineDataset` and see if there is a difference.\r\n\r\nA good example can be found here https://www.tensorflow.org/tutorials/load_data/text", "Thanks! I'm not actually loading in raw text data, that was just the synthetic data I created for this benchmark. A more realistic use case would be a dataset of tokenized examples, which would be a dict of lists of integers. 
TensorFlow's TextLineDataset greedily loads the dataset into the graph itself, which can lead to out-of-memory errors - one of the main reason I'm so drawn to the `nlp` library is its zero-copy no-RAM approach to dataset loading and mapping. \r\n\r\nIt's quite helpful for running a preprocessing pipeline - a sample ELECTRA pipeline I've built is here: https://github.com/jarednielsen/deep-learning-models/blob/nlp/models/nlp/common/preprocess.py.", "Sorry, I think I badly expressed myself, my bad. What I suggested is to compare with the usual loading textual data in pure TF with `TextLineDataset` with `nlp`. I know it is not recommended with very large datasets to use it, but I was curious to see how it behaves compared to a processing with `nlp` on smaller datasets.\r\n\r\nBTW your script looks very interesting, thanks for sharing!!" ]
1,593,124,220,000
1,603,813,097,000
null
CONTRIBUTOR
null
I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow: ```python train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False) columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) ### Question about this last line ### tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) ``` This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia. So I tried manual batching using `dataset.select()`: ```python idxs = np.random.randint(len(dataset), size=bsz) batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])}) tf_batch = tf.constant(batch["ids"], dtype=tf.int64) ``` This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop. Is there a performant scalable way to lazily load batches of nlp Datasets?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/315/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/315/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/312/comments
https://api.github.com/repos/huggingface/datasets/issues/312/events
https://github.com/huggingface/datasets/issues/312
645,025,561
MDU6SXNzdWU2NDUwMjU1NjE=
312
[Feature request] Add `shard()` method to dataset
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?", "Thanks for the pointer to those functions! It's still a little more verbose since you have to manually calculate which ids each rank would keep, but definitely works.\r\n\r\nMy use case is multi-node, multi-GPU training and avoiding global batches of duplicate elements. I'm using horovod. You can shuffle indices, or set random seeds, but explicitly sharding the dataset up front is the safest and clearest way I've found to do so." ]
1,593,038,913,000
1,594,038,936,000
1,594,038,936,000
CONTRIBUTOR
null
Currently, to shard a dataset into 10 pieces on different ranks, you can run ```python rank = 3 # for example size = 10 dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]") ``` However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this? ```python rank = 3 size = 64 dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size) ``` TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code.
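Until a `shard()` method exists, a sketch of the equivalent manual bookkeeping with `select()`, as suggested in the comments above (the every-`size`-th-example assignment mirrors `tf.data.Dataset.shard` but is only one possible choice):

```python
import nlp

rank = 3   # this worker's rank
size = 64  # total number of workers

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Give this rank every `size`-th example so the shards are disjoint and cover
# the whole dataset even when `size` does not divide 100.
shard_indices = list(range(rank, len(dataset), size))
dataset_shard = dataset.select(shard_indices)
```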
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/312/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/307/comments
https://api.github.com/repos/huggingface/datasets/issues/307/events
https://github.com/huggingface/datasets/issues/307
644,187,262
MDU6SXNzdWU2NDQxODcyNjI=
307
Specify encoding for MRPC
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,592,951,089,000
1,593,087,369,000
1,593,087,369,000
CONTRIBUTOR
null
Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset: ```python dataset = nlp.load_dataset('glue', 'mrpc') ``` ```python Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0... --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname) 369 try: --> 370 yield tmp_dir 371 if os.path.isdir(dirname): ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications --> 431 self._download_and_prepare( 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator) 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files) 514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split) --> 515 for example in examples: 516 yield example["idx"], example ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split) 576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE) --> 577 for n, row in enumerate(reader): 578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined> ``` The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. I am going to propose a new PR :)
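To make the proposed fix concrete, the change amounts to passing an explicit encoding to `open` before building the `csv.DictReader` shown in the traceback. A small sketch (the file name here is an illustrative placeholder, not the exact path used in glue.py):

```python
import csv

# Opening the MRPC TSV with an explicit UTF-8 encoding prevents Windows from
# falling back to cp1252, which cannot decode byte 0x9d and raises UnicodeDecodeError.
with open("msr_paraphrase_train.txt", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for n, row in enumerate(reader):
        pass  # the GLUE script compares row["#1 ID"] / row["#2 ID"] against dev_ids here
```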
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/307/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/305/comments
https://api.github.com/repos/huggingface/datasets/issues/305/events
https://github.com/huggingface/datasets/issues/305
644,148,149
MDU6SXNzdWU2NDQxNDgxNDk=
305
Importing downloaded package repository fails
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[]
1,592,946,545,000
1,596,127,463,000
1,596,127,463,000
MEMBER
null
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh). Currently however, the code seems to have trouble with imports within the package. For example: ``` import nlp coval = nlp.load_metric('coval') ``` yields: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module> from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module> from conll import mention ModuleNotFoundError: No module named 'conll' ``` Not sure what the fix would be there.
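To illustrate why the import fails: `coval_backend/conll/reader.py` does `from conll import mention`, an absolute import that only resolves if the directory containing `conll/` is on `sys.path`. A hedged sketch of one possible direction (the cache path is a placeholder, and this is not necessarily how the library ended up fixing it) would be to put the unpacked repository root on the path before importing:

```python
import sys
import importlib

# Placeholder path to the unpacked coval repository inside the metrics cache.
unpacked_repo_root = "/path/to/nlp/metrics/coval/<hash>/coval_backend"

# With the repo root on sys.path, `from conll import mention` resolves as a
# top-level import, which is how the upstream repository expects to be run.
sys.path.insert(0, unpacked_repo_root)
conll_reader = importlib.import_module("conll.reader")
```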
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/305/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/304/comments
https://api.github.com/repos/huggingface/datasets/issues/304/events
https://github.com/huggingface/datasets/issues/304
644,091,970
MDU6SXNzdWU2NDQwOTE5NzA=
304
Problem while printing doc string when instantiating multiple metrics.
{ "login": "codehunk628", "id": 51091425, "node_id": "MDQ6VXNlcjUxMDkxNDI1", "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codehunk628", "html_url": "https://github.com/codehunk628", "followers_url": "https://api.github.com/users/codehunk628/followers", "following_url": "https://api.github.com/users/codehunk628/following{/other_user}", "gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}", "starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions", "organizations_url": "https://api.github.com/users/codehunk628/orgs", "repos_url": "https://api.github.com/users/codehunk628/repos", "events_url": "https://api.github.com/users/codehunk628/events{/privacy}", "received_events_url": "https://api.github.com/users/codehunk628/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[]
1,592,940,725,000
1,595,411,458,000
1,595,411,458,000
CONTRIBUTOR
null
When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy. Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook for problem clarification.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/304/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/302/comments
https://api.github.com/repos/huggingface/datasets/issues/302/events
https://github.com/huggingface/datasets/issues/302
643,910,418
MDU6SXNzdWU2NDM5MTA0MTg=
302
Question - Sign Language Datasets
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
null
[]
null
[ "Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"plans\" addon.\r\n\r\nSame for sign language - if there is a dataset of videos, one addon can be to run OpenPose, another to run ARKit4 pose estimation, and another to run PoseNet, or even just a video embedding addon. (which are expensive to run individually for everyone who wants to use these data)\r\n\r\nThis is something I dabbled with my own implementation to a [research datasets library](https://github.com/AmitMY/meta-scholar/) and I love to get the discussion going on these topics.", "This is a really cool idea !\r\nThe example for data objects you gave for the RWTH-PHOENIX-Weather 2014 T dataset can totally fit inside the library.\r\n\r\nFor your point about formats like `ilex`, `eaf`, or `srt`, it is possible to use any library in your dataset script.\r\nHowever most user probably won't need these libraries, as most datasets don't need them, and therefore it's unlikely that we will have them in the minimum requirements to use `nlp` (we want to keep it as light-weight as possible). If a user wants to load your dataset and doesn't have the libraries you need, an error is raised asking the user to install them.\r\n\r\nMore generally, we plan to have something like a `requirements.txt` per dataset. This could also be a place for addons as you said. What do you think ?", "Thanks, Quentin, I think a `requirements.txt` per dataset will be a good thing.\r\nI will work on adding this dataset next week, and once we sort all of the kinks, I'll add more." ]
1,592,924,020,000
1,606,303,533,000
1,606,303,533,000
CONTRIBUTOR
null
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable. The metrics for sign language to text translation are the same. So, what do you think about (me, or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/) For every item in the dataset, the data object includes: 1. video_path - path to mp4 file 2. pose_path - a path to `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so.
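To make the data object above concrete, a rough sketch of how those fields might be declared with the library's feature types (the concrete types are assumptions, not an agreed proposal):

```python
import nlp

# Illustrative feature spec for one RWTH-PHOENIX-Weather 2014 T example; paths are
# kept as plain strings and the metadata values as numbers, which is an assumption.
features = nlp.Features({
    "video_path": nlp.Value("string"),
    "pose_path": nlp.Value("string"),
    "openpose_path": nlp.Value("string"),
    "gloss": nlp.Value("string"),
    "text": nlp.Value("string"),
    "video_metadata": {
        "height": nlp.Value("int32"),
        "width": nlp.Value("int32"),
        "frames": nlp.Value("int32"),
        "framerate": nlp.Value("float32"),
    },
})
```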
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/302/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/301/comments
https://api.github.com/repos/huggingface/datasets/issues/301/events
https://github.com/huggingface/datasets/issues/301
643,763,525
MDU6SXNzdWU2NDM3NjM1MjU=
301
Setting cache_dir gives error on wikipedia download
{ "login": "hallvagi", "id": 33862536, "node_id": "MDQ6VXNlcjMzODYyNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/33862536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hallvagi", "html_url": "https://github.com/hallvagi", "followers_url": "https://api.github.com/users/hallvagi/followers", "following_url": "https://api.github.com/users/hallvagi/following{/other_user}", "gists_url": "https://api.github.com/users/hallvagi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hallvagi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hallvagi/subscriptions", "organizations_url": "https://api.github.com/users/hallvagi/orgs", "repos_url": "https://api.github.com/users/hallvagi/repos", "events_url": "https://api.github.com/users/hallvagi/events{/privacy}", "received_events_url": "https://api.github.com/users/hallvagi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?", "Now it works, thanks!" ]
1,592,911,904,000
1,592,982,307,000
1,592,982,307,000
NONE
null
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
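For context on the error itself: `os.rename` cannot move a file across filesystems, which is exactly what happens when `cache_dir` points to a different drive than the default HF cache, hence `OSError: [Errno 18] Invalid cross-device link`. A minimal sketch of the kind of change that avoids it (not necessarily the exact fix that landed on master) is to move the file with `shutil.move`, which falls back to copy-and-delete:

```python
import os
import shutil

# Illustrative placeholder paths standing in for the values in arrow_reader.py:
# the dataset_info.json downloaded into the default cache, and the user cache_dir.
downloaded_dataset_info = "/home/user/.cache/huggingface/datasets/dataset_info.json"
cache_dir = "/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete"

# shutil.move copies then removes when source and destination are on different
# filesystems, instead of failing with EXDEV like os.rename does.
shutil.move(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json"))
```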
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/301/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/297/comments
https://api.github.com/repos/huggingface/datasets/issues/297/events
https://github.com/huggingface/datasets/issues/297
643,444,625
MDU6SXNzdWU2NDM0NDQ2MjU=
297
Error in Demo for Specific Datasets
{ "login": "s-jse", "id": 60150701, "node_id": "MDQ6VXNlcjYwMTUwNzAx", "avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-jse", "html_url": "https://github.com/s-jse", "followers_url": "https://api.github.com/users/s-jse/followers", "following_url": "https://api.github.com/users/s-jse/following{/other_user}", "gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-jse/subscriptions", "organizations_url": "https://api.github.com/users/s-jse/orgs", "repos_url": "https://api.github.com/users/s-jse/repos", "events_url": "https://api.github.com/users/s-jse/events{/privacy}", "received_events_url": "https://api.github.com/users/s-jse/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually have the resources to process NQ right now so we'll have to wait until we have a version that we've already processed on our google storage (that's what we've done for wikipedia for example).\r\n\r\nSecond, datasets like `newsroom` require manual downloads as we're not allowed to redistribute the data ourselves (if I'm not wrong). An error message should be displayed saying that we're not allowed to show the dataset.\r\n\r\nI can fix the first issue with the imports but for the second one I think we'll have to see with @srush to show a message for datasets that require manual downloads (it can be checked whether a dataset requires manual downloads if `dataset_builder_instance.manual_download_instructions is not None`).\r\n\r\n", "I added apache-beam to the viewer. We can think about how to add newsroom. ", "We don't plan to host the source files of newsroom ourselves for now.\r\nYou can still get the dataset if you follow the download instructions given by `dataset = load_dataset('newsroom')` though.\r\nThe viewer also shows the instructions now.\r\n\r\nClosing this one. If you have other questions, feel free to re-open :)" ]
1,592,872,722,000
1,595,007,786,000
1,595,007,786,000
NONE
null
Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/297/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/296/comments
https://api.github.com/repos/huggingface/datasets/issues/296/events
https://github.com/huggingface/datasets/issues/296
643,423,717
MDU6SXNzdWU2NDM0MjM3MTc=
296
snli -1 labels
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ", "Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?", "Yes the original dataset is missing some labels maybe @sleepinyourhat , @gangeli can correct me if I'm wrong \r\nFor my personal opinion at least if you want your model to learn to predict no answer (-1) you can leave it their but otherwise you can discard them. ", "thanks @mariamabarham :)" ]
1,592,868,810,000
1,592,923,319,000
1,592,923,318,000
CONTRIBUTOR
null
I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ```
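A short follow-up sketch of the "discard them" option from the thread above, using the same API as the snippet here (keeping the -1 examples is equally valid if the model should learn to predict a missing gold label):

```python
import nlp

data = nlp.load_dataset("snli")["train"]

# Drop the ~785 pairs whose gold label is missing (encoded as -1) before training.
filtered = data.filter(lambda example: example["label"] != -1)
print(len(data), len(filtered))
```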
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/296/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/295/comments
https://api.github.com/repos/huggingface/datasets/issues/295/events
https://github.com/huggingface/datasets/issues/295
643,245,412
MDU6SXNzdWU2NDMyNDU0MTI=
295
Improve input warning for evaluation metrics
{ "login": "Tiiiger", "id": 19514537, "node_id": "MDQ6VXNlcjE5NTE0NTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tiiiger", "html_url": "https://github.com/Tiiiger", "followers_url": "https://api.github.com/users/Tiiiger/followers", "following_url": "https://api.github.com/users/Tiiiger/following{/other_user}", "gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions", "organizations_url": "https://api.github.com/users/Tiiiger/orgs", "repos_url": "https://api.github.com/users/Tiiiger/repos", "events_url": "https://api.github.com/users/Tiiiger/events{/privacy}", "received_events_url": "https://api.github.com/users/Tiiiger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,592,846,937,000
1,592,923,657,000
1,592,923,657,000
NONE
null
Hi, I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes input. Here is a minimal example: ```python import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling ```python scorer.add(lp, [lg]) ``` I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/295/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/294/comments
https://api.github.com/repos/huggingface/datasets/issues/294/events
https://github.com/huggingface/datasets/issues/294
643,181,179
MDU6SXNzdWU2NDMxODExNzk=
294
Cannot load arxiv dataset on MacOS?
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?", "I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `/Users/johngiorgi/.cache/huggingface/datasets/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259/arxiv-dataset/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( 3.2@xmath595.5 mev/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.", "I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?", "Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) " ]
1,592,840,815,000
1,593,530,710,000
1,593,530,710,000
NONE
null
I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recent call last) <ipython-input-2-8e00c55d5a59> in <module> ----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv") ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 662 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) 666 writer.write(example) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1107 -> 1108 for obj in iterable: 1109 yield obj 1110 # Update and possibly print the progressbar. ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path) 114 # "section_names": list[str], list of section names. 
115 # "sections": list[list[str]], list of sections (list of paragraphs) --> 116 d = json.loads(line) 117 summary = "\n".join(d["abstract_text"]) 118 # In original paper, <S> and </S> are not used in vocab during training ~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 346 parse_int is None and parse_float is None and 347 parse_constant is None and object_pairs_hook is None and not kw): --> 348 return _default_decoder.decode(s) 349 if cls is None: 350 cls = JSONDecoder ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w) 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx) 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982) 163502 examples [02:10, 2710.68 examples/s] ``` I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below: - Platform: Darwin-19.5.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) Any ideas?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/294/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/290/comments
https://api.github.com/repos/huggingface/datasets/issues/290/events
https://github.com/huggingface/datasets/issues/290
641,978,286
MDU6SXNzdWU2NDE5NzgyODY=
290
ConnectionError - Eli5 dataset download
{ "login": "JovanNj", "id": 8490096, "node_id": "MDQ6VXNlcjg0OTAwOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JovanNj", "html_url": "https://github.com/JovanNj", "followers_url": "https://api.github.com/users/JovanNj/followers", "following_url": "https://api.github.com/users/JovanNj/following{/other_user}", "gists_url": "https://api.github.com/users/JovanNj/gists{/gist_id}", "starred_url": "https://api.github.com/users/JovanNj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JovanNj/subscriptions", "organizations_url": "https://api.github.com/users/JovanNj/orgs", "repos_url": "https://api.github.com/users/JovanNj/repos", "events_url": "https://api.github.com/users/JovanNj/events{/privacy}", "received_events_url": "https://api.github.com/users/JovanNj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.", "It works now, thanks for prompt help!" ]
1,592,574,033,000
1,592,659,344,000
1,592,659,344,000
NONE
null
Hi, I have a problem downloading the Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate it if you could help me with this issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/290/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/288/comments
https://api.github.com/repos/huggingface/datasets/issues/288/events
https://github.com/huggingface/datasets/issues/288
641,888,610
MDU6SXNzdWU2NDE4ODg2MTA=
288
Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'
{ "login": "wutong8023", "id": 14964542, "node_id": "MDQ6VXNlcjE0OTY0NTQy", "avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wutong8023", "html_url": "https://github.com/wutong8023", "followers_url": "https://api.github.com/users/wutong8023/followers", "following_url": "https://api.github.com/users/wutong8023/following{/other_user}", "gists_url": "https://api.github.com/users/wutong8023/gists{/gist_id}", "starred_url": "https://api.github.com/users/wutong8023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wutong8023/subscriptions", "organizations_url": "https://api.github.com/users/wutong8023/orgs", "repos_url": "https://api.github.com/users/wutong8023/repos", "events_url": "https://api.github.com/users/wutong8023/events{/privacy}", "received_events_url": "https://api.github.com/users/wutong8023/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like the bug comes from `dill`. Which version of `dill` are you using ?", "Thank you. It is version 0.2.6, which version is better?", "0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?", "Thanks guys! I upgraded dill and it works.", "Awesome" ]
1,592,564,482,000
1,592,730,311,000
1,592,730,311,000
NONE
null
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. 
from ._conv import register_converters as _register_converters Traceback (most recent call last): File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module> import nlp File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module> from .arrow_dataset import Dataset File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module> from nlp.utils.py_utils import dumps File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module> from .py_utils import flatten_nested, map_nested, size_str File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module> class Pickler(dill.Pickler): File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy()) AttributeError: module 'dill' has no attribute '_dill'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/288/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/283/comments
https://api.github.com/repos/huggingface/datasets/issues/283/events
https://github.com/huggingface/datasets/issues/283
641,270,439
MDU6SXNzdWU2NDEyNzA0Mzk=
283
Consistent formatting of citations
{ "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "organizations_url": "https://api.github.com/users/srush/orgs", "repos_url": "https://api.github.com/users/srush/repos", "events_url": "https://api.github.com/users/srush/events{/privacy}", "received_events_url": "https://api.github.com/users/srush/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[ { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false } ]
null
[]
1,592,491,725,000
1,592,847,046,000
1,592,847,046,000
CONTRIBUTOR
null
The citations are all in different formats: some have "```" with text inside, others are proper bibtex. Can we make it so that they are all proper citations, i.e. parseable by the bibtex spec: https://bibtexparser.readthedocs.io/en/master/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/283/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/281/comments
https://api.github.com/repos/huggingface/datasets/issues/281/events
https://github.com/huggingface/datasets/issues/281
641,067,856
MDU6SXNzdWU2NDEwNjc4NTY=
281
Private/sensitive data
{ "login": "MFreidank", "id": 6368040, "node_id": "MDQ6VXNlcjYzNjgwNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MFreidank", "html_url": "https://github.com/MFreidank", "followers_url": "https://api.github.com/users/MFreidank/followers", "following_url": "https://api.github.com/users/MFreidank/following{/other_user}", "gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}", "starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions", "organizations_url": "https://api.github.com/users/MFreidank/orgs", "repos_url": "https://api.github.com/users/MFreidank/repos", "events_url": "https://api.github.com/users/MFreidank/events{/privacy}", "received_events_url": "https://api.github.com/users/MFreidank/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.", "Hi @MFreidank, it is possible to load a dataset from your local storage, but only CSV/TSV and JSON are supported. To load a dataset in JSON format:\r\n\r\n```\r\nnlp.load_dataset(path=\"json\", data_files={nlp.Split.TRAIN: [\"path/to/train.json\"], nlp.Split.TEST: [\"path/to/test.json\"]})\r\n```\r\n\r\nFor CSV/TSV datasets, you have to replace `json` by `csv`.", "Hi @julien-c @jplu,\r\nThanks for sharing this solution with me, it helps, this is what I was looking for. \r\nIf not already there and only missed by me, this could be a great addition in the docs.\r\n\r\nClosing my issue as resolved, thanks again." ]
1,592,473,647,000
1,592,658,912,000
1,592,658,912,000
NONE
null
Hi all, Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch. Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. Is there support/a plan to support such data with NLP, e.g. by reading it from local sources? Use case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive/private data without the need to rethink data processing pipelines. Many thanks for your responses ahead of time and kind regards, MFreidank
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/281/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/280/comments
https://api.github.com/repos/huggingface/datasets/issues/280/events
https://github.com/huggingface/datasets/issues/280
640,677,615
MDU6SXNzdWU2NDA2Nzc2MTU=
280
Error with SquadV2 Metrics
{ "login": "avinregmi", "id": 32203792, "node_id": "MDQ6VXNlcjMyMjAzNzky", "avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinregmi", "html_url": "https://github.com/avinregmi", "followers_url": "https://api.github.com/users/avinregmi/followers", "following_url": "https://api.github.com/users/avinregmi/following{/other_user}", "gists_url": "https://api.github.com/users/avinregmi/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinregmi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinregmi/subscriptions", "organizations_url": "https://api.github.com/users/avinregmi/orgs", "repos_url": "https://api.github.com/users/avinregmi/repos", "events_url": "https://api.github.com/users/avinregmi/events{/privacy}", "received_events_url": "https://api.github.com/users/avinregmi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,592,421,054,000
1,592,555,621,000
1,592,555,621,000
NONE
null
I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws me an error.:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs) 426 """ 427 module_path = prepare_module(path, download_config=download_config, dataset=False) --> 428 metric_cls = import_main_class(module_path, dataset=False) 429 metric = metric_cls( 430 name=name, ~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset) 55 """ 56 importlib.invalidate_caches() ---> 57 module = importlib.import_module(module_path) 58 59 if dataset: /usr/lib64/python3.6/importlib/__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module> 16 17 import nlp ---> 18 from .evaluate import evaluate 19 20 _CITATION = """\ ImportError: cannot import name 'evaluate' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/280/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/279/comments
https://api.github.com/repos/huggingface/datasets/issues/279/events
https://github.com/huggingface/datasets/issues/279
640,611,692
MDU6SXNzdWU2NDA2MTE2OTI=
279
Dataset Preprocessing Cache with .map() function not working as expected
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re-process using `.map(my_func, load_from_cache_file=False)` if you want to.\r\n\r\nI am curious about the problem you have with splits. It makes me think about #160 that was an issue of version 0.1.0. What version of `nlp` are you running ? Could you give me more details ?", "Thanks, that's helpful! I was running 0.1.0, but since upgraded to 0.2.1. I can't reproduce the issue anymore as I've cleared the cache & everything now seems to be running fine since the upgrade. I've added some checks to my code, so if I do encounter it again I will reopen this issue.", "Just checking in, the cache sometimes still does not work when I make changes in my processing function in version `1.2.1`. The changes made to my data processing function only propagate to the dataset when I use `load_from_cache_file=False` or clear the cache. Is this a system-specific issue?", "Hi @sarahwie \r\nThe data are reloaded from the cache if the hash of the function you provide is the same as a computation you've done before. The hash is computed by recursively looking at the python objects of the function you provide.\r\n\r\nIf you think there's an issue, can you share the function you used or a google colab please ?", "I can't reproduce it, so I'll close for now." ]
1,592,414,241,000
1,625,607,808,000
1,618,789,429,000
NONE
null
I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file. Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess. I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/279/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/278/comments
https://api.github.com/repos/huggingface/datasets/issues/278/events
https://github.com/huggingface/datasets/issues/278
640,518,917
MDU6SXNzdWU2NDA1MTg5MTc=
278
MemoryError when loading German Wikipedia
{ "login": "gregburman", "id": 4698028, "node_id": "MDQ6VXNlcjQ2OTgwMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/4698028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gregburman", "html_url": "https://github.com/gregburman", "followers_url": "https://api.github.com/users/gregburman/followers", "following_url": "https://api.github.com/users/gregburman/following{/other_user}", "gists_url": "https://api.github.com/users/gregburman/gists{/gist_id}", "starred_url": "https://api.github.com/users/gregburman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gregburman/subscriptions", "organizations_url": "https://api.github.com/users/gregburman/orgs", "repos_url": "https://api.github.com/users/gregburman/repos", "events_url": "https://api.github.com/users/gregburman/events{/privacy}", "received_events_url": "https://api.github.com/users/gregburman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and use them right away.\r\n\r\nThis is the case for english and french wikipedia right now: we've processed them ourselves and now they are available from our google storage. However we've not processed the german one (yet).", "Hi @lhoestq \r\n\r\nThank you for your quick reply. I thought this might be the case, that the processing was done for some languages and not for others. Is there any set timeline for when other languages (German, Italian) will be processed?\r\n\r\nGiven enough memory, is it possible to process the data ourselves by specifying the `beam_runner`?", "Adding them is definitely in our short term objectives. I'll be working on this early next week :)\r\n\r\nAlthough if you have an apache beam runtime feel free to specify the beam runner. You can find more info [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md) on how to make it work on Dataflow but you can adapt it for Spark or any other beam runtime (by changing the `runner`).\r\n\r\nHowever if you don't have a beam runtime and even if you have enough memory, I discourage you to use the `DirectRunner` on the german or italian wikipedia. According to Apache Beam documentation it was made for testing purposes and therefore it is memory-inefficient.", "German is [almost] done @gregburman", "I added the German and the Italian Wikipedia to our google cloud storage:\r\nFirst update the `nlp` package to 0.3.0:\r\n```bash\r\npip install nlp --upgrade\r\n```\r\nand then\r\n```python\r\nfrom nlp import load_dataset\r\nwiki_de = load_dataset(\"wikipedia\", \"20200501.de\")\r\nwiki_it = load_dataset(\"wikipedia\", \"20200501.it\")\r\n```\r\nThe datasets are downloaded and directly ready to use (no processing).", "Hi @lhoestq \r\n\r\nWow, thanks so much, that's **really** incredible! I was considering looking at creating my own Beam Dataset, as per the doc you linked, but instead opted to process the data myself using `wikiextractor`. However, now that this is available, I'll definitely switch across and use it.\r\n\r\nThanks so much for the incredible work, this really helps out our team considerably!\r\n\r\nHave a great (and well-deserved ;) weekend ahead!\r\n\r\nP.S. I'm not sure if I should close the issue here - if so I'm happy to do so.", "Thanks for your message, glad I could help :)\r\nClosing this one." ]
1,592,406,381,000
1,592,571,182,000
1,592,571,182,000
NONE
null
Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :) I'm trying to download the German Wikipedia dataset as follows: ``` wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train") ``` However, when I do so, I get the following error: ``` Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset save_infos=save_infos, File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare "\n\t`{}`".format(usage_example) nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')` ``` So, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner`, however when I do this after about 20 min after the data has all downloaded, I get a `MemoryError` as warned. This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seem to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset? My nlp version is 0.2.1. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/278/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/277/comments
https://api.github.com/repos/huggingface/datasets/issues/277/events
https://github.com/huggingface/datasets/issues/277
640,163,053
MDU6SXNzdWU2NDAxNjMwNTM=
277
Empty samples in glue/qqp
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?", "Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. " ]
1,592,373,292,000
1,592,698,905,000
1,592,698,905,000
CONTRIBUTOR
null
``` qqp = nlp.load_dataset('glue', 'qqp') print(qqp['train'][310121]) print(qqp['train'][362225]) ``` ``` {'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137} {'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246} ``` Notice that question 2 is an empty string. BTW, I have checked, and these two are the only naughty ones in all splits of qqp.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/277/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/275/comments
https://api.github.com/repos/huggingface/datasets/issues/275/events
https://github.com/huggingface/datasets/issues/275
639,439,052
MDU6SXNzdWU2Mzk0MzkwNTI=
275
NonMatchingChecksumError when loading pubmed dataset
{ "login": "DavideStenner", "id": 48441753, "node_id": "MDQ6VXNlcjQ4NDQxNzUz", "avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavideStenner", "html_url": "https://github.com/DavideStenner", "followers_url": "https://api.github.com/users/DavideStenner/followers", "following_url": "https://api.github.com/users/DavideStenner/following{/other_user}", "gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions", "organizations_url": "https://api.github.com/users/DavideStenner/orgs", "repos_url": "https://api.github.com/users/DavideStenner/repos", "events_url": "https://api.github.com/users/DavideStenner/events{/privacy}", "received_events_url": "https://api.github.com/users/DavideStenner/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/84751599-096c6580-afbd-11ea-97f3-ee4aef791711.png)\r\n" ]
1,592,292,711,000
1,592,552,227,000
1,592,552,227,000
NONE
null
I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module>() ----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]') 2 df = pd.DataFrame(df) 3 gc.collect() 3 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 431 verify_infos = not save_infos and not ignore_verifications 432 self._download_and_prepare( --> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 434 ) 435 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 468 # Checksums verification 469 if verify_infos: --> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 471 for split_generator in split_generators: 472 if str(split_generator.split_info.name).lower() == "all": /usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download'] ``` I'm currently working on google colab. That is quite strange because yesterday it was fine.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/275/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/274/comments
https://api.github.com/repos/huggingface/datasets/issues/274/events
https://github.com/huggingface/datasets/issues/274
639,156,625
MDU6SXNzdWU2MzkxNTY2MjU=
274
PG-19
{ "login": "lucidrains", "id": 108653, "node_id": "MDQ6VXNlcjEwODY1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucidrains", "html_url": "https://github.com/lucidrains", "followers_url": "https://api.github.com/users/lucidrains/followers", "following_url": "https://api.github.com/users/lucidrains/following{/other_user}", "gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions", "organizations_url": "https://api.github.com/users/lucidrains/orgs", "repos_url": "https://api.github.com/users/lucidrains/repos", "events_url": "https://api.github.com/users/lucidrains/events{/privacy}", "received_events_url": "https://api.github.com/users/lucidrains/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Sounds good! Do you want to give it a try?", "Ok, I'll see if I can figure it out tomorrow!", "Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https://console.cloud.google.com/storage/browser/deepmind-gutenberg/train/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https://github.com/lucidrains/nlp/commit/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?", "Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!" ]
1,592,254,946,000
1,594,049,702,000
1,594,049,702,000
CONTRIBUTOR
null
Hi, and thanks for all your open-sourced work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/274/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/270/comments
https://api.github.com/repos/huggingface/datasets/issues/270/events
https://github.com/huggingface/datasets/issues/270
638,121,617
MDU6SXNzdWU2MzgxMjE2MTc=
270
c4 dataset is not viewable in nlpviewer demo
{ "login": "rajarsheem", "id": 6441313, "node_id": "MDQ6VXNlcjY0NDEzMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajarsheem", "html_url": "https://github.com/rajarsheem", "followers_url": "https://api.github.com/users/rajarsheem/followers", "following_url": "https://api.github.com/users/rajarsheem/following{/other_user}", "gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions", "organizations_url": "https://api.github.com/users/rajarsheem/orgs", "repos_url": "https://api.github.com/users/rajarsheem/repos", "events_url": "https://api.github.com/users/rajarsheem/events{/privacy}", "received_events_url": "https://api.github.com/users/rajarsheem/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "C4 is too large to be shown in the viewer" ]
1,592,036,776,000
1,603,812,929,000
1,603,812,913,000
NONE
null
I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/) ```python ModuleNotFoundError: No module named 'langdetect' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp_viewer/run.py", line 54, in <module> configs = get_confs(option.id) File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs builder_cls = nlp.load.import_main_class(module_path, dataset=True) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module> from .c4_utils import ( File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module> import langdetect ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/270/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/269/comments
https://api.github.com/repos/huggingface/datasets/issues/269/events
https://github.com/huggingface/datasets/issues/269
638,106,774
MDU6SXNzdWU2MzgxMDY3NzQ=
269
Error in metric.compute: missing `original_instructions` argument
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,592,029,614,000
1,592,466,104,000
1,592,466,104,000
NONE
null
I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictions and references 182 reader = ArrowReader(path=self.data_dir, info=None) --> 183 self.data = reader.read_files(node_files) 184 185 # Release all of our locks TypeError: read_files() missing 1 required positional argument: 'original_instructions' ``` I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided--perhaps that could be done here too?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/269/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/267/comments
https://api.github.com/repos/huggingface/datasets/issues/267/events
https://github.com/huggingface/datasets/issues/267
637,415,545
MDU6SXNzdWU2Mzc0MTU1NDU=
267
How can I load/find WMT en-romanian?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "I will take a look :-) " ]
1,591,924,177,000
1,592,555,059,000
1,592,555,059,000
MEMBER
null
I believe it is from `wmt16` When I run ```python wmt = nlp.load_dataset('wmt16') ``` I get: ```python AssertionError: The dataset wmt16 with config cs-en requires manual data. Please follow the manual download instructions: Some of the wmt configs here, require a manual download. Please look into wmt.py to see the exact path (and file name) that has to be downloaded. . Manual data can be loaded with `nlp.load(wmt16, data_dir='<path/to/manual/data>') ``` There is no wmt.py,as the error message suggests, and wmt16.py doesn't have manual download instructions. Any idea how to do this? Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/267/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/263/comments
https://api.github.com/repos/huggingface/datasets/issues/263/events
https://github.com/huggingface/datasets/issues/263
637,028,015
MDU6SXNzdWU2MzcwMjgwMTU=
263
[Feature request] Support for external modality for language datasets
{ "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "repos_url": "https://api.github.com/users/aleSuglia/repos", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We'll probably try to tackle this during the summer.", "I was looking into Facebook MMF and apparently they decided to use LMDB to store additional features associated with every example: https://github.com/facebookresearch/mmf/blob/master/mmf/datasets/databases/features_database.py\r\n\r\n", "I saw the Mozilla common_voice dataset in model hub, which has mp3 audio recordings as part it. It's use predominantly maybe in ASR and TTS, but dataset is a Language + Voice Dataset similar to @aleSuglia's point about Language + Vision. \r\n\r\nhttps://huggingface.co/datasets/common_voice" ]
1,591,882,938,000
1,617,247,964,000
null
CONTRIBUTOR
null
# Background In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10151)]. Therefore, the importance of multi-modal datasets for the NLP community is of paramount importance for next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data. # Language + Vision ## Use case Typically, people working on Language+Vision tasks, have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset. Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. For all these types of features, people use one of the following formats: 1. [HD5F](https://pypi.org/project/h5py/) 2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html) 3. [LMDB](https://lmdb.readthedocs.io/en/release/) ## Implementation considerations I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following: 1. Download dataset 2. Download images associated with the dataset 3. Write a script that generates the visual features for every image and store them in a specific file 4. Create a DataLoader that maps the visual features to the corresponding language example In my personal projects, I've decided to ignore HD5F because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it. For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to a N-dimensional tensor so easily represented by a NumPy array. 
Looking forward to hearing your thoughts about it!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions", "total_count": 23, "+1": 18, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/huggingface/datasets/issues/263/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/261/comments
https://api.github.com/repos/huggingface/datasets/issues/261/events
https://github.com/huggingface/datasets/issues/261
636,372,380
MDU6SXNzdWU2MzYzNzIzODA=
261
Downloading dataset error with pyarrow.lib.RecordBatch
{ "login": "cuent", "id": 5248968, "node_id": "MDQ6VXNlcjUyNDg5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cuent", "html_url": "https://github.com/cuent", "followers_url": "https://api.github.com/users/cuent/followers", "following_url": "https://api.github.com/users/cuent/following{/other_user}", "gists_url": "https://api.github.com/users/cuent/gists{/gist_id}", "starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cuent/subscriptions", "organizations_url": "https://api.github.com/users/cuent/orgs", "repos_url": "https://api.github.com/users/cuent/repos", "events_url": "https://api.github.com/users/cuent/events{/privacy}", "received_events_url": "https://api.github.com/users/cuent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your message.", "Yeah, that worked! Thanks :) " ]
1,591,805,059,000
1,591,886,112,000
1,591,886,112,000
NONE
null
I am trying to download `sentiment140` and I have the following error ``` /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 472 try: 473 # Prepare split will record examples associated to the split --> 474 self._prepare_split(split_generator, **prepare_split_kwargs) 475 except OSError: 476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 653 example = self.info.features.encode_example(record) --> 654 writer.write(example) 655 num_examples, num_bytes = writer.finalize() 656 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size) 143 self._build_writer(pa_table=pa.Table.from_pydict(example)) 144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size: --> 145 self.write_on_file() 146 147 def write_batch( /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 127 else: 128 # All good --> 129 self._write_array_on_file(pa_array) 130 self.current_rows = [] 131 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 96 def _write_array_on_file(self, pa_array): 97 """Write a PyArrow Array""" ---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 99 self._num_bytes += pa_array.nbytes 100 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' ``` I installed the last version and ran the following command: ```python import nlp sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/261/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/259/comments
https://api.github.com/repos/huggingface/datasets/issues/259/events
https://github.com/huggingface/datasets/issues/259
636,239,529
MDU6SXNzdWU2MzYyMzk1Mjk=
259
documentation missing how to split a dataset
{ "login": "fotisj", "id": 2873355, "node_id": "MDQ6VXNlcjI4NzMzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2873355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fotisj", "html_url": "https://github.com/fotisj", "followers_url": "https://api.github.com/users/fotisj/followers", "following_url": "https://api.github.com/users/fotisj/following{/other_user}", "gists_url": "https://api.github.com/users/fotisj/gists{/gist_id}", "starred_url": "https://api.github.com/users/fotisj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fotisj/subscriptions", "organizations_url": "https://api.github.com/users/fotisj/orgs", "repos_url": "https://api.github.com/users/fotisj/repos", "events_url": "https://api.github.com/users/fotisj/events{/privacy}", "received_events_url": "https://api.github.com/users/fotisj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`", "Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHowever right now we don't have a way to shuffle a dataset but we are thinking about it in the discussion in #166. Feel free to share your thoughts about it.\r\n\r\nOne trick that you can do until we have a better solution is to shuffle and split the indices of your dataset:\r\n```python\r\nimport nlp\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimdb = nlp.load_dataset('imbd', split='test')\r\ntest_indices, val_indices = train_test_split(range(len(imdb)))\r\n```\r\n\r\nand then to iterate each split:\r\n```python\r\nfor i in test_indices:\r\n example = imdb[i]\r\n ...\r\n```\r\n", "I added a small guide [here](https://github.com/huggingface/nlp/tree/master/docs/splits.md) that explains how to split a dataset. It is very similar to the tensorflow datasets guide, as we kept the same logic.", "Thanks a lot, the new explanation is very helpful!\r\n\r\nAbout using train_test_split from sklearn: I stumbled across the [same error message as this user ](https://github.com/huggingface/nlp/issues/147 )and thought it can't be used at the moment in this context. Will check it out again.\r\n\r\nOne of the problems is how to shuffle very large datasets, which don't fit into the memory. Well, one strategy could be shuffling data in sections. But in a case where the data is sorted by the labels you have to swap larger sections first. \r\n", "We added a way to shuffle datasets (shuffle the indices and then reorder to make a new dataset).\r\nYou can do `shuffled_dset = dataset.shuffle(seed=my_seed)`. It shuffles the whole dataset.\r\nThere is also `dataset.train_test_split()` which if very handy (with the same signature as sklearn).\r\n\r\nClosing this issue as we added the docs for splits and tools to split datasets. Thanks again for your feedback !" ]
1,591,795,093,000
1,592,518,824,000
1,592,518,824,000
NONE
null
I am trying to understand how to split a dataset ( as arrow_dataset). I know I can do something like this to access a split which is already in the original dataset : `ds_test = nlp.load_dataset('imdb, split='test') ` But how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)? I guess it has something to do with the module split :-) but there is no real documentation in the code but only a reference to a longer description: > See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information. But the guide seems to be missing. To clarify: I know that this has been modelled after the dataset of tensorflow and that some of the documentation there can be used [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the testset doing this: `ds_test = nlp.load_dataset('imdb, split='test'[:5000]) ` `ds_val = nlp.load_dataset('imdb, split='test'[5000:])` because the imdb test data is sorted by class (probably not a good idea anyway)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/259/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/258/comments
https://api.github.com/repos/huggingface/datasets/issues/258/events
https://github.com/huggingface/datasets/issues/258
635,859,525
MDU6SXNzdWU2MzU4NTk1MjU=
258
Why is dataset after tokenization far more larger than the orginal one ?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of `.map`", "Hi ! Thanks for your reply.\r\n\r\nBut since size of `input_ids` < size of `text`, I am wondering why\r\nsize of `input_ids` + `text` > 2x the size of `text` 🤔", "Hard to tell... This is probably related to the way apache arrow compresses lists of integers, that may be different from the compression of strings.", "Thanks for your point. 😀, It might be answer.\r\nSince this is hard to know, I'll close this issue.\r\nBut if somebody knows more details, please comment below ~ 😁" ]
1,591,752,427,000
1,591,793,194,000
1,591,793,194,000
CONTRIBUTOR
null
I tokenize wiki dataset by `map` and cache the results. ``` def tokenize_tfm(example): example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text'])) return example wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train'] wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow") ``` and when I see their size ``` ls -l --block-size=M 17460M wikipedia-train.arrow 47511M tokenized_wiki.arrow ``` The tokenized one is over 2x size of original one. Is there something I did wrong ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/258/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/257/comments
https://api.github.com/repos/huggingface/datasets/issues/257/events
https://github.com/huggingface/datasets/issues/257
635,620,979
MDU6SXNzdWU2MzU2MjA5Nzk=
257
Tokenizer pickling issue fix not landed in `nlp` yet?
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`", "If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6 and the built in `dataclasses` in 3.7+ cause the issue.\r\n\r\nProbably a dumb fix, but it works for me." ]
1,591,722,754,000
1,591,825,532,000
1,591,723,613,000
NONE
null
Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in dataset.keys(): dataset[split].map(lambda x: some_function(x, tokenizer)) ``` ``` 06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1 Traceback (most recent call last): File "generation/input_to_label_and_rationale.py", line 390, in <module> main() File "generation/input_to_label_and_rationale.py", line 263, in main dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map cache_file_name = self._get_cache_file_path(function, cache_kwargs) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path function_bytes = dumps(function) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps dump(obj, file) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump Pickler(file).dump(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump StockPickler.dump(self, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump self.save(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple 
save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save rv = reduce(self.proto) TypeError: cannot pickle 'Tokenizer' object ``` Fix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package managers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/257/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/256/comments
https://api.github.com/repos/huggingface/datasets/issues/256/events
https://github.com/huggingface/datasets/issues/256
635,596,295
MDU6SXNzdWU2MzU1OTYyOTU=
256
[Feature request] Add a feature to dataset
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)", "Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to performing map.\r\n\r\nE.g. \r\n```\r\nnew_info = list of length dataset['train']\r\n\r\ndataset['train'] = dataset['train'].map(lambda x: some_function(x, new_info[index of x]))\r\n\r\ndef some_function(x, new_info_x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x\r\n return x\r\n```\r\nI was thinking to instead create a new field in the arrow dataset so that instance x contains all the necessary information when map function is applied (since I don't have index information to pass to map function).", "This is what I have so far: \r\n\r\n```\r\nimport pyarrow as pa\r\nfrom nlp.arrow_dataset import Dataset\r\n\r\naug_dataset = dataset['train'][:]\r\naug_dataset['new_info'] = new_info\r\n\r\n#reformat as arrow-table\r\nschema = dataset['train'].schema\r\n\r\n# this line doesn't work:\r\nschema.append(pa.field('new_info', pa.int32()))\r\n\r\ntable = pa.Table.from_pydict(\r\n aug_dataset,\r\n schema=schema\r\n)\r\ndataset['train'] = Dataset(table) \r\n```", "Maybe you can use `with_indices`?\r\n\r\n```python\r\nnew_info = list of length dataset['train']\r\n\r\ndef some_function(indice, x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x[indice]\r\n return x\r\n\r\ndataset['train'] = dataset['train'].map(some_function, with_indices=True)\r\n```", "Oh great. That should work. I missed that in the documentation- thanks :) " ]
1,591,720,692,000
1,591,721,502,000
1,591,721,502,000
NONE
null
Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/256/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/254/comments
https://api.github.com/repos/huggingface/datasets/issues/254/events
https://github.com/huggingface/datasets/issues/254
635,057,568
MDU6SXNzdWU2MzUwNTc1Njg=
254
[Feature request] Be able to remove a specific sample of the dataset
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh yes you can now do that with the `dataset.filter()` method that was added in #214 " ]
1,591,669,333,000
1,591,692,098,000
1,591,692,098,000
NONE
null
As mentioned in #117, it's currently not possible to remove a sample of the dataset. But it is a important use case : After applying some preprocessing, some samples might be empty for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so when iterating the dataset, we don't iterate these samples. I think it should be a feature. What do you think ? --- Any work-around in the meantime ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/254/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/252/comments
https://api.github.com/repos/huggingface/datasets/issues/252/events
https://github.com/huggingface/datasets/issues/252
634,563,239
MDU6SXNzdWU2MzQ1NjMyMzk=
252
NonMatchingSplitsSizesError error when reading the IMDB dataset
{ "login": "antmarakis", "id": 17463361, "node_id": "MDQ6VXNlcjE3NDYzMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antmarakis", "html_url": "https://github.com/antmarakis", "followers_url": "https://api.github.com/users/antmarakis/followers", "following_url": "https://api.github.com/users/antmarakis/following{/other_user}", "gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}", "starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions", "organizations_url": "https://api.github.com/users/antmarakis/orgs", "repos_url": "https://api.github.com/users/antmarakis/repos", "events_url": "https://api.github.com/users/antmarakis/events{/privacy}", "received_events_url": "https://api.github.com/users/antmarakis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?", "I updated it, that was it, thanks!", "Hello, I am facing the same problem... how do you clear the huggingface cache?", "Hi ! The cache is at ~/.cache/huggingface\r\nYou can just delete this folder if needed :)" ]
1,591,619,184,000
1,630,077,658,000
1,591,624,886,000
NONE
null
Hi! I am trying to load the `imdb` dataset with this line: `dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')` but I am getting the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset save_infos=save_infos, File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` Am I overlooking something? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/252/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/249/comments
https://api.github.com/repos/huggingface/datasets/issues/249/events
https://github.com/huggingface/datasets/issues/249
633,393,443
MDU6SXNzdWU2MzMzOTM0NDM=
249
[Dataset created] some critical small issues when I was creating a dataset
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for noticing all these :) They should be easy to fix indeed", "Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon." ]
1,591,534,734,000
1,591,950,531,000
1,591,950,531,000
CONTRIBUTOR
null
Hi, I successfully created a dataset and has made a pr #248. But I have encountered several problems when I was creating it, and those should be easy to fix. 1. Not found dataset_info.json should be fixed by #241 , eager to wait it be merged. 2. Forced to install `apach_beam` If we should install it, then it might be better to include it in the pakcage dependency or specified in `CONTRIBUTING.md` ``` Traceback (most recent call last): File "nlp-cli", line 10, in <module> from nlp.commands.run_beam import RunBeamCommand File "/home/yisiang/nlp/src/nlp/commands/run_beam.py", line 6, in <module> import apache_beam as beam ModuleNotFoundError: No module named 'apache_beam' ``` 3. `cached_dir` is `None` ``` File "/home/yisiang/nlp/src/nlp/datasets/bookscorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookscorpus.py", line 88, in _split_generators downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive) File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 128, in download_custom downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls) File "/home/yisiang/nlp/src/nlp/utils/py_utils.py", line 172, in map_nested return function(data_struct) File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 126, in url_to_downloaded_path return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url)) File "/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py", line 80, in join a = os.fspath(a) ``` This is because this line https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/src/nlp/commands/test.py#L30-L32 And I add `--cache_dir="...."` to `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` in the doc, finally I could pass this error. But it seems to ignore my arg and use `/home/yisiang/.cache/huggingface/datasets/bookscorpus/plain_text/1.0.0` as cahe_dir 4. There is no `pytest` So maybe in the doc we should specify a step to install pytest 5. Not enough capacity in my `/tmp` When run test for dummy data, I don't know why it ask me for 5.6g to download something, ``` def download_and_prepare ... if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root): raise IOError( "Not enough disk space. Needed: {} (download: {}, generated: {})".format( utils.size_str(self.info.size_in_bytes or 0), utils.size_str(self.info.download_size or 0), > utils.size_str(self.info.dataset_size or 0), ) ) E OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB) ``` I add a `processed_temp_dir="some/dir"; raw_temp_dir="another/dir"` to 71, and the test passed https://github.com/huggingface/nlp/blob/a67a6c422dece904b65d18af65f0e024e839dbe8/tests/test_dataset_common.py#L70-L72 I suggest we can create tmp dir under the `/home/user/tmp` but not `/tmp`, because take our lab server for example, everyone use `/tmp` thus it has not much capacity. Or at least we can improve error message, so the user know is what directory has no space and how many has it lefted. Or we could do both. 6. name of datasets I was surprised by the dataset name `books_corpus`, and didn't know it is from `class BooksCorpus(nlp.GeneratorBasedBuilder)` . I change it to `Bookscorpus` afterwards. I think this point shold be also on the doc. 7. More thorough doc to how to create `dataset.py` I believe there will be. **Feel free to close this issue** if you think these are solved.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/249/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/246/comments
https://api.github.com/repos/huggingface/datasets/issues/246/events
https://github.com/huggingface/datasets/issues/246
632,380,054
MDU6SXNzdWU2MzIzODAwNTQ=
246
What is the best way to cache a dataset?
{ "login": "Mistobaan", "id": 112599, "node_id": "MDQ6VXNlcjExMjU5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mistobaan", "html_url": "https://github.com/Mistobaan", "followers_url": "https://api.github.com/users/Mistobaan/followers", "following_url": "https://api.github.com/users/Mistobaan/following{/other_user}", "gists_url": "https://api.github.com/users/Mistobaan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mistobaan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mistobaan/subscriptions", "organizations_url": "https://api.github.com/users/Mistobaan/orgs", "repos_url": "https://api.github.com/users/Mistobaan/repos", "events_url": "https://api.github.com/users/Mistobaan/events{/privacy}", "received_events_url": "https://api.github.com/users/Mistobaan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> For example if I want to use streamlit with a nlp dataset:\n>\n> @st.cache\n> def load_data():\n> return nlp.load_dataset('squad')\n>\n> This code raises the error \"uncachable object\"\n>\n> Right now I just fixed with a constant for my specific case:\n>\n> @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\n>\n> But I was curious to know what is the best way in general\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/246>, or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHKAKO7CWGX2QY55UXLRVIO3ZANCNFSM4NV333RQ>\n> .\n>\n", "Closing this one. Feel free to re-open if you have other questions !" ]
1,591,441,327,000
1,594,286,107,000
1,594,286,107,000
NONE
null
For example if I want to use streamlit with a nlp dataset: ``` @st.cache def load_data(): return nlp.load_dataset('squad') ``` This code raises the error "uncachable object" Right now I just fixed with a constant for my specific case: ``` @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0}) ``` But I was curious to know what is the best way in general
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/246/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/245/comments
https://api.github.com/repos/huggingface/datasets/issues/245/events
https://github.com/huggingface/datasets/issues/245
631,985,108
MDU6SXNzdWU2MzE5ODUxMDg=
245
SST-2 test labels are all -1
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "repos_url": "https://api.github.com/users/jxmorris12/repos", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "this also happened to me with `nlp.load_dataset('glue', 'mnli')`", "Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened to me with nlp.load_datasets('glue', 'mnli')\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/245#issuecomment-640083980>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHMVQD2EDX2HTZUXG5DRVJTWRANCNFSM4NVG3AKQ>\n> .\n>\n", "Thanks @thomwolf!", "@thomwolf shouldn't this be visible in the .info and/or in the .features?", "It should be in the datasets card (the README.md and on the hub) in my opinion. What do you think @yjernite?", "I checked both before I got to looking at issues, so that would be fine as well.\r\n\r\nSome additional thoughts on this: Is there a specific reason why the \"test\" split even has a \"label\" column if it isn't tagged. Shouldn't there just not be any. Seems more transparent", "I'm a little confused with the data size.\r\n`sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\nFrom the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification. \r\nWhy in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ? \r\n\r\n", "> I'm a little confused with the data size.\r\n> `sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\n> From the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification.\r\n> Why in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ?\r\n\r\nHave you figured out this problem? AFAIK, the original sst-2 dataset is totally different from the GLUE/sst-2. Do you think so?", "@yc1999 Sorry, I didn't solve this conflict. In the end, I just use a local data file provided by the previous work I followed(for consistent comparison), not use `datasets` package.\r\n\r\nRelated information: https://github.com/thunlp/OpenAttack/issues/146#issuecomment-766323571", "@yc1999 I find that the original SST-2 dataset (6,920/872/1,821) can be loaded from https://huggingface.co/datasets/gpt3mix/sst2 or built with SST data and the scripts in https://github.com/prrao87/fine-grained-sentiment/tree/master/data/sst.\r\nThe GLUE/SST-2 dataset (67,349/872/1,821) should be a completely different version.\r\n" ]
1,591,393,302,000
1,638,924,452,000
1,591,462,601,000
CONTRIBUTOR
null
I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1. ``` >>> import nlp >>> glue = nlp.load_dataset('glue', 'sst2') >>> glue {'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 872), 'test': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 1821)} >>> list(l['label'] for l in glue['test']) [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/245/timeline
null
null
null
false
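Editor's note, not part of the record above: a minimal sketch of the workaround discussed in that thread. Since the GLUE test labels are withheld (exported as the placeholder value -1), local evaluation is usually done on the validation split; the split and feature names below are taken from the issue text itself.

```python
# Sketch only: evaluate locally on the labeled validation split, because the
# GLUE test labels are hidden and exported as the placeholder value -1.
import nlp

glue = nlp.load_dataset("glue", "sst2")

# Every test example carries the placeholder label -1, as shown in the issue.
assert all(example["label"] == -1 for example in glue["test"])

# The validation split has real labels, so use it for local evaluation.
validation = glue["validation"]
print(validation.features["label"])  # a ClassLabel feature (negative/positive)
print(validation[0])
```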
https://api.github.com/repos/huggingface/datasets/issues/242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/242/comments
https://api.github.com/repos/huggingface/datasets/issues/242/events
https://github.com/huggingface/datasets/issues/242
631,733,683
MDU6SXNzdWU2MzE3MzM2ODM=
242
UnicodeDecodeError when downloading GLUE-MNLI
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure", "On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts would always set the encoding='utf-8' in calls to open explicitly. \r\nIn the meantime: since Python 3.7 Windows users can set the default encoding for everything including open() to Unicode by setting this environment variable: set PYTHONUTF8=1 (details can be found in [PEP 540](https://www.python.org/dev/peps/pep-0540/))\r\n\r\nFor me this fixed the problem described by the OP." ]
1,591,374,601,000
1,591,718,807,000
1,591,605,903,000
CONTRIBUTOR
null
When I run ```python dataset = nlp.load_dataset('glue', 'mnli') ``` I get an encoding error (could it be because I'm using Windows?) : ```python # Lots of error log lines later... ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\5256cc2368cf84497abef1f1a5f66648522d5854b225162148cb8fc78a5a91cc\glue.py in _generate_examples(self, data_file, split, mrpc_files) 529 --> 530 for n, row in enumerate(reader): 531 if is_cola_non_test: ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6744: character maps to <undefined> ``` Anyway this can be solved by specifying to decode in UTF when reading the csv file. I am proposing a PR if that's okay.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/242/timeline
null
null
null
false
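Editor-added sketch (not part of the dataset record) of the fix described above: on Windows, open() falls back to the locale encoding (cp1252 in the traceback), so reading the GLUE TSV files should pass encoding="utf-8" explicitly. The file path and reader options below are illustrative.

```python
import csv

def read_glue_tsv(path):
    # Explicit UTF-8 avoids the cp1252 UnicodeDecodeError seen on Windows.
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            yield row

# Alternative (Python 3.7+ on Windows): set the environment variable PYTHONUTF8=1
# so that open() defaults to UTF-8 everywhere, as noted in the comments above.
for row in read_glue_tsv("MNLI/dev_matched.tsv"):  # hypothetical path
    print(row)
    break
```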
https://api.github.com/repos/huggingface/datasets/issues/240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/240/comments
https://api.github.com/repos/huggingface/datasets/issues/240/events
https://github.com/huggingface/datasets/issues/240
631,434,677
MDU6SXNzdWU2MzE0MzQ2Nzc=
240
Deterministic dataset loading
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes good point !", "I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with it", "I'm pretty sure it would solve the problem too.\r\n\r\nThe only other dataset that is not deterministic right now is `blog_authorship_corpus` (see #215) but this is a problem related to string encodings.", "I think we should do the same also for `os.list_dir`" ]
1,591,347,806,000
1,591,607,894,000
1,591,607,894,000
MEMBER
null
When calling: ```python import nlp dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]") ``` the resulting dataset is not deterministic over different google colabs. After talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line: https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/datasets/trivia_qa/trivia_qa.py#L180 which seems to return an ordering of files that depends on the filesystem: https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered I think we should go through all the dataset scripts and make sure to have deterministic behavior. A simple solution for `glob.glob()` would be to just replace it with `sorted(glob.glob())` to have everything sorted by name. What do you think @lhoestq?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/240/timeline
null
null
null
false
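Editor-added sketch of the fix proposed in the issue above: wrapping glob.glob() (and os.listdir()) in sorted() makes the file order independent of the filesystem. The glob pattern is hypothetical.

```python
import glob
import os

# glob.glob() returns files in an arbitrary, filesystem-dependent order ...
evidence_files = glob.glob("evidence/wikipedia/*.json")

# ... so sorting by name makes example generation deterministic across machines.
evidence_files = sorted(glob.glob("evidence/wikipedia/*.json"))

# The same consideration applies to os.listdir(), as mentioned in the comments.
entries = sorted(os.listdir("evidence/wikipedia"))
```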
https://api.github.com/repos/huggingface/datasets/issues/239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/239/comments
https://api.github.com/repos/huggingface/datasets/issues/239/events
https://github.com/huggingface/datasets/issues/239
631,340,440
MDU6SXNzdWU2MzEzNDA0NDA=
239
[Creating new dataset] Not found dataset_info.json
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I think you can just `rm` this directory and it should be good :)", "@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?", "Yes I have an idea of what's going on. I'm sure I can fix that", "Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run again `python nlp-cli test datasets/bookcorpus --save_infos --all_configs`.\r\n\r\nI got this, \r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"/home/yisiang/nlp/src/nlp/commands/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\nAnd after I installed it. I got this\r\n```\r\nFile \"/home/yisiang/nlp/src/nlp/datasets/bookcorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookcorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"/home/yisiang/nlp/src/nlp/utils/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThe problem is when I print `self._download_config.cache_dir` using pdb, it is `None`.\r\n\r\nDid I miss something ? Or can you provide a workaround first so I can keep testing my script ?", "I'll close this issue because I brings more reports in another issue #249 ." ]
1,591,337,704,000
1,591,534,864,000
1,591,534,864,000
CONTRIBUTOR
null
Hi, I am trying to create Toronto Book Corpus. #131 I ran `~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs` but this doesn't create `dataset_info.json` and try to use it ``` INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports. INFO:filelock:Lock 139795325778640 acquired on datasets/bookcorpus/bookcorpus.py.lock INFO:nlp.load:Found main folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus INFO:nlp.load:Found specific version folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9 INFO:nlp.load:Found script file from datasets/bookcorpus/bookcorpus.py to /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.py INFO:nlp.load:Couldn't find dataset infos file at datasets/bookcorpus/dataset_infos.json INFO:nlp.load:Found metadata file for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.json INFO:filelock:Lock 139795325778640 released on datasets/bookcorpus/bookcorpus.py.lock INFO:nlp.builder:Overwrite dataset info from restored data version. INFO:nlp.info:Loading Dataset info from /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0 Traceback (most recent call last): File "nlp-cli", line 37, in <module> service.run() File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/commands/test.py", line 78, in run builders.append(builder_cls(name=config.name, data_dir=self._data_dir)) File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__ super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__ self.info = DatasetInfo.from_directory(self._cache_dir) File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f: FileNotFoundError: [Errno 2] No such file or directory: '/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/dataset_info.json' ``` btw, `ls /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/` show me nothing is in the directory. I have also pushed the script to my fork [bookcorpus.py](https://github.com/richardyy1188/nlp/blob/bookcorpusdev/datasets/bookcorpus/bookcorpus.py).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/239/timeline
null
null
null
false
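Editor-added sketch of the workaround suggested in the comments ("you can just rm this directory"): removing the stale, empty cache directory lets the builder regenerate it on the next run. The path is the one from the traceback and will differ on other machines.

```python
import shutil

cache_dir = "/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0"

# Delete the empty version directory so the next `nlp-cli test ... --save_infos`
# run rebuilds it (and writes dataset_info.json) from scratch.
shutil.rmtree(cache_dir, ignore_errors=True)
```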
https://api.github.com/repos/huggingface/datasets/issues/238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/238/comments
https://api.github.com/repos/huggingface/datasets/issues/238/events
https://github.com/huggingface/datasets/issues/238
631,260,143
MDU6SXNzdWU2MzEyNjAxNDM=
238
[Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0.
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[ "This print statement comes from the official implementation of bert_score (see [here](https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluations, so I'm not sure we can do anything about it. Maybe you can try to create an issue on their [repo](https://github.com/Tiiiger/bert_score) ?" ]
1,591,323,287,000
1,593,450,619,000
1,593,450,619,000
NONE
null
When running BERT-Score, I'm meeting this warning : > Warning: Empty candidate sentence; Setting recall to be 0. Code : ``` import nlp metric = nlp.load_metric("bertscore") scores = metric.compute(["swag", "swags"], ["swags", "totally something different"], lang="en", device=0) ``` --- **What am I doing wrong / How can I hide this warning ?**
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/238/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/238/timeline
null
null
null
false
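Editor-added sketch, not from the record: the message is printed by the official bert_score implementation itself, so until it becomes configurable upstream, one blunt way to hide it is to redirect stdout around the call. This assumes the warning goes to stdout, which may change between bert_score versions; the compute call mirrors the one in the issue.

```python
import contextlib
import io

import nlp

metric = nlp.load_metric("bertscore")

# Swallow anything bert_score prints while computing the scores.
with contextlib.redirect_stdout(io.StringIO()):
    scores = metric.compute(
        ["swag", "swags"],
        ["swags", "totally something different"],
        lang="en",
        device=0,
    )
print(scores)
```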
https://api.github.com/repos/huggingface/datasets/issues/237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/237/comments
https://api.github.com/repos/huggingface/datasets/issues/237/events
https://github.com/huggingface/datasets/issues/237
631,199,940
MDU6SXNzdWU2MzExOTk5NDA=
237
Can't download MultiNLI
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "organizations_url": "https://api.github.com/users/patpizio/orgs", "repos_url": "https://api.github.com/users/patpizio/repos", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "received_events_url": "https://api.github.com/users/patpizio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You should use `load_dataset('glue', 'mnli')`", "Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). ", "Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first." ]
1,591,311,921,000
1,591,440,694,000
1,591,440,694,000
CONTRIBUTOR
null
When I try to download MultiNLI with ```python dataset = load_dataset('multi_nli') ``` I get this long error: ```python --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-13-3b11f6be4cb9> in <module> 1 # Load a dataset and print the first examples in the training set 2 # nli_dataset = nlp.load_dataset('multi_nli') ----> 3 dataset = load_dataset('multi_nli') 4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]') 5 # print(nli_dataset['train'][0]) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 514 515 # Download and prepare data --> 516 builder_instance.download_and_prepare( 517 download_config=download_config, 518 download_mode=download_mode, ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 417 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 418 verify_infos = not save_infos and not ignore_verifications --> 419 self._download_and_prepare( 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 455 split_dict = SplitDict(dataset_name=self.name) 456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 458 # Checksums verification 459 if verify_infos: ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\multi_nli\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\multi_nli.py in _split_generators(self, dl_manager) 99 def _split_generators(self, dl_manager): 100 --> 101 downloaded_dir = dl_manager.download_and_extract( 102 "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip" 103 ) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in download_and_extract(self, url_or_urls) 214 extracted_path(s): `str`, extracted paths of given URL(s). 215 """ --> 216 return self.extract(self.download(url_or_urls)) 217 218 def get_recorded_sizes_checksums(self): ~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in extract(self, path_or_paths) 194 path_or_paths. 
195 """ --> 196 return map_nested( 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 198 ) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple) 168 return tuple(mapped) 169 # Singleton --> 170 return function(data_struct) 171 172 ~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in <lambda>(path) 195 """ 196 return map_nested( --> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 198 ) 199 ~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 231 if is_zipfile(output_path): 232 with ZipFile(output_path, "r") as zip_file: --> 233 zip_file.extractall(output_path_extracted) 234 zip_file.close() 235 elif tarfile.is_tarfile(output_path): ~\Miniconda3\envs\nlp\lib\zipfile.py in extractall(self, path, members, pwd) 1644 1645 for zipinfo in members: -> 1646 self._extract_member(zipinfo, path, pwd) 1647 1648 @classmethod ~\Miniconda3\envs\nlp\lib\zipfile.py in _extract_member(self, member, targetpath, pwd) 1698 1699 with self.open(member, pwd=pwd) as source, \ -> 1700 open(targetpath, "wb") as target: 1701 shutil.copyfileobj(source, target) 1702 OSError: [Errno 22] Invalid argument: 'C:\\Users\\Python\\.cache\\huggingface\\datasets\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\multinli_1.0\\Icon\r' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/237/timeline
null
null
null
false
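Editor-added illustration of the workaround from the comments: load MNLI through the GLUE builder rather than the multi_nli script whose zip extraction fails on Windows. Split names follow the GLUE conventions.

```python
import nlp

mnli = nlp.load_dataset("glue", "mnli")

print(mnli["train"][0])
# MNLI ships matched and mismatched validation/test splits.
print(mnli["validation_matched"].num_rows, mnli["validation_mismatched"].num_rows)
```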
https://api.github.com/repos/huggingface/datasets/issues/234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/234/comments
https://api.github.com/repos/huggingface/datasets/issues/234/events
https://github.com/huggingface/datasets/issues/234
630,534,427
MDU6SXNzdWU2MzA1MzQ0Mjc=
234
Huggingface NLP, Uploading custom dataset
{ "login": "Nouman97", "id": 42269506, "node_id": "MDQ6VXNlcjQyMjY5NTA2", "avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nouman97", "html_url": "https://github.com/Nouman97", "followers_url": "https://api.github.com/users/Nouman97/followers", "following_url": "https://api.github.com/users/Nouman97/following{/other_user}", "gists_url": "https://api.github.com/users/Nouman97/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nouman97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nouman97/subscriptions", "organizations_url": "https://api.github.com/users/Nouman97/orgs", "repos_url": "https://api.github.com/users/Nouman97/repos", "events_url": "https://api.github.com/users/Nouman97/events{/privacy}", "received_events_url": "https://api.github.com/users/Nouman97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`", "To load a dataset you need to have a script that defines the format of the examples, the splits and the way to generate examples. As your dataset has the same format of squad, you can just copy the squad script (see the [datasets](https://github.com/huggingface/nlp/tree/master/datasets) forlder) and just replace the url to load the data to your local or remote path.\r\n\r\nThen what you can do is `load_dataset(<path/to/your/script>)`", "Also if you want to upload your script, you should be able to use the `nlp-cli`.\r\n\r\nUnfortunately the upload feature was not shipped in the latest version 0.2.0. so right now you can either clone the repo to use it or wait for the next release. We will add some docs to explain how to upload datasets.\r\n", "Since the latest release 0.2.1 you can use \r\n```bash\r\nnlp-cli upload_dataset <path/to/dataset>\r\n```\r\nwhere `<path/to/dataset>` is a path to a folder containing your script (ex: `squad.py`).\r\nThis will upload the script under your namespace on our S3.\r\n\r\nOptionally the folder can also contain `dataset_infos.json` generated using\r\n```bash\r\nnlp-cli test <path/to/dataset> --all_configs --save_infos\r\n```\r\n\r\nThen you should be able to do\r\n```python\r\nnlp.load_dataset(\"my_namespace/dataset_name\")\r\n```" ]
1,591,250,346,000
1,594,028,006,000
1,594,028,006,000
NONE
null
Hello, does anyone know how we can load our custom dataset using the nlp.load command? Let's say that I have a dataset in the same format as squad-v1.1; how am I supposed to load it using huggingface nlp? Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/234/timeline
null
null
null
false
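Editor-added sketch of the approach described in the comments above: copy the squad script, point its download URLs at your own SQuAD-format files, and load it by local path. The folder and file names below are hypothetical.

```python
import nlp

# my_squad_like/my_squad_like.py is a copy of datasets/squad/squad.py in which the
# download URLs are replaced by the paths (or URLs) of your own SQuAD-v1.1-style
# train/dev JSON files.
dataset = nlp.load_dataset("./my_squad_like/my_squad_like.py")

print(dataset["train"][0])
```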
https://api.github.com/repos/huggingface/datasets/issues/233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/233/comments
https://api.github.com/repos/huggingface/datasets/issues/233/events
https://github.com/huggingface/datasets/issues/233
630,432,132
MDU6SXNzdWU2MzA0MzIxMzI=
233
Fail to download c4 english corpus
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "repos_url": "https://api.github.com/users/donggyukimc/repos", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can find more info on beam datasets [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.", "@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = 
dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?", "I have the same problem as @prashant-kikani", "Looks like a bug in the dataset script, can you open an issue ?", "I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4." ]
1,591,232,798,000
1,610,090,252,000
1,591,607,819,000
NONE
null
i run following code to download c4 English corpus. ``` dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner' , data_dir='/mypath') ``` and i met failure as follows ``` Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.cache/huggingface/datasets/c4/en/2.3.0... Traceback (most recent call last): File "download_corpus.py", line 38, in <module> , data_dir='/home/adam/data/corpus/en/c4') File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset save_infos=save_infos, File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 420, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 816, in _download_and_prepare dl_manager, verify_infos=False, pipeline=pipeline, File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 457, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/datasets/c4/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc/c4.py", line 175, in _split_generators dl_manager.download_checksums(_CHECKSUMS_URL) AttributeError: 'DownloadManager' object has no attribute 'download_checksums ``` can i get any advice?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/233/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/228/comments
https://api.github.com/repos/huggingface/datasets/issues/228/events
https://github.com/huggingface/datasets/issues/228
629,952,402
MDU6SXNzdWU2Mjk5NTI0MDI=
228
Not able to access the XNLI dataset
{ "login": "aswin-giridhar", "id": 11817160, "node_id": "MDQ6VXNlcjExODE3MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aswin-giridhar", "html_url": "https://github.com/aswin-giridhar", "followers_url": "https://api.github.com/users/aswin-giridhar/followers", "following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}", "gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}", "starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions", "organizations_url": "https://api.github.com/users/aswin-giridhar/orgs", "repos_url": "https://api.github.com/users/aswin-giridhar/repos", "events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}", "received_events_url": "https://api.github.com/users/aswin-giridhar/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
{ "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "organizations_url": "https://api.github.com/users/srush/orgs", "repos_url": "https://api.github.com/users/srush/repos", "events_url": "https://api.github.com/users/srush/events{/privacy}", "received_events_url": "https://api.github.com/users/srush/received_events", "type": "User", "site_admin": false }
[ { "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "organizations_url": "https://api.github.com/users/srush/orgs", "repos_url": "https://api.github.com/users/srush/repos", "events_url": "https://api.github.com/users/srush/events{/privacy}", "received_events_url": "https://api.github.com/users/srush/received_events", "type": "User", "site_admin": false } ]
null
[ "Added pull request to change the name of the file from dataset_infos.json to dataset_info.json", "Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ", "Update: The dataset_info.json error is gone, but we have a new one instead:\r\n```\r\nConnectionError: Couldn't reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip\r\n```\r\nI am not able to reproduce on my side unfortunately. Any idea @srush ?", "xnli is now properly shown in the viewer.\r\nClosing this one." ]
1,591,187,114,000
1,595,007,862,000
1,595,007,862,000
NONE
null
When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error. ``` FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp_viewer/run.py", line 86, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp_viewer/run.py", line 72, in get builder_instance = builder_cls(name=conf) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__ super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__ self.info = DatasetInfo.from_directory(self._cache_dir) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f: ``` Is it possible to see if the dataset_info.json is correctly placed?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/228/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/227/comments
https://api.github.com/repos/huggingface/datasets/issues/227/events
https://github.com/huggingface/datasets/issues/227
629,845,704
MDU6SXNzdWU2Mjk4NDU3MDQ=
227
Should we still have to force to install apache_beam to download wikipedia ?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies", "Got it, feel free to close this issue when you think it’s resolved.", "It should be good now :)" ]
1,591,176,800,000
1,591,197,941,000
1,591,197,941,000
CONTRIBUTOR
null
Hi, first, thanks to @lhoestq's revolutionary work, I successfully downloaded the processed wikipedia dataset according to the doc. 😍😍😍 But on the first try, it told me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be used according to #204, and that was a bit confusing at the time. Maybe we should not force users to install these? Or should we just add them to `nlp`'s dependencies?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/227/timeline
null
null
null
false
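Editor-added sketch: loading one of the already-processed wikipedia configurations, which, as discussed above, is meant to work without a local Apache Beam runtime. The config name is the one documented around this release and may differ in later versions.

```python
import nlp

# Pre-processed dump hosted by the library; no apache_beam / mwparserfromhell needed.
wiki = nlp.load_dataset("wikipedia", "20200501.en", split="train")

print(len(wiki))
print(wiki[0]["title"])
```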
https://api.github.com/repos/huggingface/datasets/issues/225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/225/comments
https://api.github.com/repos/huggingface/datasets/issues/225/events
https://github.com/huggingface/datasets/issues/225
628,083,366
MDU6SXNzdWU2MjgwODMzNjY=
225
[ROUGE] Different scores with `files2rouge`
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
closed
false
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false } ]
null
[ "@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If I recall correctly, files2rouge relies on the Perl, script, which among other things normalizes all numbers to a special token: in the case you presented, this should account for a good chunk of the difference.\r\n\r\nWe may end up adding in more versions of the metric, but probably not for a while (@lhoestq correct me if I'm wrong). However, feel free to take a stab at adding it in yourself and submitting a PR if you're interested!", "Thank you for your kind answer.\r\n\r\nAs a side question : Isn't it better to have a package that normalize more ?\r\n\r\nI understand to idea of having a package that does minimal post-processing for transparency.\r\n\r\nBut it means that people using different architecture (with different tokenizers for example) will have difference in ROUGE scores even if their predictions are actually similar. \r\nThe goal of `nlp` is to have _one package to rule them all_, right ?\r\n\r\nI will look into it but I'm not sure I have the required skill for this ^^ ", "You're right, there's a pretty interesting trade-off here between robustness and sensitivity :) The flip side of your argument is that we also still want the metric to be sensitive to model mistakes. How we think about number normalization for example has evolved a fair bit since the Perl script was written: at the time, ROUGE was used mostly to evaluate short-medium text summarization systems, where there were only a few numbers in the input and it was assumed that the most popular methods in use at the time would get those right. However, as your example showcases, that assumption does not hold any more, and we do want to be able to penalize a model that generates a wrong numerical value.\r\n\r\nAlso, we think that abstracting away tokenization differences is the role of the model/tokenizer: if you use the 🤗Tokenizers library for example, it will handle that for you ;)\r\n\r\nFinally, there is a lot of active research on developing model-powered metrics that are both more sensitive and more robust than ROUGE. Check out for example BERTscore, which is implemented in this library!" ]
1,590,972,636,000
1,591,198,038,000
1,591,198,038,000
NONE
null
It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`. Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing --- `nlp` : (Only mid F-scores) >rouge1 0.33508031962733364 rouge2 0.14574333776191592 rougeL 0.2321187823256159 `files2rouge` : >Running ROUGE... =========================== 1 ROUGE-1 Average_R: 0.48873 (95%-conf.int. 0.41192 - 0.56339) 1 ROUGE-1 Average_P: 0.29010 (95%-conf.int. 0.23605 - 0.34445) 1 ROUGE-1 Average_F: 0.34761 (95%-conf.int. 0.29479 - 0.39871) =========================== 1 ROUGE-2 Average_R: 0.20280 (95%-conf.int. 0.14969 - 0.26244) 1 ROUGE-2 Average_P: 0.12772 (95%-conf.int. 0.08603 - 0.17752) 1 ROUGE-2 Average_F: 0.14798 (95%-conf.int. 0.10517 - 0.19240) =========================== 1 ROUGE-L Average_R: 0.32960 (95%-conf.int. 0.26501 - 0.39676) 1 ROUGE-L Average_P: 0.19880 (95%-conf.int. 0.15257 - 0.25136) 1 ROUGE-L Average_F: 0.23619 (95%-conf.int. 0.19073 - 0.28663) --- When using longer predictions/gold, the difference is bigger. **How can I reproduce same score as `files2rouge` ?** @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/225/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/224/comments
https://api.github.com/repos/huggingface/datasets/issues/224/events
https://github.com/huggingface/datasets/issues/224
627,791,693
MDU6SXNzdWU2Mjc3OTE2OTM=
224
[Feature Request/Help] BLEURT model -> PyTorch
{ "login": "adamwlev", "id": 6889910, "node_id": "MDQ6VXNlcjY4ODk5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6889910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamwlev", "html_url": "https://github.com/adamwlev", "followers_url": "https://api.github.com/users/adamwlev/followers", "following_url": "https://api.github.com/users/adamwlev/following{/other_user}", "gists_url": "https://api.github.com/users/adamwlev/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamwlev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamwlev/subscriptions", "organizations_url": "https://api.github.com/users/adamwlev/orgs", "repos_url": "https://api.github.com/users/adamwlev/repos", "events_url": "https://api.github.com/users/adamwlev/events{/privacy}", "received_events_url": "https://api.github.com/users/adamwlev/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false } ]
null
[ "Is there any update on this? \r\n\r\nThanks!", "Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?", "We currently provide a wrapper on the TensorFlow implementation: https://huggingface.co/metrics/bleurt\r\n\r\nWe have long term plans to better handle model-based metrics, but they probably won't be implemented right away\r\n\r\n@adamwlev it would still be cool to add the BLEURT checkpoints to the transformers repo if you're interested, but that would best be discussed there :) \r\n\r\nclosing for now", "Hi there. We ran into the same problem this year (converting BLEURT to PyTorch) and thanks to @adamwlev found his colab notebook which didn't work but served as a good starting point. Finally, we **made it work** by doing just two simple conceptual fixes: \r\n\r\n1. Transposing 'kernel' layers instead of 'dense' ones when copying params from the original model;\r\n2. Taking pooler_output as a cls_state in forward function of the BleurtModel class.\r\n\r\nPlus few minor syntactical fixes for the outdated parts. The result is still not exactly the same, but is very close to the expected one (1.0483 vs 1.0474).\r\n\r\nFind the fixed version here (fixes are commented): https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing \r\n" ]
1,590,863,440,000
1,630,594,937,000
1,609,754,012,000
NONE
null
Hi, I am interested in porting Google Research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet, so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter). I had a go at manually using the checkpoint that they publish, which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers, just with a single linear layer on top of the CLS embedding. I loaded all the weights into the PyTorch model but I am not able to get the same numbers as the BLEURT package's Python API. Here is my colab notebook where I tried: https://colab.research.google.com/drive/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong, that would be much appreciated! Thank you muchly!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/224/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/223/comments
https://api.github.com/repos/huggingface/datasets/issues/223/events
https://github.com/huggingface/datasets/issues/223
627,683,386
MDU6SXNzdWU2Mjc2ODMzODY=
223
[Feature request] Add FLUE dataset
{ "login": "lbourdois", "id": 58078086, "node_id": "MDQ6VXNlcjU4MDc4MDg2", "avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lbourdois", "html_url": "https://github.com/lbourdois", "followers_url": "https://api.github.com/users/lbourdois/followers", "following_url": "https://api.github.com/users/lbourdois/following{/other_user}", "gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}", "starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions", "organizations_url": "https://api.github.com/users/lbourdois/orgs", "repos_url": "https://api.github.com/users/lbourdois/repos", "events_url": "https://api.github.com/users/lbourdois/events{/privacy}", "received_events_url": "https://api.github.com/users/lbourdois/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hi @lbourdois, yes please share it with us", "@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-training for French},\r\n> author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab},\r\n> year={2019},\r\n> eprint={1912.05372},\r\n> archivePrefix={arXiv},\r\n> primaryClass={cs.CL}\r\n> }\r\n\r\n• The Github repo of FLUE is avaible here : https://github.com/getalp/Flaubert/tree/master/flue\r\n\r\n\r\n\r\nInformation related to the different tasks of FLUE : \r\n\r\n**1. Classification**\r\nThree dataframes are available: \r\n- Book\r\n- DVD\r\n- Music\r\nFor each of these dataframes is available a set of training and test data, and a third one containing unlabelled data.\r\n\r\nCitation : \r\n>@dataset{prettenhofer_peter_2010_3251672,\r\n author = {Prettenhofer, Peter and\r\n Stein, Benno},\r\n title = {{Webis Cross-Lingual Sentiment Dataset 2010 (Webis- \r\n CLS-10)}},\r\n month = jul,\r\n year = 2010,\r\n publisher = {Zenodo},\r\n doi = {10.5281/zenodo.3251672},\r\n url = {https://doi.org/10.5281/zenodo.3251672}\r\n}\r\n\r\n\r\n**2. Paraphrasing** \r\nFrench part of the PAWS-X dataset (https://github.com/google-research-datasets/paws).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nCitation : \r\n> @InProceedings{pawsx2019emnlp,\r\n> title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},\r\n> author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},\r\n> booktitle = {Proc. of EMNLP},\r\n> year = {2019}\r\n> }\r\n\r\n\r\n\r\n**3. Natural Language Inference**\r\nFrench part of the XNLI dataset (https://github.com/facebookresearch/XNLI).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nFor the dev and test datasets, extra columns compared to the train dataset were available so I left them in the dataframe (I didn't know if these columns could be useful for other tasks or not). \r\nIn the context of the FLUE benchmark, only the columns gold_label, sentence1 and sentence2 are useful.\r\n\r\n\r\nCitation : \r\n\r\n> @InProceedings{conneau2018xnli,\r\n> author = \"Conneau, Alexis\r\n> and Rinott, Ruty\r\n> and Lample, Guillaume\r\n> and Williams, Adina\r\n> and Bowman, Samuel R.\r\n> and Schwenk, Holger\r\n> and Stoyanov, Veselin\",\r\n> title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\r\n> booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\r\n> in Natural Language Processing\",\r\n> year = \"2018\",\r\n> publisher = \"Association for Computational Linguistics\",\r\n> location = \"Brussels, Belgium\",\r\n\r\n\r\n**4. Parsing**\r\nThe dataset used by the FLUE authors for this task is not freely available.\r\nUsers of your library will therefore not be able to access it.\r\nNevertheless, I think maybe it is useful to add a link to the site where to request this dataframe: http://ftb.linguist.univ-paris-diderot.fr/telecharger.php?langue=en \r\n(personally it was sent to me less than 48 hours after I requested it).\r\n\r\n\r\n**5. 
Word Sense Disambiguation Tasks**\r\n5.1 Verb Sense Disambiguation\r\n\r\nTwo dataframes are available: train and test\r\nFor both dataframes, 4 columns are available: document, sentence, lemma and word.\r\nI created the document column thinking that there were several documents in the dataset but afterwards it turns out that there were not: several sentences but only one document. It's up to you to keep it or not when importing these two dataframes.\r\n\r\nThe sentence column is used to determine to which sentence the word in the word column belongs. It is in the form of a dictionary {'id': 'd000.s001', 'idx': '1'}. I thought for a while to keep only the idx because the id doesn't matter any more information. Nevertheless for the test dataset, the dictionary has an extra value indicating the source of the sentence. I don't know if it's useful or not, that's why I left the dictionary just in case. The user is free to do what he wants with it.\r\n\r\nCitation : \r\n\r\n> Segonne, V., Candito, M., and Crabb ́e, B. (2019). Usingwiktionary as a resource for wsd: the case of frenchverbs. InProceedings of the 13th International Confer-ence on Computational Semantics-Long Papers, pages259–270\r\n\r\n5.2 Noun Sense Disambiguation\r\nTwo dataframes are available: 2 train and 1 test\r\n\r\nI confess I didn't fully understand the procedure for this task.\r\n\r\nCitation : \r\n\r\n> @dataset{loic_vial_2019_3549806,\r\n> author = {Loïc Vial},\r\n> title = {{French Word Sense Disambiguation with Princeton \r\n> WordNet Identifiers}},\r\n> month = nov,\r\n> year = 2019,\r\n> publisher = {Zenodo},\r\n> version = {1.0},\r\n> doi = {10.5281/zenodo.3549806},\r\n> url = {https://doi.org/10.5281/zenodo.3549806}\r\n> }\r\n\r\nFinally, additional information about FLUE is available in the FlauBERT publication : \r\nhttps://arxiv.org/abs/1912.05372 (p. 4).\r\n\r\n\r\nHoping to have provided you with everything you need to add this benchmark :) \r\n", "https://github.com/huggingface/datasets/pull/943" ]
1,590,828,735,000
1,607,002,773,000
1,607,002,773,000
NONE
null
Hi, I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French. In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned. If it is not the case, I can provide each of the cleaned FLUE datasets (in the form of directly exploitable datasets rather than the original XML formats, which require additional processing, keeping the French part for cases where the dataset is based on a multilingual dataframe, etc.).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/223/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/222/comments
https://api.github.com/repos/huggingface/datasets/issues/222/events
https://github.com/huggingface/datasets/issues/222
627,586,690
MDU6SXNzdWU2Mjc1ODY2OTA=
222
Colab Notebook breaks when downloading the squad dataset
{ "login": "carlos-aguayo", "id": 338917, "node_id": "MDQ6VXNlcjMzODkxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/338917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carlos-aguayo", "html_url": "https://github.com/carlos-aguayo", "followers_url": "https://api.github.com/users/carlos-aguayo/followers", "following_url": "https://api.github.com/users/carlos-aguayo/following{/other_user}", "gists_url": "https://api.github.com/users/carlos-aguayo/gists{/gist_id}", "starred_url": "https://api.github.com/users/carlos-aguayo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/carlos-aguayo/subscriptions", "organizations_url": "https://api.github.com/users/carlos-aguayo/orgs", "repos_url": "https://api.github.com/users/carlos-aguayo/repos", "events_url": "https://api.github.com/users/carlos-aguayo/events{/privacy}", "received_events_url": "https://api.github.com/users/carlos-aguayo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`", "It still breaks very near the end\r\n\r\n![image](https://user-images.githubusercontent.com/338917/83312264-aa96a600-a1df-11ea-987f-2f4a0474247e.png)\r\n", "When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your first message ", "Thanks for reporting the second one ! We'll update the notebook to fix this one :)", "This trick from @thomwolf seems to be the most reliable solution to fix this colab notebook issue:\r\n\r\n```python\r\n# install nlp\r\n!pip install -qq nlp==0.2.0\r\n\r\n# Make sure that we have a recent version of pyarrow in the session before we continue - otherwise reboot Colab to activate it\r\nimport pyarrow\r\nif int(pyarrow.__version__.split('.')[1]) < 16:\r\n import os\r\n os.kill(os.getpid(), 9)\r\n```", "The second part got fixed here: 2cbc656d6fc4b18ce57eac070baec05b31180d39\r\n\r\nThanks! I'm then closing this issue." ]
1,590,792,959,000
1,591,230,065,000
1,591,230,065,000
NONE
null
When I run the notebook in Colab (https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), it breaks when running this cell: ![image](https://user-images.githubusercontent.com/338917/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/222/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/217/comments
https://api.github.com/repos/huggingface/datasets/issues/217/events
https://github.com/huggingface/datasets/issues/217
627,128,403
MDU6SXNzdWU2MjcxMjg0MDM=
217
Multi-task dataset mixing
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. \r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?", "I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https://github.com/google-research/text-to-text-transfer-transformer/blob/03c94165a7d52e4f7230e5944a0541d8c5710788/t5/data/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n", "I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. 
T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. ", "This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).", "@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n", "I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.", "From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). 
Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!", "I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name/type for columns (if needed you easily rename/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways/kwargs for sampling among datasets).", "The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n", "A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n 
if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ", "There is an interesting related investigation by @zphang here https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb", "Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n", "Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.", "Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. 
I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", 
\"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.", "Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long/short inputs). How are you currently dealing with these in your PR?", "The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!", "@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.", "mt-dnn's [batcher.py](https://github.com/namisan/mt-dnn/blob/master/mt_dnn/batcher.py) might be worth looking at.", "> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.", "I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.", "Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for example).\r\n\r\n", "Hi,\r\nregarding building T5 dataset , I think we can use datasets https://github.com/huggingface/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. " ]
1,590,744,146,000
1,603,701,993,000
null
CONTRIBUTOR
null
It seems like many of the best-performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks). The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning: - **Examples-proportional mixing** - sample from tasks proportionally to their dataset size - **Equal mixing** - sample uniformly from each task - **Temperature-scaled mixing** - The generalized approach used by multilingual BERT which uses a temperature T, where the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it becomes closer to equal mixing with increasing T. Following this discussion https://github.com/huggingface/transformers/issues/4340 in [transformers](https://github.com/huggingface/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality. Some method for combining datasets could be implemented, e.g. ``` dataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...) ``` We would need a few additions: - Method of identifying the tasks - how can we support adding a string to each task as an identifier: e.g. 'summarisation: '? - Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score. It would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn. I'm willing to write bits/most of this; I just need some guidance on the interface and other library details so I can integrate it properly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/217/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/216/comments
https://api.github.com/repos/huggingface/datasets/issues/216/events
https://github.com/huggingface/datasets/issues/216
626,896,890
MDU6SXNzdWU2MjY4OTY4OTA=
216
❓ How to get ROUGE-2 with the ROUGE metric ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird", "For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\"rouge2\"])\r\n```\r\n\r\nNote that I just did a PR to have both `.add` and `.add_batch` for metrics, that's why now this is `rouge.add(lp, lg)` and not `rouge.add([lp], [lg])`", "Well I just tested with the official script and both rouge1 and rougeL return exactly the same thing for the input you gave, so this is actually fine ^^\r\n\r\nI hope it helped :)" ]
1,590,709,652,000
1,590,969,875,000
1,590,969,875,000
NONE
null
I'm trying to use the ROUGE metric, but I don't know how to get the ROUGE-2 metric. --- I compute scores with: ```python import nlp rouge = nlp.load_metric('rouge') with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): rouge.add([lp], [lg]) score = rouge.compute() ``` then: _(printing only the F-score for readability)_ ```python for k, s in score.items(): print(k, s.mid.fmeasure) ``` It gives: >rouge1 0.7915168355671788 rougeL 0.7915168355671788 --- **How can I get the ROUGE-2 score?** Also, it seems weird that ROUGE-1 and ROUGE-L scores are the same. Did I make a mistake? @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/216/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/215/comments
https://api.github.com/repos/huggingface/datasets/issues/215/events
https://github.com/huggingface/datasets/issues/215
626,867,879
MDU6SXNzdWU2MjY4Njc4Nzk=
215
NonMatchingSplitsSizesError when loading blog_authorship_corpus
{ "login": "cedricconol", "id": 52105365, "node_id": "MDQ6VXNlcjUyMTA1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cedricconol", "html_url": "https://github.com/cedricconol", "followers_url": "https://api.github.com/users/cedricconol/followers", "following_url": "https://api.github.com/users/cedricconol/following{/other_user}", "gists_url": "https://api.github.com/users/cedricconol/gists{/gist_id}", "starred_url": "https://api.github.com/users/cedricconol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cedricconol/subscriptions", "organizations_url": "https://api.github.com/users/cedricconol/orgs", "repos_url": "https://api.github.com/users/cedricconol/repos", "events_url": "https://api.github.com/users/cedricconol/events{/privacy}", "received_events_url": "https://api.github.com/users/cedricconol/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation',\r\nnum_bytes=35652716, num_examples=30804, dataset_name='blog_authorship_corpus')}]\r\n```\r\nwhich is different from the `dataset_infos.json` and also different from yours.\r\n\r\nIt looks like the script for generating examples is not consistent", "The files provided by the authors are corrupted and the script seems to ignore the xml files that can't be decoded (it does `try:... except UnicodeDecodeError`). Maybe depending of the environment some files can be opened and some others don't but not sure why", "Feel free to do `ignore_verifications=True` for now... The verifications only include a check on the checksums of the downloaded files, and a check on the number of examples in each splits.", "I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset. ", "> I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset.\r\n\r\nWhen the checksums don't match, it may mean that the file you downloaded is corrupted. In this case you can try to load the dataset again `load_dataset(\"imdb\", download_mode=\"force_redownload\")`\r\n\r\nAlso I just checked on my side and it worked fine:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imdb\")\r\nprint(len(dataset[\"train\"]))\r\n# 25000\r\n```\r\n\r\nLet me know if redownloading fixes your issue @EmilyAlsentzer .\r\nIf not, feel free to open a separate issue.", "It doesn't seem to fix the problem. I'll open a separate issue. Thanks. ", "I wasn't aware of the \"force_redownload\" option and manually removed the '/home/me/.cache/huggingface/datasets/' dir, this worked for me (dataset 'cnn_dailymail')", "Yes I think this might not be documented well enough. Let’s add it to the doc @lhoestq @SBrandeis.\r\nAnd everything on how to control the cache behavior better (removing, overriding, changing the path, etc)" ]
1,590,706,519,000
1,609,969,023,000
null
NONE
null
Getting this error when I run `nlp.load_dataset('blog_authorship_corpus')`. ``` raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=616473500, num_examples=536323, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=30786661, num_examples=27766, dataset_name='blog_authorship_corpus')}] ``` Upon checking, it seems like there is a disparity between the information in `datasets/blog_authorship_corpus/dataset_infos.json` and what was downloaded. Although I can get away with this by passing `ignore_verifications=True` in `load_dataset`, I'm thinking doing so might cause problems later on.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/215/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/211/comments
https://api.github.com/repos/huggingface/datasets/issues/211/events
https://github.com/huggingface/datasets/issues/211
626,565,994
MDU6SXNzdWU2MjY1NjU5OTQ=
211
[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\n----> 3 ds.map(lambda x: x, load_from_cache_file=False)\r\n\r\n~/python_bin/nlp/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, arrow_schema, disable_nullable)\r\n 549\r\n 550 if update_data:\r\n--> 551 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n 552\r\n 553 # Create new Dataset from buffer or file\r\n\r\n~/python_bin/nlp/arrow_writer.py in finalize(self, close_stream)\r\n 182 def finalize(self, close_stream=True):\r\n 183 if self.pa_writer is not None:\r\n--> 184 self.write_on_file()\r\n 185 self.pa_writer.close()\r\n 186 if close_stream:\r\n\r\n~/python_bin/nlp/arrow_writer.py in write_on_file(self)\r\n 104 \"\"\"\r\n 105 if self.current_rows:\r\n--> 106 pa_array = pa.array(self.current_rows, type=self._type)\r\n 107 first_example = pa.array(self.current_rows[0:1], type=self._type)[0]\r\n 108 # Sanity check\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Could not convert TagMe with type str: converting to null type\r\n```", "Actually thinking a bit more about it, it's probably a data sample that is not correct in `trivia_qa`. But I'm a bit surprised though that we managed to write it in .arrow format and now cannot write it anymore after an \"identity\" mapping.", "I don't have this error :x", "Interesting, maybe I have a very old cache of trivia_qa...thanks for checking", "I'm running it right now on colab to double check", "Actually, I know what the problem is...I'm quite sure it's a bug. Here we take some test inputs: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L472\r\n\r\nIt might be that in the test inputs, a `Sequence` type value is an emtpy list. So in my case I have `ds[0][\"entity_pages'][\"wiki_context\"] = []`. => this leads to an `arrow_schema` equal to `null` for `[\"entity_pages'][\"wiki_context\"]` => see line: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L501 instead of list of string which it should for other examples. \r\n\r\nGuess it's an edge case, but it can happen.", "Good point, I think the schema should be infered at the writing stage where we have a `writer_batch_size` number of examples (typically 10k) so it's even less likely to run into this scenario." ]
1,590,676,694,000
1,595,499,316,000
1,595,499,316,000
MEMBER
null
Running the following code ``` import nlp ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards... ds.map(lambda x: x, load_from_cache_file=False) ``` triggers a `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error. On the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works: ``` import nlp ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards... ds.map(lambda x: x, remove_columns=["entity_pages"], load_from_cache_file=False) ``` . Seems quite hard to debug what's going on here... @lhoestq @thomwolf - do you have a good first guess what the problem could be? **Note** BTW: I think this could be a good test to check that the datasets work correctly: Take a tiny portion of the dataset and check that it can be written correctly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/211/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/207/comments
https://api.github.com/repos/huggingface/datasets/issues/207/events
https://github.com/huggingface/datasets/issues/207
625,932,200
MDU6SXNzdWU2MjU5MzIyMDA=
207
Remove test set from NLP viewer
{ "login": "chrisdonahue", "id": 748399, "node_id": "MDQ6VXNlcjc0ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chrisdonahue", "html_url": "https://github.com/chrisdonahue", "followers_url": "https://api.github.com/users/chrisdonahue/followers", "following_url": "https://api.github.com/users/chrisdonahue/following{/other_user}", "gists_url": "https://api.github.com/users/chrisdonahue/gists{/gist_id}", "starred_url": "https://api.github.com/users/chrisdonahue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chrisdonahue/subscriptions", "organizations_url": "https://api.github.com/users/chrisdonahue/orgs", "repos_url": "https://api.github.com/users/chrisdonahue/repos", "events_url": "https://api.github.com/users/chrisdonahue/events{/privacy}", "received_events_url": "https://api.github.com/users/chrisdonahue/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
open
false
null
[]
null
[ "~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)", "Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data." ]
1,590,604,327,000
1,591,198,147,000
null
NONE
null
While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and small things like this can help increase awareness.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/207/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/206/comments
https://api.github.com/repos/huggingface/datasets/issues/206/events
https://github.com/huggingface/datasets/issues/206
625,842,989
MDU6SXNzdWU2MjU4NDI5ODk=
206
[Question] Combine 2 datasets which have the same columns
{ "login": "airKlizz", "id": 25703835, "node_id": "MDQ6VXNlcjI1NzAzODM1", "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/airKlizz", "html_url": "https://github.com/airKlizz", "followers_url": "https://api.github.com/users/airKlizz/followers", "following_url": "https://api.github.com/users/airKlizz/following{/other_user}", "gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}", "starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions", "organizations_url": "https://api.github.com/users/airKlizz/orgs", "repos_url": "https://api.github.com/users/airKlizz/repos", "events_url": "https://api.github.com/users/airKlizz/events{/privacy}", "received_events_url": "https://api.github.com/users/airKlizz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.", "Ok great! I will look at it. Thanks" ]
1,590,596,752,000
1,591,780,274,000
1,591,780,274,000
CONTRIBUTOR
null
Hi, I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (adding different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German). My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine the 2 datasets (which have the same columns) into one and train T5 on it. I was wondering if there is a proper way to do this? I assume it can be done by combining all examples of each dataset, but maybe you have a better solution. Hoping this is clear enough, thanks a lot 😊 Best
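One way to do this, sketched under the assumption that a library version providing `concatenate_datasets` is available; the file names and the `document` column are illustrative, not taken from the actual wikinews datasets:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical local files for the English and German wikinews summarization sets.
en = load_dataset("json", data_files="wikinews_en.json", split="train")
de = load_dataset("json", data_files="wikinews_de.json", split="train")

# Apply the language-specific T5 prefixes before merging.
en = en.map(lambda ex: {"document": "summarize: " + ex["document"]})
de = de.map(lambda ex: {"document": "zusammenfassen: " + ex["document"]})

# Both datasets share the same columns, so they can be concatenated
# into a single training set and shuffled.
combined = concatenate_datasets([en, de]).shuffle(seed=42)
```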
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/206/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/202/comments
https://api.github.com/repos/huggingface/datasets/issues/202/events
https://github.com/huggingface/datasets/issues/202
625,493,983
MDU6SXNzdWU2MjU0OTM5ODM=
202
Mistaken `_KWARGS_DESCRIPTION` for XNLI metric
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyodr/followers", "following_url": "https://api.github.com/users/phiyodr/following{/other_user}", "gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}", "starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions", "organizations_url": "https://api.github.com/users/phiyodr/orgs", "repos_url": "https://api.github.com/users/phiyodr/repos", "events_url": "https://api.github.com/users/phiyodr/events{/privacy}", "received_events_url": "https://api.github.com/users/phiyodr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Indeed, good catch ! thanks\r\nFixing it right now" ]
1,590,568,482,000
1,590,672,156,000
1,590,672,156,000
NONE
null
Hi! The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric: ``` _KWARGS_DESCRIPTION = """ Computes XNLI score which is just simple accuracy. Args: predictions: list of translations to score. Each translation should be tokenized into a list of tokens. references: list of lists of references for each translation. Each reference should be tokenized into a list of tokens. max_order: Maximum n-gram order to use when computing BLEU score. smooth: Whether or not to apply Lin et al. 2004 smoothing. Returns: 'bleu': bleu score, 'precisions': geometric mean of n-gram precisions, 'brevity_penalty': brevity penalty, 'length_ratio': ratio of lengths, 'translation_length': translation_length, 'reference_length': reference_length """ ``` But it should be something like: ``` _KWARGS_DESCRIPTION = """ Computes XNLI score which is just simple accuracy. Args: predictions: Predicted labels. references: Ground truth labels. Returns: 'accuracy': accuracy ```
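For reference, a minimal usage sketch of the metric itself (the exact `compute` signature may differ slightly between versions):

```python
import nlp

xnli_metric = nlp.load_metric("xnli")

predictions = [0, 1, 2, 1]  # predicted labels
references = [0, 1, 1, 1]   # ground-truth labels

# XNLI scoring is plain accuracy over the label pairs.
result = xnli_metric.compute(predictions=predictions, references=references)
print(result)  # e.g. {'accuracy': 0.75}
```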
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/202/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/198/comments
https://api.github.com/repos/huggingface/datasets/issues/198/events
https://github.com/huggingface/datasets/issues/198
625,200,627
MDU6SXNzdWU2MjUyMDA2Mjc=
198
Index outside of table length
{ "login": "casajarm", "id": 305717, "node_id": "MDQ6VXNlcjMwNTcxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/305717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casajarm", "html_url": "https://github.com/casajarm", "followers_url": "https://api.github.com/users/casajarm/followers", "following_url": "https://api.github.com/users/casajarm/following{/other_user}", "gists_url": "https://api.github.com/users/casajarm/gists{/gist_id}", "starred_url": "https://api.github.com/users/casajarm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casajarm/subscriptions", "organizations_url": "https://api.github.com/users/casajarm/orgs", "repos_url": "https://api.github.com/users/casajarm/repos", "events_url": "https://api.github.com/users/casajarm/events{/privacy}", "received_events_url": "https://api.github.com/users/casajarm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sounds like something related to the nlp viewer @srush ", "Fixed. " ]
1,590,527,380,000
1,590,533,029,000
1,590,533,029,000
NONE
null
The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955). > ValueError: Index (2000) outside of table length (2000). > Traceback: > File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script > exec(code, module.__dict__) > File "/home/sasha/nlp_viewer/run.py", line 116, in <module> > v = d[item][k] > File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__ > output_all_columns=self._output_all_columns, > File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 290, in _getitem > raise ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/198/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/197/comments
https://api.github.com/repos/huggingface/datasets/issues/197/events
https://github.com/huggingface/datasets/issues/197
624,966,904
MDU6SXNzdWU2MjQ5NjY5MDQ=
197
Scientific Papers only downloading Pubmed
{ "login": "antmarakis", "id": 17463361, "node_id": "MDQ6VXNlcjE3NDYzMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antmarakis", "html_url": "https://github.com/antmarakis", "followers_url": "https://api.github.com/users/antmarakis/followers", "following_url": "https://api.github.com/users/antmarakis/following{/other_user}", "gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}", "starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions", "organizations_url": "https://api.github.com/users/antmarakis/orgs", "repos_url": "https://api.github.com/users/antmarakis/repos", "events_url": "https://api.github.com/users/antmarakis/events{/privacy}", "received_events_url": "https://api.github.com/users/antmarakis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.load_dataset('scientific_papers', 'arxiv')\r\n```\r\n\r\nThis issues is actually related to a similar user-experience issue with GLUE. When several configurations are available and the first configuration is loaded by default (see issue #152 and #130), it seems to be unexpected for users.\r\n\r\nI think we should maybe raise a (very explicit) error when there are several configurations available and the user doesn't specify one.\r\n\r\nWhat do you think @lhoestq @patrickvonplaten @mariamabarham ?", "Yes, it looks like the right thing to do ", "Now if you don't specify which part you want, it raises an error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['pubmed', 'arxiv']\r\nExample of usage:\r\n\t`load_dataset('scientific_papers', 'pubmed')`\r\n```" ]
1,590,506,327,000
1,590,653,968,000
1,590,653,968,000
NONE
null
Hi! I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following: ``` dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.05k/5.05k [00:00<00:00, 2.66MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.90k/4.90k [00:00<00:00, 2.42MB/s] Downloading and preparing dataset scientific_papers/pubmed (download: 4.20 GiB, generated: 2.33 GiB, total: 6.53 GiB) to ./scientific_papers/pubmed/1.1.1... Downloading: 3.62GB [00:40, 90.5MB/s] Downloading: 880MB [00:08, 101MB/s] Dataset scientific_papers downloaded and prepared to ./scientific_papers/pubmed/1.1.1. Subsequent calls will reuse this data. ``` only a pubmed folder is created. There doesn't seem to be something for arxiv. Are these two datasets merged? Or have I misunderstood something? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/197/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/193/comments
https://api.github.com/repos/huggingface/datasets/issues/193/events
https://github.com/huggingface/datasets/issues/193
624,655,558
MDU6SXNzdWU2MjQ2NTU1NTg=
193
[Tensorflow] Use something else than `from_tensor_slices()`
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.", "Is `tf.data.Dataset.from_generator` working on TPU ?", "`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.gen)\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/context.py\", line 1900, in execution_mode\r\n executor_new.wait()\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/executor.py\", line 67, in wait\r\n pywrap_tensorflow.TFE_ExecutorWaitForAllPendingNodes(self._handle)\r\ntensorflow.python.framework.errors_impl.NotFoundError: No registered 'PyFunc' OpKernel for 'CPU' devices compatible with node {{node PyFunc}}\r\n . Registered: <no registered kernels>\r\n\r\n [[PyFunc]]\r\n```\r\n\r\n---\r\n\r\n@lhoestq It seems you merged some changes that allow lazy-loading. **Can you give an example of how to use ?** Maybe the Colab notebook should be updated with this method as well.", "Could you send me the code you used to run create the dataset using `.from_generator` ? What version of tensorflow are you using ?", "I'm using TF2.2\r\n\r\nHere is my code :\r\n```\r\nimport nlp\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained('bart-large')\r\n\r\ndef encode(sample):\r\n article_inputs = tokenizer.encode_plus(sample[\"article\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n summary_inputs = tokenizer.encode_plus(sample[\"highlights\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n\r\n article_inputs.update({\"lm_labels\": summary_inputs['input_ids']})\r\n return article_inputs\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail', '3.0.0', split='test')\r\ncnn_dm = cnn_dm.map(encode)\r\n\r\ndef gen():\r\n for sample in cnn_dm:\r\n s = {}\r\n s['input_ids'] = sample['input_ids']\r\n s['attention_mask'] = sample['attention_mask']\r\n s['lm_labels'] = sample['lm_labels']\r\n yield s\r\n\r\ndataset = tf.data.Dataset.from_generator(gen, output_types={k: tf.int32 for k in ['input_ids', 'attention_mask', 'lm_labels']}, output_shapes={k: tf.TensorShape([tokenizer.model_max_length]) for k in ['input_ids', 'attention_mask', 'lm_labels']}\r\n```", "Apparently we'll have to wait for the next tensorflow release to use `.from_generator` and TPU. See https://github.com/tensorflow/tensorflow/issues/34346#issuecomment-598262489", "Fixed by https://github.com/huggingface/datasets/pull/339" ]
1,590,477,554,000
1,603,812,491,000
1,603,812,491,000
NONE
null
In the example notebook, the TF Dataset is built using `from_tensor_slices()` : ```python columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x] for x in columns[:3]} labels = {"output_1": train_tf_dataset["start_positions"]} labels["output_2"] = train_tf_dataset["end_positions"] tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) ``` But according to [official tensorflow documentation](https://www.tensorflow.org/guide/data#consuming_numpy_arrays), this will load the entire dataset to memory. **This defeats one purpose of this library, which is lazy loading.** Is there any other way to load the `nlp` dataset into TF dataset lazily ? --- For example, is it possible to use [Arrow dataset](https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset) ? If yes, is there any code example ?
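A rough sketch of the generator-based alternative discussed in the comments above, reusing `train_tf_dataset` from the snippet in this issue and assuming the examples were already padded to a fixed `max_length` during tokenization (whether this works on TPU is a separate question raised in the thread):

```python
import tensorflow as tf

columns = ["input_ids", "token_type_ids", "attention_mask"]
max_length = 384  # hypothetical; whatever length the examples were padded to

def gen():
    # Rows are pulled one at a time from the memory-mapped Arrow table,
    # so the whole dataset is never materialized in memory.
    for example in train_tf_dataset:
        yield {k: example[k] for k in columns}

tfdataset = tf.data.Dataset.from_generator(
    gen,
    output_types={k: tf.int32 for k in columns},
    output_shapes={k: tf.TensorShape([max_length]) for k in columns},
).batch(8)
```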
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/193/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/192/comments
https://api.github.com/repos/huggingface/datasets/issues/192/events
https://github.com/huggingface/datasets/issues/192
624,397,592
MDU6SXNzdWU2MjQzOTc1OTI=
192
[Question] Create Apache Arrow dataset from raw text file
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can find inspiration from the ones we've already written in `/datasets`. Here is a link to the steps to [add a dataset](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset)", "Hello @mrm8488 and @lhoestq \r\n\r\nIs there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nThanks :)", "> Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nSure, to get a dataset in arrow format you can either:\r\n- [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n- OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n- OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n", "> > Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n> \r\n> Sure, to get a dataset in arrow format you can either:\r\n> \r\n> * [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n> \r\n> * OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n> \r\n> * OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n\r\nLinks were broken. \r\n\r\nUpdated links provided as below\r\n- [load from local files (txt, json, csv)](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-or-remote-files)\r\n- [load from python data (dict, pandas)](https://huggingface.co/docs/datasets/loading_datasets.html#from-in-memory-data)\r\n- [create your own dataset script](https://huggingface.co/docs/datasets/loading_datasets.html#using-a-custom-dataset-loading-script)\r\n" ]
1,590,424,967,000
1,639,791,934,000
1,603,812,022,000
NONE
null
Hi guys, I have gathered and preprocessed about 2GB of COVID papers from the CORD dataset on Kaggle. I have seen you have a text dataset, "Crime and Punishment", in Apache Arrow format. Do you have any script to build such a dataset from a raw txt file (preprocessed as for BERT), or any guide? Would it be worth sending it to you so it can be added to the NLP library? Thanks, Manu
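As pointed out in the replies, loading local text files directly is one way to get a raw corpus into Arrow format; a minimal sketch, assuming a library version that ships the generic `text` loading script (the file path is hypothetical):

```python
from nlp import load_dataset

# Load a plain-text corpus (one example per line) into an Arrow-backed dataset.
covid_papers = load_dataset("text", data_files={"train": "covid_papers.txt"})

print(covid_papers["train"][0])  # {'text': '...first line of the file...'}
```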
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/192/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/189/comments
https://api.github.com/repos/huggingface/datasets/issues/189/events
https://github.com/huggingface/datasets/issues/189
624,048,881
MDU6SXNzdWU2MjQwNDg4ODE=
189
[Question] BERT-style multiple choice formatting
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"", "I think I've resolved it. For others' reference: to convert from using the [`MultipleChoiceDataset` class](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/utils_multiple_choice.py#L82)/[`run_multiple_choice.py`](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/run_multiple_choice.py) script in Huggingface Transformers, I've done the following for hellaswag:\r\n\r\n1. converted the `convert_examples_to_features()` function to only take one input and return a dictionary rather than a list:\r\n```\r\ndef convert_examples_to_features(example, tokenizer, max_length):\r\n\r\n choices_inputs = defaultdict(list)\r\n for ending_idx, ending in enumerate(example['endings']['ending']):\r\n text_a = example['ctx']\r\n text_b = ending\r\n\r\n inputs = tokenizer.encode_plus(\r\n text_a,\r\n text_b,\r\n add_special_tokens=True,\r\n max_length=max_length,\r\n pad_to_max_length=True,\r\n return_overflowing_tokens=True,\r\n )\r\n if \"num_truncated_tokens\" in inputs and inputs[\"num_truncated_tokens\"] > 0:\r\n logger.info(\r\n \"Attention! you are cropping tokens (swag task is ok). \"\r\n \"If you are training ARC and RACE and you are poping question + options,\"\r\n \"you need to try to use a bigger max seq length!\"\r\n )\r\n\r\n for key in inputs:\r\n choices_inputs[key].append(inputs[key])\r\n \r\n choices_inputs['label'] = int(example['label'])\r\n\r\n return choices_inputs\r\n```\r\n2. apply this directly (instance-wise) to dataset, convert dataset to torch tensors. Dataset is then ready to be passed to `Trainer` instance.\r\n\r\n```\r\ndataset['train'] = dataset['train'].map(lambda x: convert_examples_to_features(x, tokenizer, max_length), batched=False)\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'label']\r\ndataset['train'].set_format(type='torch', columns=columns)\r\n```" ]
1,590,383,465,000
1,590,431,908,000
1,590,431,908,000
NONE
null
Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the number of answer choices in the MCQ instead of single items. I'm a bit confused on what the output of my feature conversion function should be when using `dataset.map()` to ensure similar behavior. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/189/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/188/comments
https://api.github.com/repos/huggingface/datasets/issues/188/events
https://github.com/huggingface/datasets/issues/188
623,890,430
MDU6SXNzdWU2MjM4OTA0MzA=
188
When will the remaining math_dataset modules be added as dataset objects
{ "login": "tylerroost", "id": 31251196, "node_id": "MDQ6VXNlcjMxMjUxMTk2", "avatar_url": "https://avatars.githubusercontent.com/u/31251196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tylerroost", "html_url": "https://github.com/tylerroost", "followers_url": "https://api.github.com/users/tylerroost/followers", "following_url": "https://api.github.com/users/tylerroost/following{/other_user}", "gists_url": "https://api.github.com/users/tylerroost/gists{/gist_id}", "starred_url": "https://api.github.com/users/tylerroost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tylerroost/subscriptions", "organizations_url": "https://api.github.com/users/tylerroost/orgs", "repos_url": "https://api.github.com/users/tylerroost/repos", "events_url": "https://api.github.com/users/tylerroost/events{/privacy}", "received_events_url": "https://api.github.com/users/tylerroost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard", "Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, in particular the doc.\r\nYou should expect some bumps on the road.\r\n\r\nTo get you started, you can check the datasets scripts in the `./datasets` folder on the repo and find the one on math_datasets that will need to be modified. Then you should check the original repository on the math_dataset to see where the other files to download are located and what is the expected format for the various parts of the dataset.\r\n\r\nTo get a general overview on how datasets scripts are written and used, you can read the nice tutorial on how to add a new dataset for TensorFlow Dataset [here](https://www.tensorflow.org/datasets/add_dataset), our API is not exactly identical but it can give you a high-level overview.", "Thanks I'll give it a look" ]
1,590,335,212,000
1,590,346,428,000
1,590,346,428,000
NONE
null
Currently only algebra_linear_1d is supported. Is there a timeline for supporting the other modules? If no timeline is established, how can I help?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/188/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/187/comments
https://api.github.com/repos/huggingface/datasets/issues/187/events
https://github.com/huggingface/datasets/issues/187
623,627,800
MDU6SXNzdWU2MjM2Mjc4MDA=
187
[Question] How to load wikipedia ? Beam runner ?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?", "Yes we (well @lhoestq) are very actively working on this." ]
1,590,229,132,000
1,590,365,522,000
1,590,365,522,000
CONTRIBUTOR
null
When `nlp.load_dataset('wikipedia')`, I got * `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.` * `AttributeError: 'NoneType' object has no attribute 'size'` Could somebody tell me what should I do ? # Env On Colab, ``` git clone https://github.com/huggingface/nlp cd nlp pip install -q . ``` ``` %pip install -q apache_beam mwparserfromhell -> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible. ERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible. ERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible. ``` ``` pip install -q apache-beam[interactive] ERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible. ``` # The whole message ``` WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used. Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() 44 frames /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window() /usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result) 1081 writer.write(e) -> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)] 1083 /usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self) 422 def close(self): --> 423 self.sink.close(self.temp_handle) 424 return self.temp_shard_path /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer) 537 if len(self._buffer[0]) > 0: --> 538 self._flush_buffer() 539 if self._record_batches_byte_size > 0: /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self) 569 for b in x.buffers(): --> 570 size = size + b.size 571 self._record_batches_byte_size = self._record_batches_byte_size + size AttributeError: 'NoneType' object has no attribute 'size' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) <ipython-input-9-340aabccefff> in <module>() ----> 1 dset = nlp.load_dataset('wikipedia') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 
ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 370 verify_infos = not save_infos and not ignore_verifications 371 self._download_and_prepare( --> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 373 ) 374 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline: 771 super(BeamBasedBuilder, self)._download_and_prepare( --> 772 dl_manager, pipeline=pipeline, verify_infos=False 773 ) # TODO{beam} verify infos 774 /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb) 501 def __exit__(self, exc_type, exc_val, exc_tb): 502 if not exc_type: --> 503 self.run().wait_until_finish() 504 505 def visit(self, visitor): /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api) 481 return Pipeline.from_runner_api( 482 self.to_runner_api(use_fake_coders=True), self.runner, --> 483 self._options).run(False) 484 485 if self._options.view_as(TypeOptions).runtime_type_check: /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api) 494 finally: 495 shutil.rmtree(tmpdir) --> 496 return self.runner.run_pipeline(self, self._options) 497 498 def __enter__(self): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options) 128 runner = BundleBasedDirectRunner() 129 --> 130 return runner.run_pipeline(pipeline, options) 131 132 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options) 553 554 self._latest_run_result = self.run_via_runner_api( --> 555 pipeline.to_runner_api(default_environment=self._default_environment)) 556 return self._latest_run_result 557 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto) 563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to 564 # the teststream (if any), and all the stages). 
--> 565 return self.run_stages(stage_context, stages) 566 567 @contextlib.contextmanager /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages) 704 stage, 705 pcoll_buffers, --> 706 stage_context.safe_coders) 707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics 708 monitoring_infos_by_stage[stage.name] = ( /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders) 1071 cache_token_generator=cache_token_generator) 1072 -> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output) 1074 1075 def input_for(transform_id, input_id): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs) 2332 2333 with UnboundedThreadPoolExecutor() as executor: -> 2334 for result, split_result in executor.map(execute, part_inputs): 2335 2336 split_result_list += split_result /usr/lib/python3.6/concurrent/futures/_base.py in result_iterator() 584 # Careful not to keep a reference to the popped future 585 if timeout is None: --> 586 yield fs.pop().result() 587 else: 588 yield fs.pop().result(end_time - time.monotonic()) /usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout) 430 raise CancelledError() 431 elif self._state == FINISHED: --> 432 return self.__get_result() 433 else: 434 raise TimeoutError() /usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result /usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self) 42 # If the future wasn't cancelled, then attempt to execute it. 43 try: ---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs)) 45 except BaseException as exc: 46 # Even though Python 2 futures library has #set_exection(), /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map) 2329 self._registered, 2330 cache_token_generator=self._cache_token_generator) -> 2331 return bundle_manager.process_bundle(part_map, expected_outputs) 2332 2333 with UnboundedThreadPoolExecutor() as executor: /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs) 2243 process_bundle_descriptor_id=self._bundle_descriptor.id, 2244 cache_tokens=[next(self._cache_token_generator)])) -> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req) 2246 2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse] /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request) 1557 self._uid_counter += 1 1558 request.instruction_id = 'control_%s' % self._uid_counter -> 1559 response = self.worker.do_instruction(request) 1560 return ControlFuture(request.instruction_id, response) 1561 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request) 413 # E.g. 
if register is set, this will call self.register(request.register)) 414 return getattr(self, request_type)( --> 415 getattr(request, request_type), request.instruction_id) 416 else: 417 raise NotImplementedError /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id) 448 with self.maybe_profile(instruction_id): 449 delayed_applications, requests_finalization = ( --> 450 bundle_processor.process_bundle(instruction_id)) 451 monitoring_infos = bundle_processor.monitoring_infos() 452 monitoring_infos.extend(self.state_cache_metrics_fn()) /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id) 837 for data in data_channel.input_elements(instruction_id, 838 expected_transforms): --> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data) 840 841 # Finish all operations. /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values) 214 decoded_value = self.windowed_coder_impl.decode_from_stream( 215 input_stream, True) --> 216 self.output(decoded_value) 217 218 def try_split(self, fraction_of_remainder, total_buffer_size): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented() /usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback) 417 if traceback == Ellipsis: 418 _, _, traceback = sys.exc_info() --> 419 raise exc.with_traceback(traceback) 420 421 else: /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window() /usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result) 1080 for e in bundle[1]: # values 1081 writer.write(e) -> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)] 1083 1084 /usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self) 421 422 def close(self): --> 423 
self.sink.close(self.temp_handle) 424 return self.temp_shard_path /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer) 536 def close(self, writer): 537 if len(self._buffer[0]) > 0: --> 538 self._flush_buffer() 539 if self._record_batches_byte_size > 0: 540 self._write_batches(writer) /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self) 568 for x in arrays: 569 for b in x.buffers(): --> 570 size = size + b.size 571 self._record_batches_byte_size = self._record_batches_byte_size + size AttributeError: 'NoneType' object has no attribute 'size' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/187/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/186/comments
https://api.github.com/repos/huggingface/datasets/issues/186/events
https://github.com/huggingface/datasets/issues/186
623,595,180
MDU6SXNzdWU2MjM1OTUxODA=
186
Weird-ish: Not creating unique caches for different phases
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon", "Good catch, it looks fixed.\r\n" ]
1,590,216,058,000
1,590,265,338,000
1,590,265,337,000
NONE
null
Sample code: ```python import nlp dataset = nlp.load_dataset('boolq') def func1(x): return x def func2(x): return None train_output = dataset["train"].map(func1) valid_output = dataset["validation"].map(func1) print() print(len(train_output), len(valid_output)) # Output: 9427 9427 ``` The map method in both cases seem to be pointing to the same cache, so the latter call based on the validation data will return the processed train data cache. What's weird is that the following doesn't seem to be an issue: ```python train_output = dataset["train"].map(func2) valid_output = dataset["validation"].map(func2) print() print(len(train_output), len(valid_output)) # 9427 3270 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/186/timeline
null
null
null
false
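Illustrative aside on the caching collision reported in issue #186 above: a minimal sketch of the workaround of passing an explicit cache file per split, so the two `.map` calls stop sharing one cache. The `cache_file_name` keyword is taken from the library internals quoted further down in issue #160; the file names themselves are placeholders and are not part of the original report.

```python
import nlp

dataset = nlp.load_dataset('boolq')

def func1(x):
    return x

# Distinct cache files per split avoid the shared cache that made the
# validation call return the cached train output in the report above.
train_output = dataset["train"].map(func1, cache_file_name="boolq_train_func1.arrow")
valid_output = dataset["validation"].map(func1, cache_file_name="boolq_valid_func1.arrow")

print(len(train_output), len(valid_output))  # expected: 9427 3270
```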
https://api.github.com/repos/huggingface/datasets/issues/183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/183/comments
https://api.github.com/repos/huggingface/datasets/issues/183/events
https://github.com/huggingface/datasets/issues/183
623,054,270
MDU6SXNzdWU2MjMwNTQyNzA=
183
[Bug] labels of glue/ax are all -1
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.", "Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment." ]
1,590,137,016,000
1,590,185,645,000
1,590,185,645,000
CONTRIBUTOR
null
``` ax = nlp.load_dataset('glue', 'ax') for i in range(30): print(ax['test'][i]['label'], end=', ') ``` ``` -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/183/timeline
null
null
null
false
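Brief illustration of the resolution of issue #183 above (the GLUE diagnostic set ships without gold labels, so every label is -1): a small sketch that counts the placeholder labels instead of treating them as a bug. It only reuses the `load_dataset('glue', 'ax')` call from the report.

```python
import nlp

ax = nlp.load_dataset('glue', 'ax')

# The ax split is an unlabeled test set, so -1 is a placeholder, not a bug.
placeholder = sum(1 for example in ax['test'] if example['label'] == -1)
print(f"{placeholder} of {len(ax['test'])} examples carry the -1 placeholder label")
```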
https://api.github.com/repos/huggingface/datasets/issues/181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/181/comments
https://api.github.com/repos/huggingface/datasets/issues/181/events
https://github.com/huggingface/datasets/issues/181
622,634,420
MDU6SXNzdWU2MjI2MzQ0MjA=
181
Cannot upload my own dataset
{ "login": "korakot", "id": 3155646, "node_id": "MDQ6VXNlcjMxNTU2NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3155646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/korakot", "html_url": "https://github.com/korakot", "followers_url": "https://api.github.com/users/korakot/followers", "following_url": "https://api.github.com/users/korakot/following{/other_user}", "gists_url": "https://api.github.com/users/korakot/gists{/gist_id}", "starred_url": "https://api.github.com/users/korakot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/korakot/subscriptions", "organizations_url": "https://api.github.com/users/korakot/orgs", "repos_url": "https://api.github.com/users/korakot/repos", "events_url": "https://api.github.com/users/korakot/events{/privacy}", "received_events_url": "https://api.github.com/users/korakot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.", "I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file /content/csv/csv.py to S3 under filename csv/csv.py and namespace korakot\r\nAbout to upload file /content/csv/dummy/0.0.0/dummy_data.zip to S3 under filename csv/dummy/0.0.0/dummy_data.zip and namespace korakot\r\nProceed? [Y/n] y\r\nUploading... This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```\r\n", "We haven't tested the dataset upload feature yet cc @julien-c \r\nThis is on our short/mid-term roadmap though", "Even if I fix the `TypeError: __init__() got an unexpected keyword argument 'cdn'` error, it looks like it still uploads to `https://s3.amazonaws.com/models.huggingface.co/bert/<namespace>/<dataset_name>` instead of `https://s3.amazonaws.com/datasets.huggingface.co/nlp/<namespace>/<dataset_name>`", "@lhoestq The endpoints in https://github.com/huggingface/nlp/blob/master/src/nlp/hf_api.py should be (depending on the type of file):\r\n```\r\nPOST /api/datasets/presign\r\nGET /api/datasets/listObjs\r\nDELETE /api/datasets/deleteObj\r\nPOST /api/metrics/presign \r\nGET /api/metrics/listObjs\r\nDELETE /api/metrics/deleteObj\r\n```\r\n\r\nIn addition to this, @thomwolf cleaned up the objects with dataclasses but you should revert this and re-align to the hf_api that's in this branch of transformers: https://github.com/huggingface/transformers/pull/4632 (so that potential new JSON attributes in the API output don't break existing versions of any library)", "New commands are\r\n```\r\nnlp-cli upload_dataset <path/to/dataset>\r\nnlp-cli upload_metric <path/to/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\nClosing this issue." ]
1,590,079,552,000
1,592,518,482,000
1,592,518,482,000
NONE
null
I look into `nlp-cli` and `user.py` to learn how to upload my own data. It is supposed to work like this - Register to get username, password at huggingface.co - `nlp-cli login` and type username, passworld - I have a single file to upload at `./ttc/ttc_freq_extra.csv` - `nlp-cli upload ttc/ttc_freq_extra.csv` But I got this error. ``` 2020-05-21 16:33:52.722464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 About to upload file /content/ttc/ttc_freq_extra.csv to S3 under filename ttc/ttc_freq_extra.csv and namespace korakot Proceed? [Y/n] y Uploading... This might take a while if files are large Traceback (most recent call last): File "/usr/local/bin/nlp-cli", line 33, in <module> service.run() File "/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py", line 234, in run token=token, filename=filename, filepath=filepath, organization=self.args.organization File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 141, in presign_and_upload urls = self.presign(token, filename=filename, organization=organization) File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 132, in presign return PresignedUrl(**d) TypeError: __init__() got an unexpected keyword argument 'cdn' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/181/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/179/comments
https://api.github.com/repos/huggingface/datasets/issues/179/events
https://github.com/huggingface/datasets/issues/179
622,525,410
MDU6SXNzdWU2MjI1MjU0MTA=
179
[Feature request] separate split name and split instructions
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split(\"train_stage2\")`, etc. or something like that.", "Thanks for the tip! I ended up setting up three different versions of the dataset with their own configs.\r\n\r\nfor the named splits, I was trying with `nlp.Split(\"train-stage1\")`, which fails. Changing to `nlp.Split(\"train_stage1\")` works :) I looked for examples of what works in the code comments, it may be worth adding some examples of valid/invalid names in there?" ]
1,590,070,251,000
1,590,154,268,000
1,590,154,267,000
MEMBER
null
Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction. This makes it impossible to have several training sets, which can occur when: - A dataset corresponds to a collection of sub-datasets - A dataset was built in stages, adding new examples at each stage Would it be possible to have two separate fields in the Split class, a name /instruction and a unique ID that is used as the key in the builder's split_dict ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/179/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/175/comments
https://api.github.com/repos/huggingface/datasets/issues/175/events
https://github.com/huggingface/datasets/issues/175
621,929,428
MDU6SXNzdWU2MjE5Mjk0Mjg=
175
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,994,032,000
1,589,998,730,000
1,589,998,730,000
MEMBER
null
v 0.1.0 from pip ```python import nlp xsum = nlp.load_dataset('xsum') ``` Issue is `dl_manager.manual_dir`is `None` ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-42-8a32f066f3bd> in <module> ----> 1 xsum = nlp.load_dataset('xsum') ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 397 split_dict = SplitDict(dataset_name=self.name) 398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 400 # Checksums verification 401 if verify_infos: ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager) 102 with open(dl_path, "r") as json_file: 103 split_ids = json.load(json_file) --> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads") 105 return [ 106 nlp.SplitGenerator( ~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p) 78 will be discarded. An empty last part will result in a path that 79 ends with a separator.""" ---> 80 a = os.fspath(a) 81 sep = _get_sep(a) 82 path = a TypeError: expected str, bytes or os.PathLike object, not NoneType ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/175/timeline
null
null
null
false
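Sketch related to issue #175 above, where `dl_manager.manual_dir` is `None`: assuming the `data_dir` argument visible in the quoted `load_dataset` signature is what populates the manual directory for datasets that require manually downloaded files, the call could look like the following. The path is a placeholder.

```python
import nlp

# Assumption: data_dir is forwarded to the download manager as manual_dir,
# so xsum can find the manually prepared "xsum-extracts-from-downloads" folder.
xsum = nlp.load_dataset('xsum', data_dir='/path/to/xsum/manual')
```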
https://api.github.com/repos/huggingface/datasets/issues/174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/174/comments
https://api.github.com/repos/huggingface/datasets/issues/174/events
https://github.com/huggingface/datasets/issues/174
621,928,403
MDU6SXNzdWU2MjE5Mjg0MDM=
174
nlp.load_dataset('xsum') -> TypeError
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,993,949,000
1,589,996,626,000
1,589,996,626,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/174/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/172/comments
https://api.github.com/repos/huggingface/datasets/issues/172/events
https://github.com/huggingface/datasets/issues/172
621,377,386
MDU6SXNzdWU2MjEzNzczODY=
172
Clone not working on Windows environment
{ "login": "codehunk628", "id": 51091425, "node_id": "MDQ6VXNlcjUxMDkxNDI1", "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codehunk628", "html_url": "https://github.com/codehunk628", "followers_url": "https://api.github.com/users/codehunk628/followers", "following_url": "https://api.github.com/users/codehunk628/following{/other_user}", "gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}", "starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions", "organizations_url": "https://api.github.com/users/codehunk628/orgs", "repos_url": "https://api.github.com/users/codehunk628/repos", "events_url": "https://api.github.com/users/codehunk628/events{/privacy}", "received_events_url": "https://api.github.com/users/codehunk628/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Should be fixed on master now :)", "Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂" ]
1,589,935,514,000
1,590,238,153,000
1,590,233,272,000
CONTRIBUTOR
null
Cloning in a windows environment is not working because of use of special character '?' in folder name .. Please consider changing the folder name .... Reference to folder - nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/ error log: fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/172/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/168/comments
https://api.github.com/repos/huggingface/datasets/issues/168/events
https://github.com/huggingface/datasets/issues/168
620,959,819
MDU6SXNzdWU2MjA5NTk4MTk=
168
Loading 'wikitext' dataset fails
{ "login": "itay1itzhak", "id": 25987633, "node_id": "MDQ6VXNlcjI1OTg3NjMz", "avatar_url": "https://avatars.githubusercontent.com/u/25987633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itay1itzhak", "html_url": "https://github.com/itay1itzhak", "followers_url": "https://api.github.com/users/itay1itzhak/followers", "following_url": "https://api.github.com/users/itay1itzhak/following{/other_user}", "gists_url": "https://api.github.com/users/itay1itzhak/gists{/gist_id}", "starred_url": "https://api.github.com/users/itay1itzhak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itay1itzhak/subscriptions", "organizations_url": "https://api.github.com/users/itay1itzhak/orgs", "repos_url": "https://api.github.com/users/itay1itzhak/repos", "events_url": "https://api.github.com/users/itay1itzhak/events{/privacy}", "received_events_url": "https://api.github.com/users/itay1itzhak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128", "Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.", "Closing as it is a duplicate", "Hi,\r\nThe squad bug seems to be fixed, but the loading of the 'wikitext' still suffers from this problem (on Colab with pyarrow=0.17.1).", "When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.", "That was it, thanks!" ]
1,589,893,469,000
1,590,529,612,000
1,590,529,612,000
NONE
null
Loading the 'wikitext' dataset fails with Attribute error: Code to reproduce (From example notebook): import nlp wikitext_dataset = nlp.load_dataset('wikitext') Error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-17-d5d9df94b13c> in <module>() 11 12 # Load a dataset and print the first examples in the training set ---> 13 wikitext_dataset = nlp.load_dataset('wikitext') 14 print(wikitext_dataset['train'][0]) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 363 verify_infos = not save_infos and not ignore_verifications 364 self._download_and_prepare( --> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 366 ) 367 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 416 try: 417 # Prepare split will record examples associated to the split --> 418 self._prepare_split(split_generator, **prepare_split_kwargs) 419 except OSError: 420 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 594 example = self.info.features.encode_example(record) 595 writer.write(example) --> 596 num_examples, num_bytes = writer.finalize() 597 598 assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}" /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream) 173 def finalize(self, close_stream=True): 174 if self.pa_writer is not None: --> 175 self.write_on_file() 176 self.pa_writer.close() 177 if close_stream: /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 124 else: 125 # All good --> 126 self._write_array_on_file(pa_array) 127 self.current_rows = [] 128 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 93 def _write_array_on_file(self, pa_array): 94 """Write a PyArrow Array""" ---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 96 self._num_bytes += pa_array.nbytes 97 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/168/timeline
null
null
null
false
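Quick check related to issue #168 above, where the maintainers point to a stale `pyarrow` in the Colab runtime: a minimal sketch for verifying that the interpreter actually picked up the upgraded package (after restarting the runtime, as advised in the comments) before retrying the load.

```python
import pyarrow
# A stale runtime keeps the pre-installed pyarrow loaded; the version printed
# here should match the freshly installed one before retrying the load below.
print(pyarrow.__version__)

import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
print(wikitext_dataset['train'][0])
```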
https://api.github.com/repos/huggingface/datasets/issues/166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/166/comments
https://api.github.com/repos/huggingface/datasets/issues/166/events
https://github.com/huggingface/datasets/issues/166
620,850,218
MDU6SXNzdWU2MjA4NTAyMTg=
166
Add a method to shuffle a dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
null
[]
null
[ "+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)", "+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.", "As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https://youtu.be/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n", "We added the `.shuffle` method :)\r\n\r\nClosing this one." ]
1,589,882,926,000
1,592,924,853,000
1,592,924,852,000
MEMBER
null
Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method. Also, we could maybe have a clear indication of which method modify in-place and which methods return/cache a modified dataset. I kinda like torch conversion of having an underscore suffix for all the methods which modify a dataset in-place. What do you think?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/166/timeline
null
null
null
false
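Usage sketch for the `.shuffle` method whose addition closes issue #166 above. The `seed` keyword follows the signature proposed in the issue body; the released signature may differ slightly.

```python
import nlp

train = nlp.load_dataset('boolq', split='train')

# shuffle returns a new (cached) dataset rather than modifying in place,
# in line with the in-place naming discussion in the issue.
shuffled = train.shuffle(seed=42)
print(shuffled[0])
```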
https://api.github.com/repos/huggingface/datasets/issues/165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/165/comments
https://api.github.com/repos/huggingface/datasets/issues/165/events
https://github.com/huggingface/datasets/issues/165
620,758,221
MDU6SXNzdWU2MjA3NTgyMjE=
165
ANLI
{ "login": "douwekiela", "id": 6024930, "node_id": "MDQ6VXNlcjYwMjQ5MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/douwekiela", "html_url": "https://github.com/douwekiela", "followers_url": "https://api.github.com/users/douwekiela/followers", "following_url": "https://api.github.com/users/douwekiela/following{/other_user}", "gists_url": "https://api.github.com/users/douwekiela/gists{/gist_id}", "starred_url": "https://api.github.com/users/douwekiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/douwekiela/subscriptions", "organizations_url": "https://api.github.com/users/douwekiela/orgs", "repos_url": "https://api.github.com/users/douwekiela/repos", "events_url": "https://api.github.com/users/douwekiela/events{/privacy}", "received_events_url": "https://api.github.com/users/douwekiela/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,589,874,657,000
1,589,977,387,000
1,589,977,387,000
NONE
null
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/165/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/164/comments
https://api.github.com/repos/huggingface/datasets/issues/164/events
https://github.com/huggingface/datasets/issues/164
620,540,250
MDU6SXNzdWU2MjA1NDAyNTA=
164
Add Spanish POR and NER Datasets
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?", "What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?" ]
1,589,840,301,000
1,590,424,125,000
1,590,424,125,000
NONE
null
Hi guys, In order to cover multilingual support a little step could be adding standard Datasets used for Spanish NER and POS tasks. I can provide it in raw and preprocessed formats.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/164/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/163/comments
https://api.github.com/repos/huggingface/datasets/issues/163/events
https://github.com/huggingface/datasets/issues/163
620,534,307
MDU6SXNzdWU2MjA1MzQzMDc=
163
[Feature request] Add cos-e v1.0
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann", "cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n", "@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.", "I do have a copy of the dataset. I can upload it to our repo.", "Great @nazneenrajani. let me know once done.\r\nThanks", "@mariamabarham @sarahwie I added them to the cos-e repo https://github.com/salesforce/cos-e/tree/master/data/v1.0", "You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !", "Thanks!", "@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ", "> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)" ]
1,589,839,526,000
1,592,349,325,000
1,592,333,526,000
NONE
null
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/163/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/161/comments
https://api.github.com/repos/huggingface/datasets/issues/161/events
https://github.com/huggingface/datasets/issues/161
620,487,535
MDU6SXNzdWU2MjA0ODc1MzU=
161
Discussion on version identifier & MockDataLoaderManager for test data
{ "login": "EntilZha", "id": 1382460, "node_id": "MDQ6VXNlcjEzODI0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EntilZha", "html_url": "https://github.com/EntilZha", "followers_url": "https://api.github.com/users/EntilZha/followers", "following_url": "https://api.github.com/users/EntilZha/following{/other_user}", "gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}", "starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions", "organizations_url": "https://api.github.com/users/EntilZha/orgs", "repos_url": "https://api.github.com/users/EntilZha/repos", "events_url": "https://api.github.com/users/EntilZha/events{/privacy}", "received_events_url": "https://api.github.com/users/EntilZha/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https://github.com/EntilZha/nlp/blob/master/tests/utils.py#L127\r\n\r\n", "I'm quite positive that you can just replace the `dl_manager.download()` statements here: https://github.com/EntilZha/nlp/blob/4d46443b65f1f756921db8275594e6af008a1de7/datasets/qanta/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? ", "I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets/qanta/dummy/mode=first,char_skip=25/2018.4.18/dummy_data-zip-extracted/dummy_data'\r\n\r\nsrc/nlp/datasets/qanta/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet", "From what I can tell here: https://github.com/huggingface/nlp/blob/master/tests/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?", "I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\"./datasets/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. 
", "The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?", "> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ", "Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia", "> Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.", "Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing/versioning system.- so we could let the user quite free on it.", "I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https://github.com/huggingface/nlp/pull/169 or update title?" ]
1,589,833,890,000
1,590,343,803,000
null
CONTRIBUTOR
null
Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/161/timeline
null
null
null
false
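Sketch for the versioning question in issue #161 above, following the maintainers' suggestion to keep `Version` in x.y.z form and record the snapshot date elsewhere (for example in the config name or description). The values are placeholders and the exact `BuilderConfig` fields are assumed from the discussion, not taken from the qanta script itself.

```python
import nlp

# Assumed sketch: three-number version string, with the date-based identifier
# kept in the config name/description instead of the version field.
config = nlp.BuilderConfig(
    name="mode=first,char_skip=25",
    version=nlp.Version("1.0.0"),
    description="Question/Wikipedia snapshot of 2018.04.18 (date recorded here, not in the version string)",
)
```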
https://api.github.com/repos/huggingface/datasets/issues/160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/160/comments
https://api.github.com/repos/huggingface/datasets/issues/160/events
https://github.com/huggingface/datasets/issues/160
620,448,236
MDU6SXNzdWU2MjA0NDgyMzY=
160
caching in map causes same result to be returned for train, validation and test
{ "login": "dpressel", "id": 247881, "node_id": "MDQ6VXNlcjI0Nzg4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/247881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dpressel", "html_url": "https://github.com/dpressel", "followers_url": "https://api.github.com/users/dpressel/followers", "following_url": "https://api.github.com/users/dpressel/following{/other_user}", "gists_url": "https://api.github.com/users/dpressel/gists{/gist_id}", "starred_url": "https://api.github.com/users/dpressel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dpressel/subscriptions", "organizations_url": "https://api.github.com/users/dpressel/orgs", "repos_url": "https://api.github.com/users/dpressel/repos", "events_url": "https://api.github.com/users/dpressel/events{/privacy}", "received_events_url": "https://api.github.com/users/dpressel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ", "Hi, the full example was listed in the PR above, but here is the exact link:\r\n\r\nhttps://github.com/dpressel/mead-baseline/blob/3c1aa3ca062cb23f303ca98ac40b6652b37ee971/api-examples/layers-classify-hf-datasets.py\r\n\r\nThe problem is coming from\r\n```\r\n if cache_file_name is None:\r\n # we create a unique hash from the function, current dataset file and the mapping args\r\n cache_kwargs = {\r\n \"with_indices\": with_indices,\r\n \"batched\": batched,\r\n \"batch_size\": batch_size,\r\n \"remove_columns\": remove_columns,\r\n \"keep_in_memory\": keep_in_memory,\r\n \"load_from_cache_file\": load_from_cache_file,\r\n \"cache_file_name\": cache_file_name,\r\n \"writer_batch_size\": writer_batch_size,\r\n \"arrow_schema\": arrow_schema,\r\n \"disable_nullable\": disable_nullable,\r\n }\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n```\r\nThe cached value is always the same, but I was able to change that by just renaming the function each time which seems to fix the issue.", "Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq ", "This fixed my issue (I think)\r\n\r\nhttps://github.com/dpressel/mead-baseline/commit/48aa8ecde4b307bd3e7dde5fe71e43a1d4769ee1", "> Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq\r\n\r\nOh, awesome! I see the PR, Ill check it out", "The PR should prevent the cache from losing track of the of the dataset type (based on the location of its data). Not sure about your second problem though (cache off).", "Yes, with caching on, it seems to work without the function renaming hack, I mentioned this also in the PR. Thanks!" ]
1,589,829,723,000
1,589,837,780,000
1,589,837,780,000
NONE
null
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is: ``` import nlp as nlp_datasets ... parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+') ... dataset = nlp_datasets.load_dataset(*args.dataset) ... # Create feature vocabs vocabs = create_vocabs(dataset.values(), vectorizers) ... # Create a function to vectorize based on vectorizers and vocabs: print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) # factory method to create a `convert_to_features` function based on vocabs convert_to_features = create_featurizer(vectorizers, vocabs) train_set = train_set.map(convert_to_features, batched=True) train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz) valid_set = valid_set.map(convert_to_features, batched=True) valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz) test_set = test_set.map(convert_to_features, batched=True) test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz) print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) ``` Im not sure if Im using it incorrectly, but the results are not what I expect. Namely, the `.map()` seems to grab the datset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets: ``` TS 67349 VS 872 ES 1821 TS 67349 VS 67349 ES 67349 ``` The behavior changes if I turn off the caching but then the results fail: ``` train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False) ``` Now I get the right set of features back... 
``` TS 67349 VS 872 ES 1821 100%|██████████| 68/68 [00:00<00:00, 92.78it/s] 100%|██████████| 1/1 [00:00<00:00, 75.47it/s] 0%| | 0/2 [00:00<?, ?it/s]TS 67349 VS 872 ES 1821 100%|██████████| 2/2 [00:00<00:00, 77.19it/s] ``` but I think its losing track of the original training set: ``` Traceback (most recent call last): File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module> for x in train_loader: File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__ output_all_columns=self._output_all_columns, File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pydict()) File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000) Process finished with exit code 1 ``` The full-example program (minus the print stmts) is here: https://github.com/dpressel/mead-baseline/pull/620/files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/160/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/159/comments
https://api.github.com/repos/huggingface/datasets/issues/159/events
https://github.com/huggingface/datasets/issues/159
620,420,700
MDU6SXNzdWU2MjA0MjA3MDA=
159
How can we add more datasets to nlp library?
{ "login": "Tahsin-Mayeesha", "id": 17886829, "node_id": "MDQ6VXNlcjE3ODg2ODI5", "avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tahsin-Mayeesha", "html_url": "https://github.com/Tahsin-Mayeesha", "followers_url": "https://api.github.com/users/Tahsin-Mayeesha/followers", "following_url": "https://api.github.com/users/Tahsin-Mayeesha/following{/other_user}", "gists_url": "https://api.github.com/users/Tahsin-Mayeesha/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tahsin-Mayeesha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tahsin-Mayeesha/subscriptions", "organizations_url": "https://api.github.com/users/Tahsin-Mayeesha/orgs", "repos_url": "https://api.github.com/users/Tahsin-Mayeesha/repos", "events_url": "https://api.github.com/users/Tahsin-Mayeesha/events{/privacy}", "received_events_url": "https://api.github.com/users/Tahsin-Mayeesha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Found it. https://github.com/huggingface/nlp/tree/master/datasets" ]
1,589,826,931,000
1,589,827,028,000
1,589,827,027,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/159/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/157/comments
https://api.github.com/repos/huggingface/datasets/issues/157/events
https://github.com/huggingface/datasets/issues/157
620,356,542
MDU6SXNzdWU2MjAzNTY1NDI=
157
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
{ "login": "saahiluppal", "id": 47444392, "node_id": "MDQ6VXNlcjQ3NDQ0Mzky", "avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saahiluppal", "html_url": "https://github.com/saahiluppal", "followers_url": "https://api.github.com/users/saahiluppal/followers", "following_url": "https://api.github.com/users/saahiluppal/following{/other_user}", "gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}", "starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions", "organizations_url": "https://api.github.com/users/saahiluppal/orgs", "repos_url": "https://api.github.com/users/saahiluppal/repos", "events_url": "https://api.github.com/users/saahiluppal/events{/privacy}", "received_events_url": "https://api.github.com/users/saahiluppal/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`", "If you want to load a local dataset, make sure you include a `./` before the folder name. ", "This happens by just doing run all cells on colab ... I assumed the colab example is broken. ", "Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n```\r\n!pip uninstall -y -qq pyarrow\r\n!pip uninstall -y -qq nlp\r\n!pip install -qq git+https://github.com/huggingface/nlp.git\r\n```", "> Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n> \r\n> ```\r\n> !pip uninstall -y -qq pyarrow\r\n> !pip uninstall -y -qq nlp\r\n> !pip install -qq git+https://github.com/huggingface/nlp.git\r\n> ```\r\n\r\nTried, having the same error.", "Can you post a link here of your colab? I'll make a copy of it and see what's wrong", "This should be fixed in the current version of the notebook. You can try it again", "Also see: https://github.com/huggingface/nlp/issues/222", "I am getting this error when running this command\r\n```\r\nval = nlp.load_dataset('squad', split=\"validation\")\r\n```\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/dataset_info.json'\r\n\r\nCan anybody help?", "It seems like your download was corrupted :-/ Can you run the following command: \r\n\r\n```\r\nrm -r /root/.cache/huggingface/datasets\r\n```\r\n\r\nto delete the cache completely and rerun the download? ", "I tried the notebook again today and it worked without barfing. 👌 " ]
1,589,820,398,000
1,591,344,538,000
1,591,344,538,000
NONE
null
I'm trying to load datasets from nlp but there seems to have error saying "TypeError: list_() takes exactly one argument (2 given)" gist can be found here https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/157/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/156/comments
https://api.github.com/repos/huggingface/datasets/issues/156/events
https://github.com/huggingface/datasets/issues/156
620,263,687
MDU6SXNzdWU2MjAyNjM2ODc=
156
SyntaxError with WMT datasets
{ "login": "tomhosking", "id": 9419158, "node_id": "MDQ6VXNlcjk0MTkxNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomhosking", "html_url": "https://github.com/tomhosking", "followers_url": "https://api.github.com/users/tomhosking/followers", "following_url": "https://api.github.com/users/tomhosking/following{/other_user}", "gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions", "organizations_url": "https://api.github.com/users/tomhosking/orgs", "repos_url": "https://api.github.com/users/tomhosking/repos", "events_url": "https://api.github.com/users/tomhosking/events{/privacy}", "received_events_url": "https://api.github.com/users/tomhosking/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !", "Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-3206959998b9> in <module>\r\n 1 import nlp\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('wmt14')\r\n 4 print(dataset['train'][0])\r\n\r\n~/.local/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 507 # Instantiate the dataset builder\r\n 508 builder_instance = builder_cls(\r\n--> 509 cache_dir=cache_dir, name=name, version=version, data_dir=data_dir, data_files=data_files, **config_kwargs,\r\n 510 )\r\n 511 \r\n\r\nTypeError: Can't instantiate abstract class Wmt with abstract methods _subsets\r\n```\r\n\r\n", "To correct this error I think you need the master branch of `nlp`. Can you try to install `nlp` with. `WMT` was not included at the beta release of the library. \r\n\r\nCan you try:\r\n`pip install git+https://github.com/huggingface/nlp.git`\r\n\r\nand check again? ", "That works, thanks :)\r\n\r\nThe WMT datasets are listed in by `list_datasets()` in the beta release on pypi - it would be good to only show datasets that are actually supported by that version?", "Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some \"core\" code of the library had to be changed as well. \r\n\r\n@thomwolf @lhoestq @julien-c - How should we go about this. If we add a dataset that also requires \"core\" code changes, how do we handle the versioning? The moment a dataset is on AWS it will actually be listed with `list_datasets()` in all earlier versions...\r\n\r\nIs there a way to somehow insert the `pip version` to the HfApi() and get only the datasets that were available for this version (at the date of the release of the version) @julien-c ? ", "We plan to have something like a `requirements.txt` per dataset to prevent user from loading dataset with old version of `nlp` or any other libraries. Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latests features of `nlp`.\r\n\r\nFor datasets that are on AWS but that use features that are not released yet we should be able to filter those from the `list_dataset` as soon as we have the `requirements.txt` feature on (filter datasets that need a future version of `nlp`).\r\n\r\nShall we rename this issue to be more explicit about the problem ?\r\nSomething like `Specify the minimum version of the nlp library required for each dataset` ?", "Closing this one.\r\nFeel free to re-open if you have other questions :)" ]
1,589,812,698,000
1,595,522,515,000
1,595,522,515,000
NONE
null
The following snippet produces a syntax error: ``` import nlp dataset = nlp.load_dataset('wmt14') print(dataset['train'][0]) ``` ``` Traceback (most recent call last): File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-8-3206959998b9>", line 3, in <module> dataset = nlp.load_dataset('wmt14') File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module> from .wmt_utils import Wmt, WmtConfig File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659 <<<<<<< HEAD ^ SyntaxError: invalid syntax ``` Python version: `3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]` Running on Ubuntu 18.04, via a Jupyter notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/156/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/153/comments
https://api.github.com/repos/huggingface/datasets/issues/153/events
https://github.com/huggingface/datasets/issues/153
619,972,246
MDU6SXNzdWU2MTk5NzIyNDY=
153
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.", "Actually, double checking with @mariamabarham, we already have this feature I think.\r\n\r\nIt's like this currently:\r\n```python\r\n>>> from nlp import load_dataset\r\n>>> \r\n>>> dataset = load_dataset('glue', 'cola', split='train')\r\n>>> print(dataset.info.citation)\r\n@article{warstadt2018neural,\r\n title={Neural Network Acceptability Judgments},\r\n author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},\r\n journal={arXiv preprint arXiv:1805.12471},\r\n year={2018}\r\n}\r\n@inproceedings{wang2019glue,\r\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\r\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\r\n note={In the Proceedings of ICLR.},\r\n year={2019}\r\n}\r\n\r\nNote that each GLUE dataset has its own citation. Please see the source to see\r\nthe correct citation for each contained dataset.\r\n```\r\n\r\nWhat do you think @dseddah?", "Looks good but why would there be a difference between the ref in the source and the one to be printed? ", "Yes, I think we should remove this warning @mariamabarham.\r\n\r\nIt's probably a relic of tfds which didn't have the same way to access citations. " ]
1,589,786,662,000
1,589,836,696,000
null
MEMBER
null
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessible and not only the generic citation of the meta-dataset itself. Let's take GLUE as an example: The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/153/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/149/comments
https://api.github.com/repos/huggingface/datasets/issues/149/events
https://github.com/huggingface/datasets/issues/149
619,735,739
MDU6SXNzdWU2MTk3MzU3Mzk=
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
{ "login": "danth", "id": 28959268, "node_id": "MDQ6VXNlcjI4OTU5MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danth", "html_url": "https://github.com/danth", "followers_url": "https://api.github.com/users/danth/followers", "following_url": "https://api.github.com/users/danth/following{/other_user}", "gists_url": "https://api.github.com/users/danth/gists{/gist_id}", "starred_url": "https://api.github.com/users/danth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danth/subscriptions", "organizations_url": "https://api.github.com/users/danth/orgs", "repos_url": "https://api.github.com/users/danth/repos", "events_url": "https://api.github.com/users/danth/events{/privacy}", "received_events_url": "https://api.github.com/users/danth/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?" ]
1,589,730,159,000
1,589,821,306,000
1,589,821,306,000
NONE
null
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/149/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/148/comments
https://api.github.com/repos/huggingface/datasets/issues/148/events
https://github.com/huggingface/datasets/issues/148
619,590,555
MDU6SXNzdWU2MTk1OTA1NTU=
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
1,589,680,133,000
1,589,787,513,000
1,589,787,513,000
CONTRIBUTOR
null
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-52471d2a0088> in <module>() ----> 1 dataset = nlp.load_dataset('wikipedia') 1 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/148/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/147/comments
https://api.github.com/repos/huggingface/datasets/issues/147/events
https://github.com/huggingface/datasets/issues/147
619,581,907
MDU6SXNzdWU2MTk1ODE5MDc=
147
Error with sklearn train_test_split
{ "login": "ClonedOne", "id": 6853743, "node_id": "MDQ6VXNlcjY4NTM3NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ClonedOne", "html_url": "https://github.com/ClonedOne", "followers_url": "https://api.github.com/users/ClonedOne/followers", "following_url": "https://api.github.com/users/ClonedOne/following{/other_user}", "gists_url": "https://api.github.com/users/ClonedOne/gists{/gist_id}", "starred_url": "https://api.github.com/users/ClonedOne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ClonedOne/subscriptions", "organizations_url": "https://api.github.com/users/ClonedOne/orgs", "repos_url": "https://api.github.com/users/ClonedOne/repos", "events_url": "https://api.github.com/users/ClonedOne/events{/privacy}", "received_events_url": "https://api.github.com/users/ClonedOne/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Indeed. Probably we will want to have a similar method directly in the library", "Related: #166 " ]
1,589,675,304,000
1,592,497,403,000
1,592,497,403,000
NONE
null
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed) ``` throws: ``` ValueError: Can only get row(s) (int or slice) or columns (string). ``` It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/147/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/143/comments
https://api.github.com/repos/huggingface/datasets/issues/143/events
https://github.com/huggingface/datasets/issues/143
619,457,641
MDU6SXNzdWU2MTk0NTc2NDE=
143
ArrowTypeError in squad metrics
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```" ]
1,589,630,797,000
1,590,154,732,000
1,590,154,608,000
MEMBER
null
`squad_metric.compute` is giving following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is how my predictions and references look like ``` predictions[0] # {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ``` ``` references[0] # {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ``` These are structured as per the `squad_metric.compute` help string.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/143/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/138/comments
https://api.github.com/repos/huggingface/datasets/issues/138/events
https://github.com/huggingface/datasets/issues/138
619,225,191
MDU6SXNzdWU2MTkyMjUxOTE=
138
Consider renaming to nld
{ "login": "honnibal", "id": 8059750, "node_id": "MDQ6VXNlcjgwNTk3NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/honnibal", "html_url": "https://github.com/honnibal", "followers_url": "https://api.github.com/users/honnibal/followers", "following_url": "https://api.github.com/users/honnibal/following{/other_user}", "gists_url": "https://api.github.com/users/honnibal/gists{/gist_id}", "starred_url": "https://api.github.com/users/honnibal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/honnibal/subscriptions", "organizations_url": "https://api.github.com/users/honnibal/orgs", "repos_url": "https://api.github.com/users/honnibal/repos", "events_url": "https://api.github.com/users/honnibal/events{/privacy}", "received_events_url": "https://api.github.com/users/honnibal/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
null
[]
null
[ "I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n", "Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.", "I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.", "Interesting, thanks for sharing your thoughts.\r\n\r\nAs we’ll move toward a first non-beta release, we will pool the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work 😉", "I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. 
local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers/ML engineers/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable", "By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.", "> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https://twitter.com/Thom_Wolf/status/1261282491622731781\r\n\r\nI'm confused 😕 ", "Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ", "I guess indeed", "I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). 
If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/", "I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps://github.com/huggingface/pytorch-openai-transformer-lm\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT\r\nhttps://github.com/huggingface/pytorch-transformers\r\nand now\r\nhttps://github.com/huggingface/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https://github.com/huggingface/datasets/pull/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular / vision / audio?).", "Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` " ]
1,589,574,207,000
1,608,238,591,000
1,601,251,690,000
NONE
null
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme. If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere. If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order. I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider. I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/138/reactions", "total_count": 32, "+1": 32, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/138/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/137/comments
https://api.github.com/repos/huggingface/datasets/issues/137/events
https://github.com/huggingface/datasets/issues/137
619,214,645
MDU6SXNzdWU2MTkyMTQ2NDU=
137
Tokenized BLEU considered harmful - Discussion on community-based process
{ "login": "kpu", "id": 247512, "node_id": "MDQ6VXNlcjI0NzUxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/247512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kpu", "html_url": "https://github.com/kpu", "followers_url": "https://api.github.com/users/kpu/followers", "following_url": "https://api.github.com/users/kpu/following{/other_user}", "gists_url": "https://api.github.com/users/kpu/gists{/gist_id}", "starred_url": "https://api.github.com/users/kpu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kpu/subscriptions", "organizations_url": "https://api.github.com/users/kpu/orgs", "repos_url": "https://api.github.com/users/kpu/repos", "events_url": "https://api.github.com/users/kpu/events{/privacy}", "received_events_url": "https://api.github.com/users/kpu/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" }, { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
open
false
null
[]
null
[ "I second this request. The bottom line is that **scores produced with different reference tokenizations are not comparable**. To discourage (even inadvertent) cheating, the user should never touch the reference. The `v13a` tokenization standard is not ideal, but at least it has been consistently used at matrix.statmt.org, facilitating comparisons.\r\n\r\nSacrebleu exposes [all its data sources](https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/dataset.py) and additionally provides [an API](https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/__init__.py) to accessing the references, which seem to fit within the spirit of your codebase.", "Didn't we have a slide and discussion at WMT admitting that, for production-quality models, BLEU doesn't correlate with human eval anyway?\r\n", "Yes, there are slides like that at WMT every year :) BLEU correlates with human judgment only at coarse levels, and it seems to be getting worse when people try to use it to do model selection among high-performing neural systems.\r\n\r\nHowever, the point isn't whether BLEU is a good metric, but whether your BLEU score can be compared to other BLEU scores. They only can be compared if you use the same reference tokenization (similar to how you [can't compare LM perplexities across different segmentations](https://sjmielke.com/comparing-perplexities.htm)). sacrebleu was an attempt to get everyone to use WMT's reference tokenization (meaning, your system has to first remove its own tokenization) so that you could just compare across papers. This also prevents scores from being gamed.", "I do not consider as a sufficient solution switching this library's default metric from BLEU to the wrapper around SacreBLEU. \r\n\r\nAs currently implemented, the wrapper allows end users to toggle SacreBLEU options, but doesn't pass along the SacreBLEU signature. As @mjpost showed in [Post18](https://www.aclweb.org/anthology/W18-6319.pdf), it's simply not credible to assume that people will stick to the defaults, therefore, the signature is necessary to be explicit about what options were used. \r\n\r\nIn addition to the `v13a` or `intl` options for the SacreBLEU `tokenize` argument, which was pointed out earlier, papers frequently differ on whether they lowercase text before scoring (`lowercase`) and the smoothing method used (`smooth_method`). BLEU scores can differ substantially (over 1 BLEU) just by changing these options. \r\n\r\nLosing the SacreBLEU signature is a regression in reproducibility and clarity.\r\n\r\n(Perhaps this should belong in a separate issue?)", "Thanks for sharing your thoughts. This is a very important discussion.\r\n\r\nAlso one of the first items on our mid-term roadmap (we will try to clean it and share it soon) is to introduce mechanisms to get high-quality traceability and reproducibility for all the processes related to the library.\r\n\r\nSo having the signature for `sacrebleu` is really important!\r\n\r\nRegarding BLEU, I guess we can just remove it from the canonical metrics included in the repo itself (it won't prevent people to add it as \"user-metrics\" but at least we won't be promoting it).\r\n\r\nOn a more general note (definitely too large for the scope of this issue) we are wondering, with @srush in particular, how we could handle the selection of metrics/datasets with the most community-based and bottom-up approach possible. 
If you have opinions on this, please share!", "Yeah, I would love to have discussions about ways this project can have an community-based, transparent process to arrive at strong default metrics. @kpu / @mjpost do you have any suggestions of how that might work or pointers to places where this is done right? Perhaps this question can be template for what is likely to be repeated for other datasets.", "I think @bittlingmayer is referring to Figure 6 in http://statmt.org/wmt19/pdf/53/WMT02.pdf . When you look at Appendix A there are some cases where metrics fall apart at the high end and some where they correlate well. en-zh is arguably production-quality. \r\n\r\nThis could evolve into a metrics Bazaar where the value add is really the packaging and consistency: it installs/compiles the metrics for me, gives a reproducible name to use in publication (involve the authors; you don't want a different sacrebleu hash system), a version number, and evaluation of the metrics like http://ufallab.ms.mff.cuni.cz/~bojar/wmt19-metrics-task-package.tgz but run when code changes rather than once a year. ", "While a Bazaar setup works for models / datasets, I am not sure it is ideal for metrics ? Ideal from my perspective would be to have tasks with metrics moderated by experts who document, cite, and codify known pitchfalls (as above^) and make it non-trivial for beginners to mess it up. ", "@srush @thomwolf \r\n\r\nModelFront could provide (automated, \"QE-based\") evaluation for all the pretrained translation models you host. Not bottom-up and not valid for claiming SoTA, but independent, practical for builders and not top-down.\r\n\r\nFor that I would also suggest some diverse benchmarks (so split it out into datasets with only user-generated data, or only constants, or only UI strings, or only READMEs) which tease out known trade-offs. Even hypothetical magic eval is limited if we always reduce it to a single number.\r\n\r\nRealistically people want to know how a model compares to an API like Google Translate, Microsoft Translator, DeepL or Yandex (especially for a language pair like EN:RU, or for the many languages that only Yandex supports), and that could be done too.\r\n", "Very important discussion.\r\nI am trying to understand the effects of tokenization.\r\nI wanted to ask which is a good practice.\r\nSacrebleu should be used on top of the tokenized output, or detokenized(raw) text?", "Use sacrebleu on detokenized output and raw unmodified references. " ]
1,589,573,314,000
1,610,016,088,000
null
NONE
null
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, tokenizers are like window managers: they can be endlessly customized and nobody has quite the same options. As @mjpost reported in https://www.aclweb.org/anthology/W18-6319.pdf BLEU configurations can vary by 1.8. Yet people are incorrectly putting non-comparable BLEU scores in the same table, such as Table 1 in https://arxiv.org/abs/2004.04902 . There are a few use cases for tokenized BLEU like Thai. For Chinese, people seem to use character BLEU for better or worse. The default easy option should be the one that's correct more often. And that is sacrebleu. Please don't make it easy for people to run what is usually the wrong option; it definitely shouldn't be `bleu`. Also, I know this is inherited from TensorFlow and, paging @lmthang, they should discourage it too.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/137/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/137/timeline
null
null
null
false
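A minimal sketch of the recommendation in the issue above: score detokenized system output against raw, untouched references with SacreBLEU and report the signature so the `tokenize`, `lowercase`, and `smooth_method` choices are explicit. This uses the standalone `sacrebleu` package (assumed here to be version 2.x), not this library's metric wrapper, and option names or defaults may differ between versions.

```python
# Sketch only: assumes the standalone sacrebleu package (>= 2.0) is installed.
# Hypotheses are detokenized system output; references are raw, unmodified text.
from sacrebleu.metrics import BLEU

hypotheses = ["The cat sat on the mat."]            # detokenized system output
references = [["The cat is sitting on the mat."]]   # one inner list per reference set

# Defaults written out explicitly; changing tokenize/lowercase/smooth_method changes
# the score, which is exactly why the signature should be reported with the number.
bleu = BLEU(tokenize="13a", lowercase=False, smooth_method="exp")
result = bleu.corpus_score(hypotheses, references)

print(result.score)          # corpus-level BLEU
print(bleu.get_signature())  # reproducibility signature: case / tokenizer / smoothing / version
```

Switching `tokenize="13a"` to `"intl"`, or `lowercase` to `True`, can move the score by more than a point on the same output, which is why the comments above treat the signature as non-optional.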
https://api.github.com/repos/huggingface/datasets/issues/133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/133/comments
https://api.github.com/repos/huggingface/datasets/issues/133/events
https://github.com/huggingface/datasets/issues/133
619,094,954
MDU6SXNzdWU2MTkwOTQ5NTQ=
133
[Question] Using/adding a local dataset
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\r\nDoes it make sense?", "Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks", "Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.", "The download manager supports local directories. You can specify a local directory instead of a url and it should work.", "Closing this one.\r\nFeel free to re-open if you have other questions :)" ]
1,589,559,966,000
1,595,522,649,000
1,595,522,649,000
NONE
null
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/133/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/133/timeline
null
null
null
false
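A short sketch of the local-script workflow described in the comments above. The paths are hypothetical placeholders, and the exact `load_dataset` signature (in particular whether `data_dir` is accepted) should be checked against the installed version of the library.

```python
# Sketch only: "./my_dataset.py" and "./my_raw_data" are hypothetical placeholder paths.
from nlp import load_dataset  # the package was later renamed to `datasets`

# Point load_dataset at a local copy of a dataset script instead of a hub identifier.
dataset = load_dataset("./my_dataset.py")

# The download manager also accepts local directories, so a script that normally
# downloads its data can be pointed at files that are already on disk.
dataset = load_dataset("./my_dataset.py", data_dir="./my_raw_data")

print(dataset)
```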
https://api.github.com/repos/huggingface/datasets/issues/132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/132/comments
https://api.github.com/repos/huggingface/datasets/issues/132/events
https://github.com/huggingface/datasets/issues/132
619,077,851
MDU6SXNzdWU2MTkwNzc4NTE=
132
[Feature Request] Add the OpenWebText dataset
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8", "Closing since it's been added in #660 " ]
1,589,558,249,000
1,602,080,568,000
1,602,080,568,000
MEMBER
null
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/132/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/132/timeline
null
null
null
false
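Since the comments above note that the corpus was eventually added in #660, loading it should reduce to the usual one-liner. The dataset identifier and the `"text"` column below are assumptions to verify against the library's dataset hub before relying on them.

```python
# Sketch only: the "openwebtext" identifier and the "text" column are assumed, not verified here.
from datasets import load_dataset

openwebtext = load_dataset("openwebtext", split="train")
print(openwebtext[0]["text"][:200])
```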
https://api.github.com/repos/huggingface/datasets/issues/131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/131/comments
https://api.github.com/repos/huggingface/datasets/issues/131/events
https://github.com/huggingface/datasets/issues/131
619,073,731
MDU6SXNzdWU2MTkwNzM3MzE=
131
[Feature request] Add Toronto BookCorpus dataset
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it involves copyright problem...", "Hi, @lhoestq, just a reminder that this is solved by #248 .😉 " ]
1,589,557,844,000
1,593,379,651,000
1,593,379,651,000
CONTRIBUTOR
null
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/131/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/131/timeline
null
null
null
false