url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (int64) | created_at | updated_at | closed_at | author_association (string) | active_lock_reason (float64) | draft (float64) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (float64) | state_reason (string) | existe_pull_request (bool) | comentarios (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/333/comments | https://api.github.com/repos/huggingface/datasets/issues/333/events | https://github.com/huggingface/datasets/pull/333 | 649,236,516 | MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0 | 333 | fix variable name typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 2 | "2020-07-01T19:13:50Z" | "2020-07-24T15:43:31Z" | "2020-07-24T08:32:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/333",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/333"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/333/timeline | null | null | true | [
"Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```",
"Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/332/comments | https://api.github.com/repos/huggingface/datasets/issues/332/events | https://github.com/huggingface/datasets/pull/332 | 649,140,135 | MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz | 332 | Add wiki_dpr | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-07-01T17:12:00Z" | "2020-07-06T12:21:17Z" | "2020-07-06T12:21:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/332",
"merged_at": "2020-07-06T12:21:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/332"
} | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists in 21M passages from the english wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of urls as input of the download_manager | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/332/timeline | null | null | true | [
"The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.",
"It's ok to merge now imo. I'll make another PR if we find a way to have the missing embeddings"
] |
https://api.github.com/repos/huggingface/datasets/issues/331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/331/comments | https://api.github.com/repos/huggingface/datasets/issues/331/events | https://github.com/huggingface/datasets/issues/331 | 648,533,199 | MDU6SXNzdWU2NDg1MzMxOTk= | 331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 5 | "2020-06-30T22:21:33Z" | "2020-07-09T13:03:40Z" | "2020-07-09T13:03:40Z" | CONTRIBUTOR | null | null | null | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
builder_instance.download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/331/timeline | null | completed | false | [
"I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```",
"> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k/8.90k [00:18<00:00, 486B/s]\r\n\r\nDownloading: 100%\r\n9.37k/9.37k [00:00<00:00, 234kB/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nDownloading:\r\n159M/? [00:09<00:00, 16.7MB/s]\r\n\r\nDownloading:\r\n376M/? [00:06<00:00, 62.6MB/s]\r\n\r\nDownloading:\r\n2.11M/? [00:06<00:00, 333kB/s]\r\n\r\nDownloading:\r\n46.4M/? [00:02<00:00, 18.4MB/s]\r\n\r\nDownloading:\r\n2.43M/? [00:00<00:00, 2.62MB/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0. 
Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```",
"In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)",
"Yes thanks for the support! I cleared out my cache folder and everything works fine now"
] |
https://api.github.com/repos/huggingface/datasets/issues/330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/330/comments | https://api.github.com/repos/huggingface/datasets/issues/330/events | https://github.com/huggingface/datasets/pull/330 | 648,525,720 | MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw | 330 | Doc red | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 0 | "2020-06-30T22:05:31Z" | "2020-07-06T12:10:39Z" | "2020-07-05T12:27:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/330.diff",
"html_url": "https://github.com/huggingface/datasets/pull/330",
"merged_at": "2020-07-05T12:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/330.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/330"
} | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this.
- As well as the relation id, the full relation name is mapped from `rel_info.json`
- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.
- Used the fix from #319 to allow nested sequences of dicts. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/330/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/329/comments | https://api.github.com/repos/huggingface/datasets/issues/329/events | https://github.com/huggingface/datasets/issues/329 | 648,446,979 | MDU6SXNzdWU2NDg0NDY5Nzk= | 329 | [Bug] FileLock dependency incompatible with filesystem | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 10 | "2020-06-30T19:45:31Z" | "2024-03-14T21:51:00Z" | "2020-06-30T21:33:06Z" | CONTRIBUTOR | null | null | null | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like this:
```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```
It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:
```python
open("/fsx/hello.txt").write("hello") # succeeds
from filelock import FileLock
with FileLock("/fsx/hello.lock"):
open("/fsx/hello.txt").write("hello") # hangs indefinitely
```
Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/329/timeline | null | completed | false | [
"Hi, can you give details on your environment/os/packages versions/etc?",
"Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. But Python is able to create new files and write to them outside of the FileLock package.\r\n\r\nWhen I attempt to use FileLock within a Docker container by writing to `/root/.cache/hello.txt`, it succeeds. So there's some permissions issue. But it's not a Docker configuration issue; I've replicated it without Docker.\r\n```bash\r\necho \"hello world\" >> hello.txt\r\nls -l\r\n\r\n-rw-rw-r-- 1 ubuntu ubuntu 10 Jun 30 19:52 hello.txt\r\n```",
"Looks like the `flock` syscall does not work on Lustre filesystems by default: https://github.com/benediktschmitt/py-filelock/issues/67.\r\n\r\nI added the `-o flock` option when mounting the filesystem, as [described here](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step2.html), which fixed the issue.",
"Awesome, thanks a lot for sharing your fix!",
"I'm wondering if this can be revisited. In some managed environments the same person using HF cannot change the file-system mount flags, (and the organization may be unwilling to change these flags due to other concerns) but can ensure that there won't be concurrent writes, for example because HF is offline and the models/datasets were downloaded earlier. \r\n\r\nThe real fix would be to FileLock itself, which does not seem very active and seems to not deal with failed system flock calls , which would be one way to fix this, as they mention in the issue below also raised by @jarednielsen \r\n\r\nhttps://github.com/tox-dev/py-filelock/issues/67",
"> I'm wondering if this can be revisited. In some managed environments the same person using HF cannot change the file-system mount flags, (and the organization may be unwilling to change these flags due to other concerns) but can ensure that there won't be concurrent writes, for example because HF is offline and the models/datasets were downloaded earlier.\r\n\r\nI am one of those users. Is there a work around for this?\r\n",
"The machines I use have a shared FS which has the filelock problem as well as a local one that does not. Using some env vars (HF_HOME, which controls both models and datasets, and HF_DATASETS_OFFLINE) for both transformers and datasets library one can influence where these downloads happen, and whether the locks get taken. I think some of the relevant documentation is here https://huggingface.co/docs/transformers/installation#cache-setup. I do end up using different settings when I download the models and when I use them, and have to rsync the models to the local file system using a separate script. ",
"Thanks @orm011 . These filesystems are such a pain. I'll dig around, looks like setting `cache_dir` to a non-lustre filesystem works for `transformers` but not `datasets`.",
"Note I `export HF_HOME=` in the shell prior to running python (I do not use the `cache_dir` argument, I think I ran into similar issues with it, nor `HF_DATASETS_CACHE` , though maybe that works, or maybe you can set it in python prior to importing the library ), and I change no other variables. Then `datasets.load_dataset()` works without any additional flags, and they go into `HF_HOME/datasets/` and the models go into `HF_HOME/transformers/` (and the lock files are all there as well). ",
"I am using a shared cluster with a lustre system that I can't change. I am unable to download or load datsets onto the filesystem because of file lock. @thomwolf can this issue be reopened? "
] |
https://api.github.com/repos/huggingface/datasets/issues/328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/328/comments | https://api.github.com/repos/huggingface/datasets/issues/328/events | https://github.com/huggingface/datasets/issues/328 | 648,326,841 | MDU6SXNzdWU2NDgzMjY4NDE= | 328 | Fork dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [] | closed | false | null | [] | null | 5 | "2020-06-30T16:42:53Z" | "2020-07-06T21:43:59Z" | "2020-07-06T21:43:59Z" | NONE | null | null | null | We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training a NER and Relations prediction heads.
Is there some good way to "fork" dataset-
EG
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 -> DatasetREL
or
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 + DatasetNER -> DatasetREL
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/328/timeline | null | completed | false | [
"To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for example). Custom dataset scripts can be called locally with `nlp.load_dataset(path_to_my_script_directory)`.\r\n\r\nThis should help you get what you call \"Dataset1\".\r\n\r\nThen using some dataset transforms like `.map` for example you can get to \"DatasetNER\" and \"DatasetREL\".\r\n",
"Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - \r\n\r\n```\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.py in __init__(self, source)\r\n 60 \r\n 61 def __init__(self, source):\r\n---> 62 self._open(source)\r\n 63 \r\n 64 \r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\nBut I'm going to give the generator_dataset_builder a try.\r\n\r\n1 more quick question -- can .map be used to output different length mappings -- could I skip one, or yield 2, can you map_batch ",
"You can use `.map(my_func, batched=True)` and return less examples, or more examples if you want",
"Thanks this answers my question. I think the issue I was having using the json loader were due to using gzipped jsonl files.\r\n\r\nThe error I get now is :\r\n\r\n```\r\n\r\nUsing custom data configuration test\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-38-29082a31e5b2> in <module>\r\n 5 print(ner_datafiles)\r\n 6 \r\n----> 7 ds = nlp.load_dataset(\"json\", \"test\", data_files=ner_datafiles[0])\r\n 8 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: int64> nor list<item: int64>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions.\r\n```\r\n\r\nIf I just create a pa- table manually like is done in the jsonloader -- it seems to work fine. 
Ths JSON I'm trying to load isn't overly complex - 1 integer field, the rest text fields with a nested list of objects with text fields .",
"I'll close this -- It's still unclear how to go about troubleshooting the json example as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for a better support for using nlp for making custom data-loaders."
] |
https://api.github.com/repos/huggingface/datasets/issues/327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/327/comments | https://api.github.com/repos/huggingface/datasets/issues/327/events | https://github.com/huggingface/datasets/pull/327 | 648,312,858 | MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw | 327 | set seed for suffling tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-30T16:21:34Z" | "2020-07-02T08:34:05Z" | "2020-07-02T08:34:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/327",
"merged_at": "2020-07-02T08:34:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/327"
} | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/327/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/326/comments | https://api.github.com/repos/huggingface/datasets/issues/326/events | https://github.com/huggingface/datasets/issues/326 | 648,126,103 | MDU6SXNzdWU2NDgxMjYxMDM= | 326 | Large dataset in Squad2-format | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00"
} | [] | closed | false | null | [] | null | 8 | "2020-06-30T12:18:59Z" | "2020-07-09T09:01:50Z" | "2020-07-09T09:01:50Z" | CONTRIBUTOR | null | null | null | At the moment we are building an large question answering dataset and think about sharing it with the huggingface community.
Caused the computing power we splitted it into multiple tiles, but they are all in the same format.
Right now the most important facts about are this:
- Contexts: 1.047.671
- questions: 1.677.732
- Answers: 6.742.406
- unanswerable: 377.398
It is already cleaned
<pre><code>
train_data = [
{
'context': "this is the context",
'qas': [
{
'id': "00002",
'is_impossible': False,
'question': "whats is this",
'answers': [
{
'text': "answer",
'answer_start': 0
}
]
},
{
'id': "00003",
'is_impossible': False,
'question': "question2",
'answers': [
{
'text': "answer2",
'answer_start': 1
}
]
}
]
}
]
</code></pre>
Cause it is growing every day we are thinking about an structure like this:
We host an Json file, containing all the download links and the script can load it dynamically.
At the moment it is around ~20GB
Any advice how to handle this, or an ready to use template ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/326/timeline | null | completed | false | [
"I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to let the users do their training/evaluations with the exact same version of the dataset.\r\nWe allow for each dataset to specify a version (ex: 1.0.0) and increment this number every time there are new samples in the dataset for example. Does it look like a good solution for you ? Or would you rather have one final version with the full dataset ?",
"It would also be good if there is any possibility for versioning, I think this way is much better than the dynamic way.\nIf you mean that part to put the tiles into one is the generation it would take up to 15-20 minutes on home computer hardware.\nAre there any compression or optimization algorithms while generating the dataset ?\nOtherwise the hardware limit is around 32 GB ram at the moment.\nIf everything works well we will add some more gigabytes of data in future what would make it pretty memory costly.",
"15-20 minutes is fine !\r\nAlso there's no RAM limitations as we save to disk every 1000 elements while generating the dataset by default.\r\nAfter generation, the dataset is ready to use with (again) no RAM limitations as we do memory-mapping.",
"Wow, that sounds pretty cool.\nActually I have the problem of running out of memory while tokenization on our local machine.\nThat wouldn't happen again, would it ?",
"You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)` that writes the tokenized texts on disk as well. And then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :)",
"Does it have an affect to the trainings speed ?",
"In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.",
"Closing this one. Feel free to re-open if you have other questions"
] |
https://api.github.com/repos/huggingface/datasets/issues/325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/325/comments | https://api.github.com/repos/huggingface/datasets/issues/325/events | https://github.com/huggingface/datasets/pull/325 | 647,601,592 | MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw | 325 | Add SQuADShifts dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4",
"events_url": "https://api.github.com/users/millerjohnp/events{/privacy}",
"followers_url": "https://api.github.com/users/millerjohnp/followers",
"following_url": "https://api.github.com/users/millerjohnp/following{/other_user}",
"gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/millerjohnp",
"id": 8953195,
"login": "millerjohnp",
"node_id": "MDQ6VXNlcjg5NTMxOTU=",
"organizations_url": "https://api.github.com/users/millerjohnp/orgs",
"received_events_url": "https://api.github.com/users/millerjohnp/received_events",
"repos_url": "https://api.github.com/users/millerjohnp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/millerjohnp"
} | [] | closed | false | null | [] | null | 1 | "2020-06-29T19:11:16Z" | "2020-06-30T17:07:31Z" | "2020-06-30T17:07:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/325",
"merged_at": "2020-06-30T17:07:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/325"
} | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/325/timeline | null | null | true | [
"Very cool to have this dataset, thank you for adding it :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/324/comments | https://api.github.com/repos/huggingface/datasets/issues/324/events | https://github.com/huggingface/datasets/issues/324 | 647,525,725 | MDU6SXNzdWU2NDc1MjU3MjU= | 324 | Error when calculating glue score | {
"avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4",
"events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}",
"followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers",
"following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}",
"gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/D-i-l-r-u-k-s-h-i",
"id": 47185867,
"login": "D-i-l-r-u-k-s-h-i",
"node_id": "MDQ6VXNlcjQ3MTg1ODY3",
"organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs",
"received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events",
"repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions",
"type": "User",
"url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i"
} | [] | closed | false | null | [] | null | 4 | "2020-06-29T16:53:48Z" | "2020-07-09T09:13:34Z" | "2020-07-09T09:13:34Z" | NONE | null | null | null | I was trying glue score along with other metrics here. But glue gives me this error;
```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
191 """
192 if predictions is not None:
--> 193 self.add_batch(predictions=predictions, references=references)
194 self.finalize(timeout=timeout)
195
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
207 if self.writer is None:
208 self._init_writer()
--> 209 self.writer.write_batch(batch)
210
211 def add(self, prediction=None, reference=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
155 if self.pa_writer is None:
156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
158 if writer_batch_size is None:
159 writer_batch_size = self.writer_batch_size
/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
TypeError: an integer is required (got type str)
```
I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/324/timeline | null | completed | false | [
"The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.",
"I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. 
Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```",
"MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).",
"Closing this one. Feel free to re-open if you have other questions :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/323/comments | https://api.github.com/repos/huggingface/datasets/issues/323/events | https://github.com/huggingface/datasets/pull/323 | 647,521,308 | MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3 | 323 | Add package path to sys when downloading package as github archive | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 2 | "2020-06-29T16:46:01Z" | "2020-07-30T14:00:23Z" | "2020-07-30T14:00:23Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323"
} | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
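
For reference, a minimal sketch of the trick (illustrative only — this is not the actual `prepare_module` code, and `archive_root`/`package_name` are placeholder names):

```python
import importlib
import sys


def import_downloaded_package(archive_root: str, package_name: str):
    # `archive_root` is assumed to be the extracted github archive directory that
    # contains the package (a folder with an `__init__.py`). Putting it on sys.path
    # lets the package's own internal imports resolve normally.
    if archive_root not in sys.path:
        sys.path.append(archive_root)
    return importlib.import_module(package_name)
```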
This PR fixes https://github.com/huggingface/nlp/issues/305 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/323/timeline | null | null | true | [
"Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ",
" I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the code ^^'\r\nWe could check if external imports have a `__init__.py` and if it is the case then we can add to directory to the `PYTHONPATH`"
] |
https://api.github.com/repos/huggingface/datasets/issues/322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/322/comments | https://api.github.com/repos/huggingface/datasets/issues/322/events | https://github.com/huggingface/datasets/pull/322 | 647,483,850 | MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2 | 322 | output nested dict in get_nearest_examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-29T15:47:47Z" | "2020-07-02T08:33:33Z" | "2020-07-02T08:33:32Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/322",
"merged_at": "2020-07-02T08:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/322"
} | As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example:
```python
my_examples = dataset[0:10]
print(type(my_examples))
# >>> dict
print(my_examples["my_column"][0]
# >>> this is the first element of the column 'my_column'
```
Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples:
```python
dataset.add_faiss_index(column="embeddings")
scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding)
print(type(examples))
# >>> dict
```
Previously it was returning a list[dict]. It was the only place that was using this output format.
To make it work I had to implement `__getitem__(key)` where `key` is a list.
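A quick sketch of the new list indexing, in the same spirit as the snippets above (reusing the assumed `dataset` and `my_column` placeholders):
```python
indices = [1, 10, 42]
batch = dataset[indices]      # extraction with a list of indices
print(type(batch))
# >>> dict
print(batch["my_column"])
# >>> values of 'my_column' at rows 1, 10 and 42
```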
This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/322/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/321/comments | https://api.github.com/repos/huggingface/datasets/issues/321/events | https://github.com/huggingface/datasets/issues/321 | 647,271,526 | MDU6SXNzdWU2NDcyNzE1MjY= | 321 | ERROR:root:mwparserfromhell | {
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shiro-LK",
"id": 26505641,
"login": "Shiro-LK",
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shiro-LK"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 10 | "2020-06-29T11:10:43Z" | "2022-02-14T15:21:46Z" | "2022-02-14T15:21:46Z" | NONE | null | null | null | Hi,
I am trying to download some Wikipedia data but I got this error for Spanish "es" (maybe some other languages have the same error too; I haven't tried all of them).
`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.`
The code I used was:
`dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/321/timeline | null | completed | false | [
"It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashes.\r\n\r\nIt will help us know if we have to fix it on our side or if it is a `mwparserfromhell` issue.",
"Hi, \r\n\r\nThank you for you answer.\r\nI have try to print the bad section using `try` and `except`, but it is a bit weird as the error seems to appear 3 times for instance, but the two first error does not print anything (as if the function did not go in the `except` part).\r\nFor the third one, I got that (I haven't display the entire text) :\r\n\r\n> error : ==== Parque nacional Cajas ====\r\n> {{AP|Parque nacional Cajas}}\r\n> [[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\n> El parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n> [[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\n> leturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n> Para acceder al parque desde la costa, la vía Molleturo-Cuenca es también la mejor opción.\r\n\r\nHow can I display the link instead of the text ? I suppose it will help you more ",
"The error appears several times as Apache Beam retries to process examples up to 4 times irc.\r\n\r\nI just tried to run this text into `mwparserfromhell` but it worked without the issue.\r\n\r\nI used this code (from the `wikipedia.py` script):\r\n```python\r\nimport mwparserfromhell as parser\r\nimport re\r\nimport six\r\n\r\nraw_content = r\"\"\"==== Parque nacional Cajas ====\r\n{{AP|Parque nacional Cajas}}\r\n[[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\nEl parque nacional Cajas está situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n[[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos más comunes al parque inician todos en Cuenca: Desde allí, la vía Cuenca-Mol\r\nleturo atraviesa en Control de [[Surocucho]] en poco más de 30 minutos de viaje; más adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde están el Centro Administrativo y de Información del parque. Siguiendo de largo hacia [[Molleturo]], por esta vía se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n\"\"\"\r\n\r\nwikicode = parser.parse(raw_content)\r\n\r\n# Filters for references, tables, and file/image links.\r\nre_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\r\n\r\ndef rm_wikilink(obj):\r\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\r\n\r\ndef rm_tag(obj):\r\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\r\n\r\ndef rm_template(obj):\r\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\r\n\r\ndef try_remove_obj(obj, section):\r\n try:\r\n section.remove(obj)\r\n except ValueError:\r\n # For unknown reasons, objects are sometimes not found.\r\n pass\r\n\r\nsection_text = []\r\nfor section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\r\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\r\n try_remove_obj(obj, section)\r\n\r\n section_text.append(section.strip_code().strip())\r\n```",
"Not sure why we're having this issue. Maybe could you get also the file that's causing that ?",
"thanks for your answer.\r\nHow can I know which file is causing the issue ? \r\nI am trying to load the spanish wikipedia data. ",
"Because of the way Apache Beam works we indeed don't have access to the file name at this point in the code.\r\nWe'll have to use some tricks I think :p \r\n\r\nYou can append `filepath` to `title` in `wikipedia.py:L512` for example. [[EDIT: it's L494 my bad]]\r\nThen just do `try:...except:` on the call of `_parse_and_clean_wikicode` L500 I guess.\r\n\r\nThanks for diving into this ! I tried it myself but I run out of memory on my laptop\r\nAs soon as we have the name of the file it should be easier to find what's wrong.",
"Thanks for your help.\r\n\r\nI tried to print the \"title\" of the document inside the` except (mwparserfromhell.parser.ParserError) as e`,the title displayed was : \"Campeonato Mundial de futsal de la AMF 2015\". (Wikipedia ES) Is it what you were looking for ?",
"Thanks a lot @Shiro-LK !\r\n\r\nI was able to reproduce the issue. It comes from [this table on wikipedia](https://es.wikipedia.org/wiki/Campeonato_Mundial_de_futsal_de_la_AMF_2015#Clasificados) that can't be parsed.\r\n\r\nThe file in which the problem occurs comes from the wikipedia dumps, and it can be downloaded [here](https://dumps.wikimedia.org/eswiki/20200501/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2)\r\n\r\nParsing the file this way raises the parsing issue:\r\n\r\n```python\r\nimport mwparserfromhell as parser\r\nfrom tqdm.auto import tqdm\r\nimport bz2\r\nimport six\r\nimport logging\r\nimport codecs\r\nimport xml.etree.cElementTree as etree\r\n\r\nfilepath = \"path/to/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2\"\r\n\r\ndef _extract_content(filepath):\r\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, \"rb\") as f:\r\n f = bz2.BZ2File(filename=f)\r\n if six.PY3:\r\n # Workaround due to:\r\n # https://github.com/tensorflow/tensorflow/issues/33563\r\n utf_f = codecs.getreader(\"utf-8\")(f)\r\n else:\r\n utf_f = f\r\n # To clear root, to free-up more memory than just `elem.clear()`.\r\n context = etree.iterparse(utf_f, events=(\"end\",))\r\n context = iter(context)\r\n unused_event, root = next(context)\r\n for unused_event, elem in tqdm(context, total=949087):\r\n if not elem.tag.endswith(\"page\"):\r\n continue\r\n namespace = elem.tag[:-4]\r\n title = elem.find(\"./{0}title\".format(namespace)).text\r\n ns = elem.find(\"./{0}ns\".format(namespace)).text\r\n id_ = elem.find(\"./{0}id\".format(namespace)).text\r\n # Filter pages that are not in the \"main\" namespace.\r\n if ns != \"0\":\r\n root.clear()\r\n continue\r\n raw_content = elem.find(\"./{0}revision/{0}text\".format(namespace)).text\r\n root.clear()\r\n\r\n if \"Campeonato Mundial de futsal de la AMF 2015\" in title:\r\n yield (id_, title, raw_content)\r\n\r\nfor id_, title, raw_content in _extract_content(filepath):\r\n wikicode = parser.parse(raw_content)\r\n```\r\n\r\nThe copied the raw content that can't be parsed [here](https://pastebin.com/raw/ZbmevLyH).\r\n\r\nThe minimal code to reproduce is:\r\n```python\r\nimport mwparserfromhell as parser\r\nimport requests\r\n\r\nraw_content = requests.get(\"https://pastebin.com/raw/ZbmevLyH\").content.decode(\"utf-8\")\r\nwikicode = parser.parse(raw_content)\r\n\r\n```\r\n\r\nI will create an issue on mwparserfromhell's repo to see if we can fix that\r\n",
"This going to be fixed in the next `mwparserfromhell` release :)",
"Fixed in `mwparserfromhell` version 0.6."
] |
https://api.github.com/repos/huggingface/datasets/issues/320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/320/comments | https://api.github.com/repos/huggingface/datasets/issues/320/events | https://github.com/huggingface/datasets/issues/320 | 647,188,167 | MDU6SXNzdWU2NDcxODgxNjc= | 320 | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | "2020-06-29T07:36:35Z" | "2020-06-29T14:44:42Z" | "2020-06-29T14:44:42Z" | CONTRIBUTOR | null | null | null | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
```
@srush @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/320/timeline | null | completed | false | [
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] |
https://api.github.com/repos/huggingface/datasets/issues/319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/319/comments | https://api.github.com/repos/huggingface/datasets/issues/319/events | https://github.com/huggingface/datasets/issues/319 | 646,792,487 | MDU6SXNzdWU2NDY3OTI0ODc= | 319 | Nested sequences with dicts | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 1 | "2020-06-27T23:45:17Z" | "2020-07-03T10:22:00Z" | "2020-07-03T10:22:00Z" | CONTRIBUTOR | null | null | null | Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`.
The original data is in this format:
```python
{
'title': "Title of wiki page",
'vertexSet': [
[
{ 'name': "mention_name",
'sent_id': "mention in which sentence",
'pos': ["postion of mention in a sentence"],
'type': "NER_type"},
{another mention}
],
[another entity]
]
...
}
```
So to represent this I've attempted to write:
```
...
features=nlp.Features({
"title": nlp.Value("string"),
"vertexSet": nlp.features.Sequence(nlp.features.Sequence({
"name": nlp.Value("string"),
"sent_id": nlp.Value("int32"),
"pos": nlp.features.Sequence(nlp.Value("int32")),
"type": nlp.Value("string"),
})),
...
}),
...
```
This is giving me the error:
```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string")))` or `nlp.features.Sequence({key: value, ...})`, just not nested sequences with a dict.
If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/319/timeline | null | completed | false | [
"Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/318/comments | https://api.github.com/repos/huggingface/datasets/issues/318/events | https://github.com/huggingface/datasets/pull/318 | 646,682,840 | MDExOlB1bGxSZXF1ZXN0NDQwOTExOTYy | 318 | Multitask | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 18 | "2020-06-27T13:27:29Z" | "2022-07-06T15:19:57Z" | "2022-07-06T15:19:57Z" | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/318",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/318"
} | Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
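A hypothetical usage sketch, based only on the description above (the real signature lives in this PR and the notebook; `task_a`/`task_b` are assumed to be `nlp.Dataset` objects already mapped to a shared schema):
```python
# Assumed: `build_multitask` is importable from this PR's module and the two
# datasets share the same columns after preprocessing.
multitask_train = build_multitask(task_a, task_b)  # returns a MultiDataset

# The MultiDataset mirrors the nlp.Dataset API for the implemented methods:
print(multitask_train.num_rows)
print(multitask_train.column_names)
```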
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/318/timeline | null | null | true | [
"It's definitely going in the right direction ! Thanks for giving it a try\r\n\r\nI really like the API.\r\nIMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.\r\nAll the formatting methods could easily be added though.\r\n\r\nI think there are some parts that will require some work with apache arrow like slicing. I can find a way to do it using pyarrow tables concatenation (I did something similar when implementing `__getitem__` with an input that is a list of indices [here](https://github.com/huggingface/nlp/pull/322/files#diff-73270df8d7f08c62a27e40806e1a5fb0R463-R469)). It is very fast and it allows to have the same output format as a normal Dataset.\r\n\r\nAlso maybe we should check that not only the columns but also the schemas match ?\r\nAnd maybe add the `seed` of the shuffling step as an argument ?\r\n\r\n",
"Maybe we should remove the methods that are not implemented for now, WDYT @thomwolf ?",
"That's an interesting first draft, thanks a lot for that and the user facing API is really nice.\r\n\r\nI think we should dive more into this and the questions of #217 before merging the first version though.\r\n\r\nIn particular, the typical way to do multi-tasking is usually to sample a task and then sample a batch within the selected task. I think we should probably stay be closer to this traditional approach, or at least make it very easy to do, rather than go to close to the T5 approach which is very specific to this paper.\r\n\r\nIn this regard, it seems important to find some way to address the remarks of @zphang. I'm still wondering if we should not adopt more of a sampling approach rather than an iteration approach.",
"@thomwolf Thanks! I mainly wanted to get something working quickly for my own MTL research. I agree with a lot of the points you made so I'll convert this pull request back to a draft.\r\n\r\nFor your specific point about 'batch-level' multitask mixing, it would be a pretty trivial change to add a `batch_size` parameter and ensure every `batch_size` examples are from the same task. This would certainly work, but would add a notion of 'batches' to a Dataset, which does feel like a 'Sampler-level' concept and not a Dataset one. There's also the possibility of wanting some specific task-level sampling functionality (e.g. applying `SortishSampler` to each task) which would only work with this kind of 2 step sampling approach. My first proposal in the transformers repo was actually a Sampler https://github.com/huggingface/transformers/issues/4340. I wonder whether functionality at the sampler-level has a place in the vision for the `nlp` repo?\r\n\r\nI imagine following a sampling approach you'd have to abandon maintaining the same user-facing API as a standard dataset (A shame because replacing a single dataset seamlessly with a multitask one is a really nice user-experience).\r\n\r\nRandom half-Idea: You could have a class which accepts a list of any iterables (either a Dataset or a DataLoader which already is doing the batching). Not sure what interface you'd present though. hmmm. \r\n\r\nThere's definitely more discussion to have. \r\n",
"Are there any updates on making multi-task learning more officially supported in the datasets/transformers libraries? \r\nGiven that many papers use more than one task, it would be great to have multi-task learning more officially supported and easier to use. There are a few notebooks/blogs about using HF Transformers for this, but they all mention that it's more of a hack and not really officially supported (e.g. [this notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=xW8bnTgCsx5c), or [this blog](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)). \r\n\r\n[jiant](https://github.com/nyu-mll/jiant) was a framework built on transformers that made multi-task learning a first class feature of the library until recently, but they stopped maintaining their library a month ago ([see here](https://github.com/nyu-mll/jiant)). \r\nThis could be a good reason to increase support from the HF team? @lhoestq @thomwolf \r\n\r\nI'm not advanced enough to contribute on this, but an up-to-date notebook showing how to train a model e.g. on both MLM and next-sentence-prediction would already be very useful!",
"> Are there any updates on making multi-task learning more officially supported in the datasets/transformers libraries? Given that many papers use more than one task, it would be great to have multi-task learning more officially supported and easier to use. There are a few notebooks/blogs about using HF Transformers for this, but they all mention that it's more of a hack and not officially supported (e.g. [this notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=xW8bnTgCsx5c), or [this blog](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)).\r\n> \r\n> [jiant](https://github.com/nyu-mll/jiant) was a framework built on transformers that made multi-task learning a first class feature of the library until recently, but they stopped maintaining their library a month ago. This could be a good reason to increase support from the HF team? @lhoestq\r\n> \r\n> I'm not advanced enough to contribute on this, but an up-to-date notebook showing how to train a model e.g. on both MLM and NSP would already be very useful!\r\n\r\nI kinda stopped working on this as I didn't really get any response on an actual workable solution.\r\n\r\nThe problem that I came up against after initially being redirected here after [proposing this in the transformers repo](https://github.com/huggingface/transformers/issues/4340) ([among](https://github.com/huggingface/transformers/issues/6872) [others](https://github.com/huggingface/transformers/issues/1856)) , was the request be able to do the multitask mixing at the batch level as well as at the level of individual examples. As this repo doesn't really have the concept of 'batches' it would need to be implemented in the transformers repo, rather than here. You could then pick which level to do your multitask learning on.\r\n\r\nWork on T5 and as of last week, on [exT5](https://arxiv.org/pdf/2111.10952.pdf), have shown that multitask mixing on the example level works incredibly well (with a big enough batch size), so if you're ok doing that, then this pull request works.\r\n\r\nI completely agree that multitask learning is a vital part of modern NLP, nearly every piece of research code I write has at least some aspect of multitask learning (currently using this patch). Many of the top GLUE and SuperGLUE submissions are using some aspect of mutlitask learning. We need to support it.",
"Fully agree. Batching and data loading is one important thing. The part I'm struggling with right now is the classification head (which is more part of the Transformers repo, but also essential for multi-task learning). @ghomasHudson, how do you tune two classification heads simultaneously? Say, when I want to fine-tune an existing base-model on some classification task (like NLI, or next-sentence-prediction) and at the same time add some MLM for regularisation & domain adaptation. In this case I need two classification heads, but I don't know how to switch them between the batches. ",
"> Fully agree. Batching and data loading is one important thing. The part I'm struggling with right now is the classification head (which is more part of the Transformers repo, but also essential for multi-task learning). @ghomasHudson, how do you tune two classification heads simultaneously? Say, when I want to fine-tune an existing base-model on some classification task (like NLI, or next-sentence-prediction) and at the same time add some MLM for regularisation & domain adaptation. In this case I need two classification heads, but I don't know how to switch them between the batches.\r\n\r\nThis pull request is mainly focused on getting the data in the right format, but you're right that there's no easy way to pick between the heads without something like jiant. You could of course replicate this functionality yourself - probably by making a class that implements the functionality of both `ModelNameForSequenceClassification` or `ModelNameForMaskedLM` picking between them depending on some task parameter you add to the forward pass. \r\n\r\njiant make this approach model agnostic by [ignoring the custom per-model head implementations of huggingface](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/heads.py#L17-L18), instead making generic versions. Then the jiant code [passes a `task` parameter](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/primary.py#L107-L109) into their [JiantModel](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/primary.py#L36-L79) wrapper. To implement this in huggingface transformers would require quite a few modifications to the current approach (potentially interfering with some other project aims e.g. code readability), so you might find it tricky to get a change like that accepted. It would be super cool though.\r\n\r\nAnd there's of course the exT5 way of doing things too where you sidestep this issue entirely by treating both tasks as text-to-text problems so you can end up with 100% shared parameters, e.g.\r\nMLM: `Lorem <mask_0> amet, consectetur <mask_1> do eiusmod tempor incididunt ut labore <mask_2>`\r\nNLI: `Premise: The Old One always comforted Ca'daan, except today. hypothesis: Ca'daan knew the Old One very well.`\r\nThis also allows you to do mixed batches of both tasks.\r\n\r\nPersonally, my research mainly focuses on this last approach, using the structure of the data itself to indicate the task rather than swapping in and out different parts of the network.",
"Hi! `jiant` maintainer here, don't have much to add to the conversation yet but I'm happy to share my experience/thoughts on working with Multitask models if people have questions.",
"Hi ! I think it could be easier to simply share as examples in `transformers` some code that uses `jiant` and/or subclass/reimplement some part of `transformers` for multitask ?",
"> Hi ! I think it could be easier to simply share as examples in `transformers` some code that uses `jiant` and/or subclass/reimplement some part of `transformers` for multitask ?\r\n\r\nWell since `jiant` requires new huggingface models to be explicitly added (as there are [\"subtle differences in the models that jiant must abstract\"](https://github.com/nyu-mll/jiant/blob/master/guides/models/adding_models.md)), and isn't being maintained anymore, then the first option might be out of date quickly.\r\n\r\nIf `transformers` could move towards making the task-specific heads more generic and as well as [creating a new base model in the `__init__` method](https://github.com/huggingface/transformers/blob/43f953cc2eec804eba04e2a9ae164d1a33fd97a8/src/transformers/models/bert/modeling_bert.py#L1502), allowing it to be passed as an argument (along with other little tweaks to standardize the approach), then this functionality could be moved into `transformers` itself.\r\n\r\nIt does seem a little redundant to have `jiant` as a library abstracting all the idiosyncrasies of each model type, where this could be done directly in the `transformers` repo in a single place alongside the model.\r\n\r\nIt's not an easy problem to solve though, especially balanced with the desire to expose models with minimal abstraction. @zphang probably knows more about this than me though.",
"As mentioned, one of the main obstacles is that HF/T doesn't support generic heads. At first glance, this should be easy, since the interface is quite simple: models output both a token-wise and a sequence representation (e.g. `[CLS]`), and heads use either one and output the corresponding predictions/losses.\r\n\r\nHowever, there are a number of cases where this doesn't work. One of them is multiple-choice tasks like HellaSwag, which is a multiple choice task with 4 text options. The way this is normally formatted is that you encode `context + question + option_X` for X=1..4, and then score all four options based on a scoring head and pick the highest scoring option as the prediction. This requires you to run the encoder on 4 separate inputs, which breaks the above abstraction (the task-specific model might need to call the encoder multiple times).\r\n\r\nAnother thing is batching. You can imagine with the above that you might want a different batch size for multiple-choice tasks compared to simpler classification tasks. This means you need task-specific batching as well. In addition, [it's been shown](https://arxiv.org/abs/2101.11038) that you really want to mix tasks within a single batch. This also leads into issues like how you want to sample different task examples, early stopping on them, how to mix the validation scores, etc. (`jiant` addressed these, through probably more-complicated-than-necessary configurations.)\r\n\r\nNone of these are insurmountable problems, but it requires some tweaking of the current code layout to get it to work. I would guess that it wouldn't take much work to get a 90% implementation.",
"> Another thing is batching. You can imagine with the above that you might want a different batch size for multiple-choice tasks compared to simpler classification tasks. This means you need task-specific batching as well. In addition, [it's been shown](https://arxiv.org/abs/2101.11038) that you really want to mix tasks within a single batch. This also leads into issues like how you want to sample different task examples, early stopping on them, how to mix the validation scores, etc. (`jiant` addressed these, through probably more-complicated-than-necessary configurations.)\r\n\r\nThat's reassuring. exT5 find the same thing - that mixing tasks together in a batch gives better performance (provided the batch size is big enough that each batch contains a mix of different tasks). Assuming this, we can ignore doing things at the batch-level and just do this at the individual example level - in which case this pull request already does the data mixing part of the problem! Balancing different tasks could easily be added here by implementing temperature-scaled mixing, custom weights, etc...\r\n\r\nTo make a generic implementation of this using different heads would be hard (impossible?) without doing the sub-batching that Muppet do - in which case we're back at dealing with the 'batch' (sub-batch) level which would need an implementation in `transformers` not here.\r\n\r\n",
"Mixing at example should work fine. One issue though is that, as mentioned above, different tasks maybe actually require different amounts of memory, so downstream the user would have to find some way to handle that. But this might be one of those \"the last/edge-case 10% is the hardest\" to handle kind of deals.",
"Very true - there's always going to be those cases. I also feel that the way things are going, if we just leave this for a few years no one will be wanting to use task-specific heads anymore - it'll all be task prompts included in the input a-la GPT, T5, etc... which will make this substantially simpler to implement.\r\n\r\nIt's quite tricky to make a suitably non-opinionated generic version of this at the moment.",
"> Is there an advantage to varying the proportions of each task in each batch\r\n\r\nSome tasks have much less data than others. E.g. SNLI vs. CoLA is almost a 100x difference, so people often sample differently-sized tasks differently.",
"As a short-term solution, I like @lhoestq's suggestion to create a notebook that shows how to implement multi-task learning by subclassing some transformer & dataset classes in a general way. I've been trying to get @zphang's [great but old notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=CQ39AbTAPAUi) on multi-task learning running today and I didn't get it to work, probably because it was implemented a long time ago with `transformers==2.11`, `torch==1.2`~ etc and installing older versions still caused errors.\r\nThere is also this [interesting new repo](https://github.com/shahrukhx01/multitask-learning-transformers), which has a cool way of enabling you to save and load a model with two classification heads ([see model here](https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers) and blog post [here](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)). Haven't tried it yet, but it only uses `BertForSequenceClassification` instead of the more general AutoModelForXYZ\r\n\r\n@zphang, would you maybe be up for contributing an updated version of your older notebook with the latest version of `transformers` and `datasets` which runs in today's colabs? I feel like this would be very helpful for the community and if you keep the classes/functions somewhat general, people can easily adapt it to their use cases! 🙏 :) \r\nWould be a great addition to the [HF notebooks](https://huggingface.co/docs/transformers/notebooks).\r\n\r\nIn the medium-term, I agree that it would be great to have more native support for this via the HF libraries. I feels weird that you can neither train the old BERT (trained on two tasks) nor any of the newer models, without some hacks. ",
"@zphang would love to see the newer notebook as suggested by @MoritzLaurer "
] |
https://api.github.com/repos/huggingface/datasets/issues/317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/317/comments | https://api.github.com/repos/huggingface/datasets/issues/317/events | https://github.com/huggingface/datasets/issues/317 | 646,555,384 | MDU6SXNzdWU2NDY1NTUzODQ= | 317 | Adding a dataset with multiple subtasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/erickrf",
"id": 294483,
"login": "erickrf",
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"repos_url": "https://api.github.com/users/erickrf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/erickrf"
} | [] | closed | false | null | [] | null | 1 | "2020-06-26T23:14:19Z" | "2020-10-27T15:36:52Z" | "2020-10-27T15:36:52Z" | NONE | null | null | null | I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation, each of which has different language pairs, and some of the data is reused across subtasks.
For example, in [QE 2019,](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE.
I suppose these datasets could have both their word and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether?
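To make the first option concrete, a features sketch could look like the following (the field names are my assumptions, not an agreed schema):
```python
import nlp

# One possible per-example schema holding both label granularities.
qe_features = nlp.Features({
    "src": nlp.Value("string"),                                # source sentence
    "mt": nlp.Value("string"),                                 # MT output
    "sentence_score": nlp.Value("float32"),                    # sentence-level score (e.g. HTER)
    "word_tags": nlp.features.Sequence(nlp.Value("string")),   # OK/BAD tag per token
})
```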
I read the discussion on #217 but the case of QE seems a lot simpler. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/317/timeline | null | completed | false | [
"For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit different from your case though because each configuration is a dataset by itself (sst2, mnli).\r\nAnother example is `wikipedia` that has one configuration per language."
] |
https://api.github.com/repos/huggingface/datasets/issues/316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/316/comments | https://api.github.com/repos/huggingface/datasets/issues/316/events | https://github.com/huggingface/datasets/pull/316 | 646,366,450 | MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5 | 316 | add AG News dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | 1 | "2020-06-26T16:11:58Z" | "2020-06-30T09:58:08Z" | "2020-06-30T08:31:55Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/316",
"merged_at": "2020-06-30T08:31:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/316"
} | adds support for the AG-News topic classification dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/316/timeline | null | null | true | [
"Thanks @jxmorris12 for adding this adding. \r\nCan you please add a small description of the PR?"
] |
https://api.github.com/repos/huggingface/datasets/issues/315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/315/comments | https://api.github.com/repos/huggingface/datasets/issues/315/events | https://github.com/huggingface/datasets/issues/315 | 645,888,943 | MDU6SXNzdWU2NDU4ODg5NDM= | 315 | [Question] Best way to batch a large dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | 11 | "2020-06-25T22:30:20Z" | "2020-10-27T15:38:17Z" | null | CONTRIBUTOR | null | null | null | I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow:
```python
train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
### Question about this last line ###
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
```
This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia.
So I tried manual batching using `dataset.select()`:
```python
idxs = np.random.randint(len(dataset), size=bsz)
batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])})
tf_batch = tf.constant(batch["ids"], dtype=tf.int64)
```
This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop.
Is there a performant scalable way to lazily load batches of nlp Datasets? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/315/timeline | null | null | false | [
"Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(train_dataset_gen, output_types=output_types)\r\n```\r\n\r\nloads WikiText-2 in 20 ms, and WikiText-103 in 20 ms. It appears to be lazily loading via indexing train_dataset.",
"Yes this is the current best solution. We should probably show it in the tutorial notebook.\r\n\r\nNote that this solution unfortunately doesn't allow to train on TPUs (yet). See #193 ",
"This approach still seems quite slow. When using TFRecords with a similar training loop, I get ~3.0-3.5 it/s on multi-node, multi-GPU training. I notice a pretty severe performance regression when scaling, with observed performance numbers. Since the allreduce step takes less than 100ms/it and I've achieved 80% scaling efficiency up to 64 GPUs, it must be the data pipeline.\r\n\r\n| Nodes | GPUs | Iterations/Second |\r\n| --- | --- | --- |\r\n| 1 | 2 | 2.01 |\r\n| 1 | 8 | 0.81 |\r\n| 2 | 16 | 0.37 |\r\n\r\nHere are performance metrics over 10k steps. The iteration speed appears to follow some sort of caching pattern. I would love to use `nlp` in my project, but a slowdown from 3.0 it/s to 0.3 it/s is too great to stomach.\r\n\r\n<img width=\"1361\" alt=\"Screen Shot 2020-07-02 at 8 29 22 AM\" src=\"https://user-images.githubusercontent.com/4564897/86378156-2f8d3900-bc3e-11ea-918b-c395c3df5377.png\">\r\n",
"An interesting alternative to investigate here would be to use the tf.io library which has some support for Arrow to TF conversion: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset\r\n\r\nThere are quite a few types supported, including lists so if the unsupported columns are dropped then we could maybe have a zero-copy mapping from Arrow to TensorFlow, including tokenized inputs and 1D tensors like the ones we mostly use in NLP: https://github.com/tensorflow/io/blob/322b3170c43ecac5c6af9e39dbd18fd747913e5a/tensorflow_io/arrow/python/ops/arrow_dataset_ops.py#L44-L72\r\n\r\nHere is an introduction on Arrow to TF using tf.io: https://medium.com/tensorflow/tensorflow-with-apache-arrow-datasets-cdbcfe80a59f",
"Interesting. There's no support for strings, but it does enable int and floats so that would work for tokenized inputs. \r\n\r\nArrowStreamDataset requires loading from a \"record batch iterator\", which can be instantiated from in-memory arrays as described here: https://arrow.apache.org/docs/python/ipc.html. \r\n\r\nBut the nlp.Dataset stores its data as a `pyarrow.lib.Table`, and the underlying features are `pyarrow.lib.ChunkedArray`. I can't find any documentation about lazily creating a record batch iterator from a ChunkedArray or a Table. Have you had any success?\r\n\r\nI can't find [any uses](https://grep.app/search?q=ArrowDataset&filter[lang][0]=Python) of tfio.arrow.ArrowDataset on GitHub.",
"You can use `to_batches` maybe?\r\nhttps://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_batches",
"Also note that since #322 it is now possible to do\r\n```python\r\nids = [1, 10, 42, 100]\r\nbatch = dataset[ids]\r\n```\r\nFrom my experience it is quite fast but it can take lots of memory for large batches (haven't played that much with it).\r\nLet me know if you think there could be a better way to implement it. (current code is [here](https://github.com/huggingface/nlp/blob/78628649962671b4aaa31a6b24e7275533416845/src/nlp/arrow_dataset.py#L463))",
"Thanks @lhoestq! That format is much better to work with.\r\n\r\nI put together a benchmarking script. This doesn't measure the CPU-to-GPU efficiency, nor how it scales with multi-GPU multi-node training where many processes are making the same demands on the same dataset. But it does show some interesting results:\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\nimport tensorflow as tf\r\nimport time\r\n\r\ndset = nlp.load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\")\r\ndset = dset.filter(lambda ex: len(ex[\"text\"]) > 0)\r\nbsz = 1024\r\nn_batches = 100\r\n\r\ndef single_item_gen():\r\n for i in range(len(dset)):\r\n yield dset[i]\r\n\r\ndef sequential_batch_gen():\r\n for i in range(0, len(dset), bsz):\r\n yield dset[i:i+bsz]\r\n\r\ndef random_batch_gen():\r\n for i in range(len(dset)):\r\n indices = list(np.random.randint(len(dset), size=(bsz,)))\r\n yield dset[indices]\r\n\r\noutput_types = {\"text\": tf.string}\r\nsingle_item = tf.data.Dataset.from_generator(single_item_gen, output_types=output_types).batch(bsz)\r\ninterleaved = tf.data.Dataset.range(10).interleave(\r\n lambda idx: tf.data.Dataset.from_generator(single_item_gen, output_types=output_types),\r\n cycle_length=10,\r\n)\r\nsequential_batch = tf.data.Dataset.from_generator(sequential_batch_gen, output_types=output_types)\r\nrandom_batch = tf.data.Dataset.from_generator(random_batch_gen, output_types=output_types)\r\n\r\ndef iterate(tf_dset):\r\n start = time.perf_counter()\r\n for i, batch in enumerate(tf_dset.take(n_batches)):\r\n pass\r\n elapsed = time.perf_counter() - start\r\n print(f\"{tf_dset} took {elapsed:.3f} secs\")\r\n\r\niterate(single_item)\r\niterate(interleaved)\r\niterate(sequential_batch)\r\niterate(random_batch)\r\n```\r\n\r\nResults:\r\n```\r\n<BatchDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 23.005 secs\r\n<InterleaveDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.135 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.074 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.550 secs\r\n```\r\n\r\n- Batching a generator which fetches a single item is terrible.\r\n- Interleaving performs well on a single process, but doesn't scale well to multi-GPU training. I believe the bottleneck here is in Arrow dataset locking or something similar. The numbers from the table above are with interleaving.\r\n- The sequential access dominates the random access (7x faster). Is there any way to bring random access times closer to sequential access? Maybe re-indexing the dataset after shuffling each pass over the data.",
"Hey @jarednielsen \r\n\r\nThanks for this very interesting analysis!! IMHO to read text data one should use `tf.data.TextLineDataset`. It would be interesting to compare what you have done with simply load with a `TextLineDataset` and see if there is a difference.\r\n\r\nA good example can be found here https://www.tensorflow.org/tutorials/load_data/text",
"Thanks! I'm not actually loading in raw text data, that was just the synthetic data I created for this benchmark. A more realistic use case would be a dataset of tokenized examples, which would be a dict of lists of integers. TensorFlow's TextLineDataset greedily loads the dataset into the graph itself, which can lead to out-of-memory errors - one of the main reason I'm so drawn to the `nlp` library is its zero-copy no-RAM approach to dataset loading and mapping. \r\n\r\nIt's quite helpful for running a preprocessing pipeline - a sample ELECTRA pipeline I've built is here: https://github.com/jarednielsen/deep-learning-models/blob/nlp/models/nlp/common/preprocess.py.",
"Sorry, I think I badly expressed myself, my bad. What I suggested is to compare with the usual loading textual data in pure TF with `TextLineDataset` with `nlp`. I know it is not recommended with very large datasets to use it, but I was curious to see how it behaves compared to a processing with `nlp` on smaller datasets.\r\n\r\nBTW your script looks very interesting, thanks for sharing!!"
] |
https://api.github.com/repos/huggingface/datasets/issues/314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/314/comments | https://api.github.com/repos/huggingface/datasets/issues/314/events | https://github.com/huggingface/datasets/pull/314 | 645,461,174 | MDExOlB1bGxSZXF1ZXN0NDM5OTM4MTMw | 314 | Fixed singlular very minor spelling error | {
"avatar_url": "https://avatars.githubusercontent.com/u/40696362?v=4",
"events_url": "https://api.github.com/users/SchizoidBat/events{/privacy}",
"followers_url": "https://api.github.com/users/SchizoidBat/followers",
"following_url": "https://api.github.com/users/SchizoidBat/following{/other_user}",
"gists_url": "https://api.github.com/users/SchizoidBat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SchizoidBat",
"id": 40696362,
"login": "SchizoidBat",
"node_id": "MDQ6VXNlcjQwNjk2MzYy",
"organizations_url": "https://api.github.com/users/SchizoidBat/orgs",
"received_events_url": "https://api.github.com/users/SchizoidBat/received_events",
"repos_url": "https://api.github.com/users/SchizoidBat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SchizoidBat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SchizoidBat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SchizoidBat"
} | [] | closed | false | null | [] | null | 1 | "2020-06-25T10:45:59Z" | "2020-06-26T08:46:41Z" | "2020-06-25T12:43:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/314",
"merged_at": "2020-06-25T12:43:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/314"
} | An instance of "independantly" was changed to "independently". That's all. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/314/timeline | null | null | true | [
"Thank you BatJeti! The storm-joker, aka the typo, finally got caught!"
] |
https://api.github.com/repos/huggingface/datasets/issues/313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/313/comments | https://api.github.com/repos/huggingface/datasets/issues/313/events | https://github.com/huggingface/datasets/pull/313 | 645,390,088 | MDExOlB1bGxSZXF1ZXN0NDM5ODc4MDg5 | 313 | Add MWSC | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 1 | "2020-06-25T09:22:02Z" | "2020-06-30T08:28:11Z" | "2020-06-30T08:28:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/313",
"merged_at": "2020-06-30T08:28:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/313"
} | Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset, which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose.
Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877).
There are a few (possibly overly opinionated) design choices I made:
- I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855)
- I split out each example into the 2 alternatives. Originally the data uses the format:
```
The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
Who [feared/advocated] violence?
councilmen/demonstrators
```
I split into the 2 variants:
```
The city councilmen refused the demonstrators a permit because they feared violence.
Who feared violence?
councilmen/demonstrators
The city councilmen refused the demonstrators a permit because they advocated violence.
Who advocated violence?
councilmen/demonstrators
```
I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850) them). You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way?
- I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence?
-- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options":["councilmen","demonstrators"]` This should be an easy thing to change using `map` if needed by a specific application.
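For example, a minimal `map` sketch (the dataset id and the `question` field name are assumptions here; `options` is the key described above):
```python
import nlp

mwsc = nlp.load_dataset("mwsc", split="train")  # dataset id assumed

def add_decanlp_framing(example):
    # "Who feared violence?" -> "Who feared violence? -- councilmen or demonstrators?"
    example["question"] = example["question"] + " -- " + " or ".join(example["options"]) + "?"
    return example

mwsc = mwsc.map(add_decanlp_framing)
```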
Dataset is working as-is but if anyone has any thoughts/preferences on the design decisions here I'm definitely open to different choices. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/313/timeline | null | null | true | [
"Looks good to me"
] |
https://api.github.com/repos/huggingface/datasets/issues/312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/312/comments | https://api.github.com/repos/huggingface/datasets/issues/312/events | https://github.com/huggingface/datasets/issues/312 | 645,025,561 | MDU6SXNzdWU2NDUwMjU1NjE= | 312 | [Feature request] Add `shard()` method to dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 2 | "2020-06-24T22:48:33Z" | "2020-07-06T12:35:36Z" | "2020-07-06T12:35:36Z" | CONTRIBUTOR | null | null | null | Currently, to shard a dataset into 10 pieces on different ranks, you can run
```python
rank = 3 # for example
size = 10
dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]")
```
However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this?
```python
rank = 3
size = 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size)
```
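In the meantime, the closest manual equivalent with the existing API is probably to compute per-rank indices and pass them to `select()` — a rough, untested sketch:
```python
import nlp

rank, size = 3, 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# strided sharding: keep every `size`-th example starting at `rank`
shard_indices = list(range(rank, len(dataset), size))
shard = dataset.select(shard_indices)
```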
TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/312/timeline | null | completed | false | [
"Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?",
"Thanks for the pointer to those functions! It's still a little more verbose since you have to manually calculate which ids each rank would keep, but definitely works.\r\n\r\nMy use case is multi-node, multi-GPU training and avoiding global batches of duplicate elements. I'm using horovod. You can shuffle indices, or set random seeds, but explicitly sharding the dataset up front is the safest and clearest way I've found to do so."
] |
https://api.github.com/repos/huggingface/datasets/issues/311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/311/comments | https://api.github.com/repos/huggingface/datasets/issues/311/events | https://github.com/huggingface/datasets/pull/311 | 645,013,131 | MDExOlB1bGxSZXF1ZXN0NDM5NTQ3OTg0 | 311 | Add qa_zre | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 0 | "2020-06-24T22:17:22Z" | "2020-06-29T16:37:38Z" | "2020-06-29T16:37:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/311",
"merged_at": "2020-06-29T16:37:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/311"
} | Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/).
A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/311/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/310/comments | https://api.github.com/repos/huggingface/datasets/issues/310/events | https://github.com/huggingface/datasets/pull/310 | 644,806,720 | MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5 | 310 | add wikisql | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 1 | "2020-06-24T18:00:35Z" | "2020-06-25T12:32:25Z" | "2020-06-25T12:32:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/310",
"merged_at": "2020-06-25T12:32:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/310"
} | Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset.
Interesting things to note:
- Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications.
- `conds` was originally a tuple but is converted to a dictionary to support differing types.
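Rough usage sketch (the exact field names, e.g. `sql`/`human_readable`, are assumptions based on the description above):
```python
import nlp

wikisql = nlp.load_dataset("wikisql", split="train")
example = wikisql[0]
print(example["question"])
print(example["sql"]["human_readable"])  # output of _convert_to_human_readable
```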
Would be nice to add the logical_form metrics too at some point. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/310/timeline | null | null | true | [
"That's great work @ghomasHudson !"
] |
https://api.github.com/repos/huggingface/datasets/issues/309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/309/comments | https://api.github.com/repos/huggingface/datasets/issues/309/events | https://github.com/huggingface/datasets/pull/309 | 644,783,822 | MDExOlB1bGxSZXF1ZXN0NDM5MzQ1NzYz | 309 | Add narrative qa | {
"avatar_url": "https://avatars.githubusercontent.com/u/8019486?v=4",
"events_url": "https://api.github.com/users/Varal7/events{/privacy}",
"followers_url": "https://api.github.com/users/Varal7/followers",
"following_url": "https://api.github.com/users/Varal7/following{/other_user}",
"gists_url": "https://api.github.com/users/Varal7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Varal7",
"id": 8019486,
"login": "Varal7",
"node_id": "MDQ6VXNlcjgwMTk0ODY=",
"organizations_url": "https://api.github.com/users/Varal7/orgs",
"received_events_url": "https://api.github.com/users/Varal7/received_events",
"repos_url": "https://api.github.com/users/Varal7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Varal7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varal7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Varal7"
} | [] | closed | false | null | [] | null | 11 | "2020-06-24T17:26:18Z" | "2020-09-03T09:02:10Z" | "2020-09-03T09:02:09Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/309",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/309"
} | Test cases for dummy data don't pass
Only contains data for summaries (not whole story) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/309/timeline | null | null | true | [
"Does it make sense to download the full stories? I remember attempting to implement this dataset a while ago and ended up with something like:\r\n```python\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n dl_dir = dl_manager.download_and_extract(_DOWNLOAD_URL)\r\n data_dir = os.path.join(dl_dir, \"narrativeqa-master\")\r\n\r\n urls = {\"test\":{}, \"train\": {},\"valid\":{}}\r\n with open(os.path.join(data_dir,\"documents.csv\")) as f_in:\r\n csv_reader = csv.reader(f_in)\r\n next(csv_reader) # discard header row\r\n for i,row in enumerate(csv_reader):\r\n if i > 1572:\r\n break\r\n if row != []:\r\n urls[row[1]][row[0]] = row[3]\r\n\r\n url_files = {}\r\n for key in urls.keys():\r\n url_files[key] = dl_manager.download_and_extract(urls[key])\r\n\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n gen_kwargs={\r\n \"data_dir\":data_dir,\r\n \"split\":\"train\",\r\n \"doc_id_to_path\":url_files[\"train\"]\r\n }\r\n ),\r\n ....\r\n```\r\nIt does end up cluttering your huggingface cache dir though.",
"Also since there doesn't seem to be any meaning in the order of answer_1 and answer_2, it might make sense to combine them (see [squad.py](https://github.com/huggingface/nlp/blob/8b0ffc85e4e52ae1f18d31be99b6c70b82c991ca/datasets/squad/squad.py#L86-L88)):\r\n```python\r\n\"answers\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokenized\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n})\r\n```\r\n(the tokenized features should also probably be lists of strings not just strings - see [natural_questions.py](https://github.com/huggingface/nlp/blob/4cd34287300a1135ce7b22f6dd209ca305c71b3a/datasets/natural_questions/natural_questions.py#L83))\r\n\r\nAgain, this is a personal preference thing, but it might be useful to combine the document-related features:\r\n```python\r\n{\r\n \"document\": {\r\n \"id\": nlp.Value(\"string\"),\r\n \"kind\": nlp.Value(\"string\"),\r\n \"url\": nlp.Value(\"string\"),\r\n \"file_size\": nlp.Value(\"int32\"),\r\n \"word_count\": nlp.Value(\"int32\"),\r\n \"start\": nlp.Value(\"string\"),\r\n \"end\": nlp.Value(\"string\"),\r\n \"wiki_url\": nlp.Value(\"string\"),\r\n \"wiki_title\": nlp.Value(\"string\"),\r\n \"summary\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n }),\r\n \"text\": nlp.Value(\"string\"),\r\n },\r\n \"question\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n }),\r\n \"answers\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n })\r\n}\r\n```",
"Did you manage to fix the dummy data @Varal7 ?",
"@lhoestq do you think it's acceptable for the `dl_manager` to go grab all the individual stories from project gutenburg? I've got a working version of that but it does clutter up your huggingface cache somewhat.\r\n\r\nThe real value (and original purpose) of this dataset is doing question answering on the full text.",
"> @lhoestq do you think it's acceptable for the `dl_manager` to go grab all the individual stories from project gutenburg? I've got a working version of that but it does clutter up your huggingface cache somewhat.\r\n> \r\n> The real value (and original purpose) of this dataset is doing question answering on the full text.\r\n\r\nWhat's the problem exactly with the cache ?",
"Nothing, just that because each story is a separate download it gets a bit messy as all 1573 files are under `~/.cache/hugginface/datasets` rather than organized under a subdir.\r\n\r\nProbably doesn't matter to the end user though.",
"Yea I agree it's a mess. I just created #393 to make things easier.",
"I got the PR merged to have a cleaner the cache directory (everything is downloaded inside the 'downloads' sub-directory).\r\nFeel free to download all the stories then @ghomasHudson @Varal7 x)\r\nIf you have the possibility of downloading a compressed file with most of the stories at once it would be better though.",
"Looks good @lhoestq . The problem I'm having at the moment is that stories from project Gutenberg occasionally fail. All books are out of copyright so we should be able to host them. \r\n\r\nHere's a zip file of the full text if we have anywhere to put them: https://drive.google.com/file/d/17jOR7NqvzDwSlPXrlHaYV-PGI8JG-KY5/view?usp=sharing\r\n",
"I put the zip file here @ghomasHudson \r\nhttps://storage.googleapis.com/huggingface-nlp/datasets/narrative_qa/narrativeqa_full_text.zip\r\n\r\nSorry for the delay",
"Closing in favor of #499"
] |
https://api.github.com/repos/huggingface/datasets/issues/308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/308/comments | https://api.github.com/repos/huggingface/datasets/issues/308/events | https://github.com/huggingface/datasets/pull/308 | 644,195,251 | MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy | 308 | Specify utf-8 encoding for MRPC files | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 0 | "2020-06-23T22:44:36Z" | "2020-06-25T12:52:21Z" | "2020-06-25T12:16:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/308.diff",
"html_url": "https://github.com/huggingface/datasets/pull/308",
"merged_at": "2020-06-25T12:16:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/308.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/308"
} | Fixes #307, again probably a Windows-related issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/308/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/308/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/307/comments | https://api.github.com/repos/huggingface/datasets/issues/307/events | https://github.com/huggingface/datasets/issues/307 | 644,187,262 | MDU6SXNzdWU2NDQxODcyNjI= | 307 | Specify encoding for MRPC | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 0 | "2020-06-23T22:24:49Z" | "2020-06-25T12:16:09Z" | "2020-06-25T12:16:09Z" | CONTRIBUTOR | null | null | null | Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:
```python
dataset = nlp.load_dataset('glue', 'mrpc')
```
```python
Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0...
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname)
369 try:
--> 370 yield tmp_dir
371 if os.path.isdir(dirname):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
--> 431 self._download_and_prepare(
432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator)
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files)
514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split)
--> 515 for example in examples:
516 yield example["idx"], example
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split)
576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
--> 577 for n, row in enumerate(reader):
578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids
~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
110 self.fieldnames
--> 111 row = next(self.reader)
112 self.line_num = self.reader.line_num
~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined>
```
The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE.
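A minimal sketch of the kind of change needed (the real call site is the MRPC reader in `glue.py`; the helper below is only illustrative):
```python
import csv

def read_mrpc_rows(path):
    # an explicit encoding avoids Windows falling back to cp1252
    with open(path, encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            yield row
```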
I am going to propose a new PR :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/307/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/306/comments | https://api.github.com/repos/huggingface/datasets/issues/306/events | https://github.com/huggingface/datasets/pull/306 | 644,176,078 | MDExOlB1bGxSZXF1ZXN0NDM4ODQ2MTI3 | 306 | add pg19 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4",
"events_url": "https://api.github.com/users/lucidrains/events{/privacy}",
"followers_url": "https://api.github.com/users/lucidrains/followers",
"following_url": "https://api.github.com/users/lucidrains/following{/other_user}",
"gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucidrains",
"id": 108653,
"login": "lucidrains",
"node_id": "MDQ6VXNlcjEwODY1Mw==",
"organizations_url": "https://api.github.com/users/lucidrains/orgs",
"received_events_url": "https://api.github.com/users/lucidrains/received_events",
"repos_url": "https://api.github.com/users/lucidrains/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucidrains"
} | [] | closed | false | null | [] | null | 12 | "2020-06-23T22:03:52Z" | "2020-07-06T07:55:59Z" | "2020-07-06T07:55:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/306.diff",
"html_url": "https://github.com/huggingface/datasets/pull/306",
"merged_at": "2020-07-06T07:55:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/306.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/306"
} | https://github.com/huggingface/nlp/issues/274
Add functioning PG19 dataset with dummy data
`cos_e.py` was just auto-linted by `make style` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/306/timeline | null | null | true | [
"@lucidrains - Thanks a lot for making the PR - PG19 is a super important dataset! Thanks for making it. Many people are asking for PG-19, so it would be great to have that in the library as soon as possible @thomwolf .",
"@mariamabarham yup! around 11GB!",
"I'm looking forward to our first deep learning written novel already lol. It's definitely happening",
"Good to merge IMO.",
"Oh I just noticed but as we changed the urls to download the files, we have to update `dataset_infos.json`.\r\nCould you re-rurn `nlp-cli test ./datasets/pg19 --save_infos` ?",
"@lhoestq on it!",
"should be good!",
"@lhoestq - I think it's good to merge no?",
"`dataset_infos.json` is still not up to date with the new urls (we can see that there are urls like `gs://deepmind-gutenberg/train/*` instead of `https://storage.googleapis.com/deepmind-gutenberg/train/*` in the json file)\r\n\r\nCan you check that you re-ran the command to update the json file, and that you pushed the changes @lucidrains ?",
"@lhoestq ohhh, I made the change in this commit https://github.com/lucidrains/nlp/commit/f3e23d823ad9942031be80b7c4e4212c592cd90c , that's interesting that the pull request didn't pick it up. maybe it's because I did it on another machine, let me check and get back to you!",
"@lhoestq wrong branch 😅 thanks for catching! ",
"Awesome thanks 🎉"
] |
https://api.github.com/repos/huggingface/datasets/issues/305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/305/comments | https://api.github.com/repos/huggingface/datasets/issues/305/events | https://github.com/huggingface/datasets/issues/305 | 644,148,149 | MDU6SXNzdWU2NDQxNDgxNDk= | 305 | Importing downloaded package repository fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | 0 | "2020-06-23T21:09:05Z" | "2020-07-30T16:44:23Z" | "2020-07-30T16:44:23Z" | MEMBER | null | null | null | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently however, the code seems to have trouble with imports within the package. For example:
```
import nlp
coval = nlp.load_metric('coval')
```
yields:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module>
from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module>
from conll import mention
ModuleNotFoundError: No module named 'conll'
```
Not sure what the fix would be there. | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/305/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/304/comments | https://api.github.com/repos/huggingface/datasets/issues/304/events | https://github.com/huggingface/datasets/issues/304 | 644,091,970 | MDU6SXNzdWU2NDQwOTE5NzA= | 304 | Problem while printing doc string when instantiating multiple metrics. | {
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codehunk628",
"id": 51091425,
"login": "codehunk628",
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codehunk628"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | 0 | "2020-06-23T19:32:05Z" | "2020-07-22T09:50:58Z" | "2020-07-22T09:50:58Z" | CONTRIBUTOR | null | null | null | When I load more than one metric and try to print doc string of a particular metric,. It shows the doc strings of all imported metric one after the other which looks quite confusing and clumsy.
Attached is a [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook for clarification of the problem. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/304/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/303/comments | https://api.github.com/repos/huggingface/datasets/issues/303/events | https://github.com/huggingface/datasets/pull/303 | 643,912,464 | MDExOlB1bGxSZXF1ZXN0NDM4NjI3Nzcw | 303 | allow to move files across file systems | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-23T14:56:08Z" | "2020-06-23T15:08:44Z" | "2020-06-23T15:08:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/303.diff",
"html_url": "https://github.com/huggingface/datasets/pull/303",
"merged_at": "2020-06-23T15:08:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/303.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/303"
} | Users are allowed to use the `cache_dir` that they want.
Therefore it can happen that we try to move files across filesystems.
We were using `os.rename`, which doesn't allow that, so I changed some of those calls to `shutil.move`.
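Illustrative sketch of the difference (paths are hypothetical; the real call sites are inside `nlp`, e.g. in `arrow_reader.py`):
```python
import shutil

src = "/tmp/dataset_info.json"                   # hypothetical: file on the default drive
dst = "/mnt/big_drive/cache/dataset_info.json"   # hypothetical: user-chosen cache_dir on another drive

# os.rename(src, dst)  # raises OSError ("Invalid cross-device link") across filesystems
shutil.move(src, dst)   # copies then deletes when needed, so it works across filesystems
```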
This should fix #301 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/303/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/302/comments | https://api.github.com/repos/huggingface/datasets/issues/302/events | https://github.com/huggingface/datasets/issues/302 | 643,910,418 | MDU6SXNzdWU2NDM5MTA0MTg= | 302 | Question - Sign Language Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | 3 | "2020-06-23T14:53:40Z" | "2020-11-25T11:25:33Z" | "2020-11-25T11:25:33Z" | CONTRIBUTOR | null | null | null | An emerging field in NLP is SLP - sign language processing.
I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable.
The metrics for sign language to text translation are the same.
So, what do you think about (me, or others) adding datasets here?
An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/)
For every item in the dataset, the data object includes:
1. video_path - path to mp4 file
2. pose_path - a path to `.pose` file with human pose landmarks
3. openpose_path - a path to a `.json` file with human pose landmarks
4. gloss - string
5. text - string
6. video_metadata - height, width, frames, framerate
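Based on the list above, a possible `nlp.Features` layout might look like this (purely a sketch — no such dataset script exists yet, and the field names and metadata types are guesses):
```python
import nlp

features = nlp.Features({
    "video_path": nlp.Value("string"),
    "pose_path": nlp.Value("string"),
    "openpose_path": nlp.Value("string"),
    "gloss": nlp.Value("string"),
    "text": nlp.Value("string"),
    "video_metadata": {
        "height": nlp.Value("int32"),
        "width": nlp.Value("int32"),
        "frames": nlp.Value("int32"),
        "framerate": nlp.Value("float32"),
    },
})
```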
------
To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/302/timeline | null | completed | false | [
"Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"plans\" addon.\r\n\r\nSame for sign language - if there is a dataset of videos, one addon can be to run OpenPose, another to run ARKit4 pose estimation, and another to run PoseNet, or even just a video embedding addon. (which are expensive to run individually for everyone who wants to use these data)\r\n\r\nThis is something I dabbled with my own implementation to a [research datasets library](https://github.com/AmitMY/meta-scholar/) and I love to get the discussion going on these topics.",
"This is a really cool idea !\r\nThe example for data objects you gave for the RWTH-PHOENIX-Weather 2014 T dataset can totally fit inside the library.\r\n\r\nFor your point about formats like `ilex`, `eaf`, or `srt`, it is possible to use any library in your dataset script.\r\nHowever most user probably won't need these libraries, as most datasets don't need them, and therefore it's unlikely that we will have them in the minimum requirements to use `nlp` (we want to keep it as light-weight as possible). If a user wants to load your dataset and doesn't have the libraries you need, an error is raised asking the user to install them.\r\n\r\nMore generally, we plan to have something like a `requirements.txt` per dataset. This could also be a place for addons as you said. What do you think ?",
"Thanks, Quentin, I think a `requirements.txt` per dataset will be a good thing.\r\nI will work on adding this dataset next week, and once we sort all of the kinks, I'll add more."
] |
https://api.github.com/repos/huggingface/datasets/issues/301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/301/comments | https://api.github.com/repos/huggingface/datasets/issues/301/events | https://github.com/huggingface/datasets/issues/301 | 643,763,525 | MDU6SXNzdWU2NDM3NjM1MjU= | 301 | Setting cache_dir gives error on wikipedia download | {
"avatar_url": "https://avatars.githubusercontent.com/u/33862536?v=4",
"events_url": "https://api.github.com/users/hallvagi/events{/privacy}",
"followers_url": "https://api.github.com/users/hallvagi/followers",
"following_url": "https://api.github.com/users/hallvagi/following{/other_user}",
"gists_url": "https://api.github.com/users/hallvagi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hallvagi",
"id": 33862536,
"login": "hallvagi",
"node_id": "MDQ6VXNlcjMzODYyNTM2",
"organizations_url": "https://api.github.com/users/hallvagi/orgs",
"received_events_url": "https://api.github.com/users/hallvagi/received_events",
"repos_url": "https://api.github.com/users/hallvagi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hallvagi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hallvagi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hallvagi"
} | [] | closed | false | null | [] | null | 2 | "2020-06-23T11:31:44Z" | "2020-06-24T07:05:07Z" | "2020-06-24T07:05:07Z" | NONE | null | null | null | First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error:
```
nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path)
```
```
OSError Traceback (most recent call last)
<ipython-input-2-23551344d7bc> in <module>
1 import nlp
----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
386 reader = ArrowReader(self._cache_dir, self.info)
--> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True))
388 downloaded_info = DatasetInfo.from_directory(self._cache_dir)
389 self.info.update(downloaded_info)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir)
231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json")
232 downloaded_dataset_info = cached_path(remote_dataset_info)
--> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json"))
234 if self._info is not None:
235 self._info.update(self._info.from_directory(cache_dir))
OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/301/timeline | null | completed | false | [
"Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?",
"Now it works, thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/300/comments | https://api.github.com/repos/huggingface/datasets/issues/300/events | https://github.com/huggingface/datasets/pull/300 | 643,688,304 | MDExOlB1bGxSZXF1ZXN0NDM4NDQ4Mjk1 | 300 | Fix bertscore references | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-23T09:38:59Z" | "2020-06-23T14:47:38Z" | "2020-06-23T14:47:37Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/300",
"merged_at": "2020-06-23T14:47:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/300"
} | I added some type checking for metrics. There was an issue where a metric could interpret a string as a list. A `ValueError` is raised if a string is given instead of a list.
Moreover, I added support for both strings and lists of strings for `references` in `bertscore`, as is the case in the original code.
Both ways work:
```
import nlp
scorer = nlp.load_metric("bertscore")
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
scorer.add(lp, [lg])
score = scorer.compute(lang="en")
```
```
import nlp
scorer = nlp.load_metric("bertscore")
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
scorer.add(lp, lg)
score = scorer.compute(lang="en")
```
This should fix #295 and #238 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/300/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/299/comments | https://api.github.com/repos/huggingface/datasets/issues/299/events | https://github.com/huggingface/datasets/pull/299 | 643,611,557 | MDExOlB1bGxSZXF1ZXN0NDM4Mzg0NDgw | 299 | remove some print in snli file | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 1 | "2020-06-23T07:46:06Z" | "2020-06-23T08:10:46Z" | "2020-06-23T08:10:44Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/299.diff",
"html_url": "https://github.com/huggingface/datasets/pull/299",
"merged_at": "2020-06-23T08:10:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/299.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/299"
} | This PR removes unwanted `print` statements in some files such as `snli.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/299/timeline | null | null | true | [
"I guess you can just rebase from master to fix the CI"
] |
https://api.github.com/repos/huggingface/datasets/issues/298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/298/comments | https://api.github.com/repos/huggingface/datasets/issues/298/events | https://github.com/huggingface/datasets/pull/298 | 643,603,804 | MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4 | 298 | Add searchable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 8 | "2020-06-23T07:33:03Z" | "2020-06-26T07:50:44Z" | "2020-06-26T07:50:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/298.diff",
"html_url": "https://github.com/huggingface/datasets/pull/298",
"merged_at": "2020-06-26T07:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/298.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/298"
} | # Better support for Numpy format + Add Indexed Datasets
I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib.
## Better support for Numpy format
New features:
- New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up) using Pandas.
- Allow outputting Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays.
Pandas offers fast zero-copy Numpy array conversion from Arrow structures.
Using it we can speed up the reading of memory-mapped Numpy arrays stored in Arrow format.
With these changes you can easily compute embeddings of texts using `.map()`. For example:
```python
def embed(text):
    tokenized_example = tokenizer.encode(text, return_tensors="pt")
    embeddings = bert_encoder(tokenized_example).numpy()
    return embeddings

dset_with_embeddings = dset.map(lambda example: {"embeddings": embed(example["text"])})
```
And then reading the embeddings from the arrow format is very fast.
PS1: Note that right now only 1d arrays are supported.
PS2: It seems possible to do without pandas but it will require more _trickery_.
PS3: I did a simple benchmark with google colab that you can view here:
https://colab.research.google.com/drive/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing
## Add Indexed Datasets
For many retrieval tasks it is convenient to index a dataset to be able to run fast queries.
For example for models like DPR, REALM, RAG etc. that are models for Open Domain QA, the retrieval step is very important.
Therefore I added two ways to add an index to a column of a dataset:
1) You can index it using a Dense Index like Faiss. It is used to index vectors.
Faiss is a library for efficient similarity search and clustering of dense vectors.
It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM.
2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity.
Example of usage:
```python
ds = nlp.load_dataset('crime_and_punish', split='train')
ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])}) # `embed` outputs a `np.array`
ds_with_embeddings.add_vector_index(column='embeddings')
scores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10)
```
```python
ds = nlp.load_dataset('crime_and_punish', split='train')
es_client = elasticsearch.Elasticsearch()
ds.add_text_index(column='line', es_client=es_client, index_name="my_es_index")
scores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10)
```
PS4: Faiss allows specifying many options for the [index](https://github.com/facebookresearch/faiss/wiki/The-index-factory) and for [GPU settings](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU). I made sure that the user has full control over those settings.
## Tests
I added tests for Faiss, Elasticsearch and indexed datasets.
I had to edit the CI config because all the test scripts were not being run by CircleCI.
------------------
I'd be really happy to have some feedback :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/298/timeline | null | null | true | [
"Looks very cool! Only looked at it superficially though",
"Alright I think I've checked all your comments, thanks :)\r\n\r\nMoreover I just added a way to serialize faiss indexes.\r\nThis is important because for big datasets the index construction can take some time.\r\n\r\nExamples:\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.get_index('embeddings').save('my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nfaiss_index = nlp.search.FaissIndex.load('my_index.faiss')\r\nds.add_faiss_index('embeddings', faiss_index=faiss_index)\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```\r\n\r\nLet me know what you think",
"Nice!\r\n\r\nHere are a few comments:\r\n\r\nI think it would be good to separate (1) the name of the column we use for indexing and (2) the name of the index itself, at least in our head. As I understand it, once the index is created, the column we used to create it is irrelevant so the column name will only be relevant in the `add_faiss_index` and we should be able to supply a different index name, e.g. `my_faiss_index`. When we reload an index, we don't really care about the column that was used to create it, right? so it's maybe better to have an `index_name` (which default to the column name for a simple user experience but it can also be something else and this should be clear in our head when we define the API).\r\n\r\nI'm wondering if we should not have a triple of methods for each retrieval engine: `add_xxx_index`, `save_xxx_index` and `load_xxx_index` when `xxx` can be `faiss` or `elasticsearch`. I'm not a fan of exposing `nlp.search.FaissIndex` unless you think there is a strong reason to have the user learn this abstraction.\r\n\r\nLast but not least, I think we should already think about hosting index on our S3. I would maybe go for something like this: host the index serialized with the cached dataset on user-provided namespaces:\r\n```python\r\nwiki_indexed = load_dataset('thom/wiki_indexed_with_dpr_faiss')\r\n```",
"I agree, I just changed to using `index_name` and having add/save/load methods",
"To summarize:\r\n\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nds.load_faiss_index('embeddings', 'my_index.faiss')\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```",
"Good to me. I understand that for now there is no check that the index matches the dataset on loading.\r\nMaybe just add a basic test on the number of examples?",
"Ok I think this one is ready now",
"Looks like the CI is having troubles to pass because of `tests/test_dataset_common.py::AWSDatasetTest::test_builder_configs_{<insert_rando_dataset_name_here>}`, `requests.exceptions.ConnectionError` :/"
] |
https://api.github.com/repos/huggingface/datasets/issues/297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/297/comments | https://api.github.com/repos/huggingface/datasets/issues/297/events | https://github.com/huggingface/datasets/issues/297 | 643,444,625 | MDU6SXNzdWU2NDM0NDQ2MjU= | 297 | Error in Demo for Specific Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4",
"events_url": "https://api.github.com/users/s-jse/events{/privacy}",
"followers_url": "https://api.github.com/users/s-jse/followers",
"following_url": "https://api.github.com/users/s-jse/following{/other_user}",
"gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/s-jse",
"id": 60150701,
"login": "s-jse",
"node_id": "MDQ6VXNlcjYwMTUwNzAx",
"organizations_url": "https://api.github.com/users/s-jse/orgs",
"received_events_url": "https://api.github.com/users/s-jse/received_events",
"repos_url": "https://api.github.com/users/s-jse/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-jse/subscriptions",
"type": "User",
"url": "https://api.github.com/users/s-jse"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 3 | "2020-06-23T00:38:42Z" | "2020-07-17T17:43:06Z" | "2020-07-17T17:43:06Z" | NONE | null | null | null | Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following.
![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/297/timeline | null | completed | false | [
"Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually have the resources to process NQ right now so we'll have to wait until we have a version that we've already processed on our google storage (that's what we've done for wikipedia for example).\r\n\r\nSecond, datasets like `newsroom` require manual downloads as we're not allowed to redistribute the data ourselves (if I'm not wrong). An error message should be displayed saying that we're not allowed to show the dataset.\r\n\r\nI can fix the first issue with the imports but for the second one I think we'll have to see with @srush to show a message for datasets that require manual downloads (it can be checked whether a dataset requires manual downloads if `dataset_builder_instance.manual_download_instructions is not None`).\r\n\r\n",
"I added apache-beam to the viewer. We can think about how to add newsroom. ",
"We don't plan to host the source files of newsroom ourselves for now.\r\nYou can still get the dataset if you follow the download instructions given by `dataset = load_dataset('newsroom')` though.\r\nThe viewer also shows the instructions now.\r\n\r\nClosing this one. If you have other questions, feel free to re-open :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/296/comments | https://api.github.com/repos/huggingface/datasets/issues/296/events | https://github.com/huggingface/datasets/issues/296 | 643,423,717 | MDU6SXNzdWU2NDM0MjM3MTc= | 296 | snli -1 labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | 4 | "2020-06-22T23:33:30Z" | "2020-06-23T14:41:59Z" | "2020-06-23T14:41:58Z" | CONTRIBUTOR | null | null | null | I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels?
```
import nlp
from collections import Counter
data = nlp.load_dataset('snli')['train']
print(Counter(data['label']))
Counter({0: 183416, 2: 183187, 1: 182764, -1: 785})
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/296/timeline | null | completed | false | [
"@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ",
"Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?",
"Yes the original dataset is missing some labels maybe @sleepinyourhat , @gangeli can correct me if I'm wrong \r\nFor my personal opinion at least if you want your model to learn to predict no answer (-1) you can leave it their but otherwise you can discard them. ",
"thanks @mariamabarham :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/295/comments | https://api.github.com/repos/huggingface/datasets/issues/295/events | https://github.com/huggingface/datasets/issues/295 | 643,245,412 | MDU6SXNzdWU2NDMyNDU0MTI= | 295 | Improve input warning for evaluation metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4",
"events_url": "https://api.github.com/users/Tiiiger/events{/privacy}",
"followers_url": "https://api.github.com/users/Tiiiger/followers",
"following_url": "https://api.github.com/users/Tiiiger/following{/other_user}",
"gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tiiiger",
"id": 19514537,
"login": "Tiiiger",
"node_id": "MDQ6VXNlcjE5NTE0NTM3",
"organizations_url": "https://api.github.com/users/Tiiiger/orgs",
"received_events_url": "https://api.github.com/users/Tiiiger/received_events",
"repos_url": "https://api.github.com/users/Tiiiger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tiiiger"
} | [] | closed | false | null | [] | null | 0 | "2020-06-22T17:28:57Z" | "2020-06-23T14:47:37Z" | "2020-06-23T14:47:37Z" | NONE | null | null | null | Hi,
I am the author of `bert_score`. Recently, we received [an issue](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format in which `nlp.Metric` takes input.
Here is a minimal example:
```python
import nlp
scorer = nlp.load_metric("bertscore")
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
scorer.add(lp, lg)
score = scorer.compute(lang="en")
```
The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling
```python
scorer.add(lp, [lg])
```
I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening?
Thanks! | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/295/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/294/comments | https://api.github.com/repos/huggingface/datasets/issues/294/events | https://github.com/huggingface/datasets/issues/294 | 643,181,179 | MDU6SXNzdWU2NDMxODExNzk= | 294 | Cannot load arxiv dataset on MacOS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JohnGiorgi",
"id": 8917831,
"login": "JohnGiorgi",
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JohnGiorgi"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 4 | "2020-06-22T15:46:55Z" | "2020-06-30T15:25:10Z" | "2020-06-30T15:25:10Z" | CONTRIBUTOR | null | null | null | I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with:
```python
arxiv = nlp.load_dataset("scientific_papers", "arxiv")
```
I get the following stack trace:
```bash
JSONDecodeError Traceback (most recent call last)
<ipython-input-2-8e00c55d5a59> in <module>
----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv")
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
662
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
666 writer.write(example)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))
1107
-> 1108 for obj in iterable:
1109 yield obj
1110 # Update and possibly print the progressbar.
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path)
114 # "section_names": list[str], list of section names.
115 # "sections": list[list[str]], list of sections (list of paragraphs)
--> 116 d = json.loads(line)
117 summary = "\n".join(d["abstract_text"])
118 # In original paper, <S> and </S> are not used in vocab during training
~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982)
163502 examples [02:10, 2710.68 examples/s]
```
I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below:
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
Any ideas? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/294/timeline | null | completed | false | [
"I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?",
"I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `/Users/johngiorgi/.cache/huggingface/datasets/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259/arxiv-dataset/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( 3.2@xmath595.5 mev/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.",
"I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?",
"Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/293/comments | https://api.github.com/repos/huggingface/datasets/issues/293/events | https://github.com/huggingface/datasets/pull/293 | 642,942,182 | MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4 | 293 | Don't test community datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-22T10:15:33Z" | "2020-06-22T11:07:00Z" | "2020-06-22T11:06:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/293",
"merged_at": "2020-06-22T11:06:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/293"
} | This PR disables testing for community datasets on aws.
It should fix the CI that is currently failing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/293/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/292/comments | https://api.github.com/repos/huggingface/datasets/issues/292/events | https://github.com/huggingface/datasets/pull/292 | 642,897,797 | MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2 | 292 | Update metadata for x_stance dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4",
"events_url": "https://api.github.com/users/jvamvas/events{/privacy}",
"followers_url": "https://api.github.com/users/jvamvas/followers",
"following_url": "https://api.github.com/users/jvamvas/following{/other_user}",
"gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jvamvas",
"id": 5830820,
"login": "jvamvas",
"node_id": "MDQ6VXNlcjU4MzA4MjA=",
"organizations_url": "https://api.github.com/users/jvamvas/orgs",
"received_events_url": "https://api.github.com/users/jvamvas/received_events",
"repos_url": "https://api.github.com/users/jvamvas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jvamvas"
} | [] | closed | false | null | [] | null | 3 | "2020-06-22T09:13:26Z" | "2020-06-23T08:07:24Z" | "2020-06-23T08:07:24Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/292",
"merged_at": "2020-06-23T08:07:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/292"
} | Thank you for featuring the x_stance dataset in your library. This PR updates some metadata:
- Citation: Replace preprint with proceedings
- URL: Use a URL with long-term availability
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/292/timeline | null | null | true | [
"Great! Thanks @jvamvas for these updates.\r\n",
"I have fixed a warning. The remaining test failure is due to an unrelated dataset.",
"We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/291/comments | https://api.github.com/repos/huggingface/datasets/issues/291/events | https://github.com/huggingface/datasets/pull/291 | 642,688,450 | MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy | 291 | break statement not required | {
"avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4",
"events_url": "https://api.github.com/users/mayurnewase/events{/privacy}",
"followers_url": "https://api.github.com/users/mayurnewase/followers",
"following_url": "https://api.github.com/users/mayurnewase/following{/other_user}",
"gists_url": "https://api.github.com/users/mayurnewase/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mayurnewase",
"id": 12967587,
"login": "mayurnewase",
"node_id": "MDQ6VXNlcjEyOTY3NTg3",
"organizations_url": "https://api.github.com/users/mayurnewase/orgs",
"received_events_url": "https://api.github.com/users/mayurnewase/received_events",
"repos_url": "https://api.github.com/users/mayurnewase/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mayurnewase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayurnewase/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mayurnewase"
} | [] | closed | false | null | [] | null | 3 | "2020-06-22T01:40:55Z" | "2020-06-23T17:57:58Z" | "2020-06-23T09:37:02Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/291"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/291/timeline | null | null | true | [
"I guess,test failing due to connection error?",
"We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?",
"If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r\nI guess we can have one return in the for loop instead of the break statement, AND one return at the end to explicitly return None.\r\nWhat do you think ?"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/290/comments | https://api.github.com/repos/huggingface/datasets/issues/290/events | https://github.com/huggingface/datasets/issues/290 | 641,978,286 | MDU6SXNzdWU2NDE5NzgyODY= | 290 | ConnectionError - Eli5 dataset download | {
"avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4",
"events_url": "https://api.github.com/users/JovanNj/events{/privacy}",
"followers_url": "https://api.github.com/users/JovanNj/followers",
"following_url": "https://api.github.com/users/JovanNj/following{/other_user}",
"gists_url": "https://api.github.com/users/JovanNj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JovanNj",
"id": 8490096,
"login": "JovanNj",
"node_id": "MDQ6VXNlcjg0OTAwOTY=",
"organizations_url": "https://api.github.com/users/JovanNj/orgs",
"received_events_url": "https://api.github.com/users/JovanNj/received_events",
"repos_url": "https://api.github.com/users/JovanNj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JovanNj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JovanNj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JovanNj"
} | [] | closed | false | null | [] | null | 2 | "2020-06-19T13:40:33Z" | "2020-06-20T13:22:24Z" | "2020-06-20T13:22:24Z" | NONE | null | null | null | Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow
I would appreciate if you could help me with this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/290/timeline | null | completed | false | [
"It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.",
"It works now, thanks for prompt help!"
] |
https://api.github.com/repos/huggingface/datasets/issues/289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/289/comments | https://api.github.com/repos/huggingface/datasets/issues/289/events | https://github.com/huggingface/datasets/pull/289 | 641,934,194 | MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3 | 289 | update xsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 3 | "2020-06-19T12:28:32Z" | "2020-06-22T13:27:26Z" | "2020-06-22T07:20:07Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/289",
"merged_at": "2020-06-22T07:20:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/289"
} | This PR makes the following updates to the xsum dataset:
- Manual download is not required anymore
- the dataset can be loaded as follows: `nlp.load_dataset('xsum')`
**Important**
Instead of using an outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json",
a more up-to-date url stored here: https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz is used,
so that the user does not need to manually download the data anymore.
There might be slight breaking changes here for xsum. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/289/timeline | null | null | true | [
"Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?",
"And a rebase should solve the conflicts",
"This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case. Now, one should be able to just load it with:\r\n\r\n```python \r\nnlp.load_datasets(\"xsum\", ....) # no manual dir required anymore\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/288/comments | https://api.github.com/repos/huggingface/datasets/issues/288/events | https://github.com/huggingface/datasets/issues/288 | 641,888,610 | MDU6SXNzdWU2NDE4ODg2MTA= | 288 | Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill' | {
"avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4",
"events_url": "https://api.github.com/users/wutong8023/events{/privacy}",
"followers_url": "https://api.github.com/users/wutong8023/followers",
"following_url": "https://api.github.com/users/wutong8023/following{/other_user}",
"gists_url": "https://api.github.com/users/wutong8023/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wutong8023",
"id": 14964542,
"login": "wutong8023",
"node_id": "MDQ6VXNlcjE0OTY0NTQy",
"organizations_url": "https://api.github.com/users/wutong8023/orgs",
"received_events_url": "https://api.github.com/users/wutong8023/received_events",
"repos_url": "https://api.github.com/users/wutong8023/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wutong8023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wutong8023/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wutong8023"
} | [] | closed | false | null | [] | null | 5 | "2020-06-19T11:01:22Z" | "2020-06-21T09:05:11Z" | "2020-06-21T09:05:11Z" | NONE | null | null | null | /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module>
import nlp
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module>
from .arrow_dataset import Dataset
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module>
from nlp.utils.py_utils import dumps
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module>
from .download_manager import DownloadManager, GenerateMode
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module>
from .py_utils import flatten_nested, map_nested, size_str
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module>
class Pickler(dill.Pickler):
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/288/timeline | null | completed | false | [
"It looks like the bug comes from `dill`. Which version of `dill` are you using ?",
"Thank you. It is version 0.2.6, which version is better?",
"0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?",
"Thanks guys! I upgraded dill and it works.",
"Awesome"
] |
https://api.github.com/repos/huggingface/datasets/issues/287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/287/comments | https://api.github.com/repos/huggingface/datasets/issues/287/events | https://github.com/huggingface/datasets/pull/287 | 641,800,227 | MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0 | 287 | fix squad_v2 metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-19T08:24:46Z" | "2020-06-19T08:33:43Z" | "2020-06-19T08:33:41Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/287",
"merged_at": "2020-06-19T08:33:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/287"
} | Fix #280
The imports were wrong | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/287/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/286/comments | https://api.github.com/repos/huggingface/datasets/issues/286/events | https://github.com/huggingface/datasets/pull/286 | 641,585,758 | MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4 | 286 | Add ANLI dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/easonnie",
"id": 11016329,
"login": "easonnie",
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"repos_url": "https://api.github.com/users/easonnie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/easonnie"
} | [] | closed | false | null | [] | null | 1 | "2020-06-18T22:27:30Z" | "2020-06-22T12:23:27Z" | "2020-06-22T12:23:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/286",
"merged_at": "2020-06-22T12:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/286"
} | I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and pushed the code for ANLI. Please let me know if there are any errors. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/286/timeline | null | null | true | [
"Awesome!! Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu"
] |
https://api.github.com/repos/huggingface/datasets/issues/285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/285/comments | https://api.github.com/repos/huggingface/datasets/issues/285/events | https://github.com/huggingface/datasets/pull/285 | 641,360,702 | MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4 | 285 | Consistent formatting of citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 1 | "2020-06-18T16:25:23Z" | "2020-06-22T08:09:25Z" | "2020-06-22T08:09:24Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/285.diff",
"html_url": "https://github.com/huggingface/datasets/pull/285",
"merged_at": "2020-06-22T08:09:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/285.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/285"
} | #283 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/285/timeline | null | null | true | [
"Circle CI shuold be green :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/284/comments | https://api.github.com/repos/huggingface/datasets/issues/284/events | https://github.com/huggingface/datasets/pull/284 | 641,337,217 | MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2 | 284 | Fix manual download instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 5 | "2020-06-18T15:59:57Z" | "2020-06-19T08:24:21Z" | "2020-06-19T08:24:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/284.diff",
"html_url": "https://github.com/huggingface/datasets/pull/284",
"merged_at": "2020-06-19T08:24:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/284.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/284"
} | This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`.
Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs.
After some brainstorming with @mariamabarham and @lhoestq, we came to the conclusion that having a property function `manual_download_instructions()` gives us more flexibility to decide on a per config basis in the dataset builder if manual download instructions are needed.
Also, this PR solves a bug with `wmt16 - ro-en`.
@sshleifer, from this branch you should be able to successfully run
```python
import nlp
ds = nlp.load_dataset('./datasets/wmt16', 'ro-en')
```
and once this PR is merged S3 should be synched so that
```python
import nlp
ds = nlp.load_dataset("wmt16", "ro-en")
```
works as well.
**Important**: Since `MANUAL_DOWNLOAD_INSTRUCTIONS` was not really exposed to the user, this PR should not be a problem regarding backward compatibility. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/284/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/284/timeline | null | null | true | [
"Verified that this works, thanks!",
"But I get\r\n```python\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n```\r\nWhen I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n\r\n\r\nBoth machines can run\r\n```bash\r\naws s3 ls s3://datasets.huggingface.co/nlp/datasets/wmt16/\r\n```\r\nbut it seems one must be in the nlp directory to run the command?\r\n\r\n(I ran `pip install -e . ` on this branch in both situations.)\r\n\r\n\r\n",
"`https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py` looks very weird.\r\n\r\n(Also, S3 is not a file-system, it's a flat key-value store)",
"Good to merge I think @lhoestq ",
"> But I get\r\n> \r\n> ```python\r\n> ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n> ```\r\n> \r\n> When I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n> \r\n> Both machines can run\r\n> \r\n> ```shell\r\n> aws s3 ls s3://datasets.huggingface.co/nlp/datasets/wmt16/\r\n> ```\r\n> \r\n> but it seems one must be in the nlp directory to run the command?\r\n> \r\n> (I ran `pip install -e . ` on this branch in both situations.)\r\n\r\nAs soon as it is on master, the dataset script wmt16.py will be synced on S3 and you'll be able to do `load_dataset(\"wmt16\")`"
] |
https://api.github.com/repos/huggingface/datasets/issues/283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/283/comments | https://api.github.com/repos/huggingface/datasets/issues/283/events | https://github.com/huggingface/datasets/issues/283 | 641,270,439 | MDU6SXNzdWU2NDEyNzA0Mzk= | 283 | Consistent formatting of citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
] | null | 0 | "2020-06-18T14:48:45Z" | "2020-06-22T17:30:46Z" | "2020-06-22T17:30:46Z" | CONTRIBUTOR | null | null | null | The citations are all of a different format, some have "```" and have text inside, others are proper bibtex.
Can we make it so that they are all proper citations, i.e. that they parse according to the BibTeX spec:
https://bibtexparser.readthedocs.io/en/master/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/283/timeline | null | completed | false | [] |
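A minimal sketch of the kind of consistency check issue 283 asks for, using the `bibtexparser` library linked in the issue; the citation string below is a made-up placeholder, not taken from any dataset script:

```python
import bibtexparser

# Placeholder CITATION string, used only to illustrate the check
citation = """@inproceedings{placeholder2020example,
  title     = {A Placeholder Title},
  author    = {Doe, Jane},
  booktitle = {Proceedings of an Example Conference},
  year      = {2020}
}"""

entries = bibtexparser.loads(citation).entries
assert entries, "CITATION does not contain a valid BibTeX entry"
print(entries[0]["ID"], entries[0]["ENTRYTYPE"])
```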
https://api.github.com/repos/huggingface/datasets/issues/282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/282/comments | https://api.github.com/repos/huggingface/datasets/issues/282/events | https://github.com/huggingface/datasets/pull/282 | 641,217,759 | MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy | 282 | Update dataset_info from gcs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-18T13:41:15Z" | "2020-06-18T16:24:52Z" | "2020-06-18T16:24:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/282",
"merged_at": "2020-06-18T16:24:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/282"
} | Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contains the info for each config). Indeed, local files may end up outdated.
Furthermore, to avoid outdated dataset_infos.json, I now make sure that each time you run `load_dataset` it also tries to update the file locally.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/282/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/281/comments | https://api.github.com/repos/huggingface/datasets/issues/281/events | https://github.com/huggingface/datasets/issues/281 | 641,067,856 | MDU6SXNzdWU2NDEwNjc4NTY= | 281 | Private/sensitive data | {
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank"
} | [] | closed | false | null | [] | null | 3 | "2020-06-18T09:47:27Z" | "2020-06-20T13:15:12Z" | "2020-06-20T13:15:12Z" | CONTRIBUTOR | null | null | null | Hi all,
Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch.
Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information.
Is there support/a plan to support such data with NLP, e.g. by reading it from local sources?
Use case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive/private data without the need to rethink data processing pipelines.
Many thanks for your responses ahead of time and kind regards,
MFreidank | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/281/timeline | null | completed | false | [
"Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.",
"Hi @MFreidank, it is possible to load a dataset from your local storage, but only CSV/TSV and JSON are supported. To load a dataset in JSON format:\r\n\r\n```\r\nnlp.load_dataset(path=\"json\", data_files={nlp.Split.TRAIN: [\"path/to/train.json\"], nlp.Split.TEST: [\"path/to/test.json\"]})\r\n```\r\n\r\nFor CSV/TSV datasets, you have to replace `json` by `csv`.",
"Hi @julien-c @jplu,\r\nThanks for sharing this solution with me, it helps, this is what I was looking for. \r\nIf not already there and only missed by me, this could be a great addition in the docs.\r\n\r\nClosing my issue as resolved, thanks again."
] |
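The CSV variant mentioned in the comments of issue 281, spelled out as a small sketch (the file paths are placeholders, just as in the quoted JSON example):

```python
import nlp

# Same pattern as the JSON example above, with "csv" as the loader name
dataset = nlp.load_dataset(
    "csv",
    data_files={
        nlp.Split.TRAIN: ["path/to/train.csv"],
        nlp.Split.TEST: ["path/to/test.csv"],
    },
)
```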
https://api.github.com/repos/huggingface/datasets/issues/280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/280/comments | https://api.github.com/repos/huggingface/datasets/issues/280/events | https://github.com/huggingface/datasets/issues/280 | 640,677,615 | MDU6SXNzdWU2NDA2Nzc2MTU= | 280 | Error with SquadV2 Metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4",
"events_url": "https://api.github.com/users/avinregmi/events{/privacy}",
"followers_url": "https://api.github.com/users/avinregmi/followers",
"following_url": "https://api.github.com/users/avinregmi/following{/other_user}",
"gists_url": "https://api.github.com/users/avinregmi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinregmi",
"id": 32203792,
"login": "avinregmi",
"node_id": "MDQ6VXNlcjMyMjAzNzky",
"organizations_url": "https://api.github.com/users/avinregmi/orgs",
"received_events_url": "https://api.github.com/users/avinregmi/received_events",
"repos_url": "https://api.github.com/users/avinregmi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinregmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinregmi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinregmi"
} | [] | closed | false | null | [] | null | 0 | "2020-06-17T19:10:54Z" | "2020-06-19T08:33:41Z" | "2020-06-19T08:33:41Z" | NONE | null | null | null | I can't seem to import squad v2 metrics.
**squad_metric = nlp.load_metric('squad_v2')**
**This throws me an error.:**
```
ImportError Traceback (most recent call last)
<ipython-input-8-170b6a170555> in <module>
----> 1 squad_metric = nlp.load_metric('squad_v2')
~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs)
426 """
427 module_path = prepare_module(path, download_config=download_config, dataset=False)
--> 428 metric_cls = import_main_class(module_path, dataset=False)
429 metric = metric_cls(
430 name=name,
~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset)
55 """
56 importlib.invalidate_caches()
---> 57 module = importlib.import_module(module_path)
58
59 if dataset:
/usr/lib64/python3.6/importlib/__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module>
16
17 import nlp
---> 18 from .evaluate import evaluate
19
20 _CITATION = """\
ImportError: cannot import name 'evaluate'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/280/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/279/comments | https://api.github.com/repos/huggingface/datasets/issues/279/events | https://github.com/huggingface/datasets/issues/279 | 640,611,692 | MDU6SXNzdWU2NDA2MTE2OTI= | 279 | Dataset Preprocessing Cache with .map() function not working as expected | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | 5 | "2020-06-17T17:17:21Z" | "2021-07-06T21:43:28Z" | "2021-04-18T23:43:49Z" | NONE | null | null | null | I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system.
Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file.
Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess.
I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/279/timeline | null | completed | false | [
"When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re-process using `.map(my_func, load_from_cache_file=False)` if you want to.\r\n\r\nI am curious about the problem you have with splits. It makes me think about #160 that was an issue of version 0.1.0. What version of `nlp` are you running ? Could you give me more details ?",
"Thanks, that's helpful! I was running 0.1.0, but since upgraded to 0.2.1. I can't reproduce the issue anymore as I've cleared the cache & everything now seems to be running fine since the upgrade. I've added some checks to my code, so if I do encounter it again I will reopen this issue.",
"Just checking in, the cache sometimes still does not work when I make changes in my processing function in version `1.2.1`. The changes made to my data processing function only propagate to the dataset when I use `load_from_cache_file=False` or clear the cache. Is this a system-specific issue?",
"Hi @sarahwie \r\nThe data are reloaded from the cache if the hash of the function you provide is the same as a computation you've done before. The hash is computed by recursively looking at the python objects of the function you provide.\r\n\r\nIf you think there's an issue, can you share the function you used or a google colab please ?",
"I can't reproduce it, so I'll close for now."
] |
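A minimal sketch of forcing `.map` to re-process instead of reusing the cache, as described in the comments of issue 279 (the GLUE MRPC config and the toy processing function are assumptions made for illustration):

```python
import nlp

dataset = nlp.load_dataset("glue", "mrpc", split="train")

def add_length(example):
    # toy processing step: store the length of the first sentence
    example["sentence1_len"] = len(example["sentence1"])
    return example

# load_from_cache_file=False forces re-processing instead of loading a cached result
dataset = dataset.map(add_length, load_from_cache_file=False)
print(dataset[0]["sentence1_len"])
```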
https://api.github.com/repos/huggingface/datasets/issues/278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/278/comments | https://api.github.com/repos/huggingface/datasets/issues/278/events | https://github.com/huggingface/datasets/issues/278 | 640,518,917 | MDU6SXNzdWU2NDA1MTg5MTc= | 278 | MemoryError when loading German Wikipedia | {
"avatar_url": "https://avatars.githubusercontent.com/u/4698028?v=4",
"events_url": "https://api.github.com/users/gregburman/events{/privacy}",
"followers_url": "https://api.github.com/users/gregburman/followers",
"following_url": "https://api.github.com/users/gregburman/following{/other_user}",
"gists_url": "https://api.github.com/users/gregburman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gregburman",
"id": 4698028,
"login": "gregburman",
"node_id": "MDQ6VXNlcjQ2OTgwMjg=",
"organizations_url": "https://api.github.com/users/gregburman/orgs",
"received_events_url": "https://api.github.com/users/gregburman/received_events",
"repos_url": "https://api.github.com/users/gregburman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gregburman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gregburman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gregburman"
} | [] | closed | false | null | [] | null | 7 | "2020-06-17T15:06:21Z" | "2020-06-19T12:53:02Z" | "2020-06-19T12:53:02Z" | NONE | null | null | null | Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :)
I'm trying to download the German Wikipedia dataset as follows:
```
wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train")
```
However, when I do so, I get the following error:
```
Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')`
```
So, following the example usage at the bottom, I tried specifying `beam_runner='DirectRunner'`; however, when I do this I get a `MemoryError` as warned, about 20 min after the data has all downloaded.
This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seems to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset?
My nlp version is 0.2.1.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/278/timeline | null | completed | false | [
"Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and use them right away.\r\n\r\nThis is the case for english and french wikipedia right now: we've processed them ourselves and now they are available from our google storage. However we've not processed the german one (yet).",
"Hi @lhoestq \r\n\r\nThank you for your quick reply. I thought this might be the case, that the processing was done for some languages and not for others. Is there any set timeline for when other languages (German, Italian) will be processed?\r\n\r\nGiven enough memory, is it possible to process the data ourselves by specifying the `beam_runner`?",
"Adding them is definitely in our short term objectives. I'll be working on this early next week :)\r\n\r\nAlthough if you have an apache beam runtime feel free to specify the beam runner. You can find more info [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md) on how to make it work on Dataflow but you can adapt it for Spark or any other beam runtime (by changing the `runner`).\r\n\r\nHowever if you don't have a beam runtime and even if you have enough memory, I discourage you to use the `DirectRunner` on the german or italian wikipedia. According to Apache Beam documentation it was made for testing purposes and therefore it is memory-inefficient.",
"German is [almost] done @gregburman",
"I added the German and the Italian Wikipedia to our google cloud storage:\r\nFirst update the `nlp` package to 0.3.0:\r\n```bash\r\npip install nlp --upgrade\r\n```\r\nand then\r\n```python\r\nfrom nlp import load_dataset\r\nwiki_de = load_dataset(\"wikipedia\", \"20200501.de\")\r\nwiki_it = load_dataset(\"wikipedia\", \"20200501.it\")\r\n```\r\nThe datasets are downloaded and directly ready to use (no processing).",
"Hi @lhoestq \r\n\r\nWow, thanks so much, that's **really** incredible! I was considering looking at creating my own Beam Dataset, as per the doc you linked, but instead opted to process the data myself using `wikiextractor`. However, now that this is available, I'll definitely switch across and use it.\r\n\r\nThanks so much for the incredible work, this really helps out our team considerably!\r\n\r\nHave a great (and well-deserved ;) weekend ahead!\r\n\r\nP.S. I'm not sure if I should close the issue here - if so I'm happy to do so.",
"Thanks for your message, glad I could help :)\r\nClosing this one."
] |
https://api.github.com/repos/huggingface/datasets/issues/277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/277/comments | https://api.github.com/repos/huggingface/datasets/issues/277/events | https://github.com/huggingface/datasets/issues/277 | 640,163,053 | MDU6SXNzdWU2NDAxNjMwNTM= | 277 | Empty samples in glue/qqp | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 2 | "2020-06-17T05:54:52Z" | "2020-06-21T00:21:45Z" | "2020-06-21T00:21:45Z" | CONTRIBUTOR | null | null | null | ```
qqp = nlp.load_dataset('glue', 'qqp')
print(qqp['train'][310121])
print(qqp['train'][362225])
```
```
{'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137}
{'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246}
```
Notice that question2 is an empty string.
BTW, I have checked and these two are the only naughty ones in all splits of qqp. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/277/timeline | null | completed | false | [
"We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?",
"Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. "
] |
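A small sketch of how the empty rows reported in issue 277 could be located, assuming the default Python formatting so that each example is a dict (iterating the full split is slow but straightforward):

```python
import nlp

qqp = nlp.load_dataset("glue", "qqp")

# collect indices of training examples whose second question is empty
empty_question2 = [
    i for i, example in enumerate(qqp["train"])
    if not example["question2"].strip()
]
print(empty_question2)  # the issue reports exactly two such rows
```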
https://api.github.com/repos/huggingface/datasets/issues/276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/276/comments | https://api.github.com/repos/huggingface/datasets/issues/276/events | https://github.com/huggingface/datasets/pull/276 | 639,490,858 | MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5 | 276 | Fix metric compute (original_instructions missing) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-06-16T08:52:01Z" | "2020-06-18T07:41:45Z" | "2020-06-18T07:41:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/276.diff",
"html_url": "https://github.com/huggingface/datasets/pull/276",
"merged_at": "2020-06-18T07:41:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/276.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/276"
} | When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset.
However, metrics load data the same way but don't need these instructions (we use a single file).
In this PR I just make `original_instructions` optional when reading files to load a `Dataset` object.
This should fix #269 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/276/timeline | null | null | true | [
"Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric(\"seqeval\") \r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\n\r\nresults = seqeval.compute(y_true, y_pred)\r\n```\r\n\r\nI heavily need this fix for an upcoming `nlp` integration PR for Transformers (token classification example) 😅",
"Haha nice ! We'll ship this fix with the next release that will probably come out on thursday :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/275/comments | https://api.github.com/repos/huggingface/datasets/issues/275/events | https://github.com/huggingface/datasets/issues/275 | 639,439,052 | MDU6SXNzdWU2Mzk0MzkwNTI= | 275 | NonMatchingChecksumError when loading pubmed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4",
"events_url": "https://api.github.com/users/DavideStenner/events{/privacy}",
"followers_url": "https://api.github.com/users/DavideStenner/followers",
"following_url": "https://api.github.com/users/DavideStenner/following{/other_user}",
"gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DavideStenner",
"id": 48441753,
"login": "DavideStenner",
"node_id": "MDQ6VXNlcjQ4NDQxNzUz",
"organizations_url": "https://api.github.com/users/DavideStenner/orgs",
"received_events_url": "https://api.github.com/users/DavideStenner/received_events",
"repos_url": "https://api.github.com/users/DavideStenner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DavideStenner"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 1 | "2020-06-16T07:31:51Z" | "2020-06-19T07:37:07Z" | "2020-06-19T07:37:07Z" | NONE | null | null | null | I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`.
The error is:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-2-7742dea167d0> in <module>()
----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')
2 df = pd.DataFrame(df)
3 gc.collect()
3 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
431 verify_infos = not save_infos and not ignore_verifications
432 self._download_and_prepare(
--> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
434 )
435 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
468 # Checksums verification
469 if verify_infos:
--> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())
471 for split_generator in split_generators:
472 if str(split_generator.split_info.name).lower() == "all":
/usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)
34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
35 if len(bad_urls) > 0:
---> 36 raise NonMatchingChecksumError(str(bad_urls))
37 logger.info("All the checksums matched successfully.")
38
NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']
```
I'm currently working on google colab.
That is quite strange because yesterday it was fine.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/275/timeline | null | completed | false | [
"For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/84751599-096c6580-afbd-11ea-97f3-ee4aef791711.png)\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/274/comments | https://api.github.com/repos/huggingface/datasets/issues/274/events | https://github.com/huggingface/datasets/issues/274 | 639,156,625 | MDU6SXNzdWU2MzkxNTY2MjU= | 274 | PG-19 | {
"avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4",
"events_url": "https://api.github.com/users/lucidrains/events{/privacy}",
"followers_url": "https://api.github.com/users/lucidrains/followers",
"following_url": "https://api.github.com/users/lucidrains/following{/other_user}",
"gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucidrains",
"id": 108653,
"login": "lucidrains",
"node_id": "MDQ6VXNlcjEwODY1Mw==",
"organizations_url": "https://api.github.com/users/lucidrains/orgs",
"received_events_url": "https://api.github.com/users/lucidrains/received_events",
"repos_url": "https://api.github.com/users/lucidrains/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucidrains"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 4 | "2020-06-15T21:02:26Z" | "2020-07-06T15:35:02Z" | "2020-07-06T15:35:02Z" | CONTRIBUTOR | null | null | null | Hi, and thanks for all your open-sourced work, as always!
I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/274/timeline | null | completed | false | [
"Sounds good! Do you want to give it a try?",
"Ok, I'll see if I can figure it out tomorrow!",
"Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https://console.cloud.google.com/storage/browser/deepmind-gutenberg/train/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https://github.com/lucidrains/nlp/commit/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?",
"Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!"
] |
https://api.github.com/repos/huggingface/datasets/issues/273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/273/comments | https://api.github.com/repos/huggingface/datasets/issues/273/events | https://github.com/huggingface/datasets/pull/273 | 638,968,054 | MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4 | 273 | update cos_e to add cos_e v1.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 0 | "2020-06-15T16:03:22Z" | "2020-06-16T08:25:54Z" | "2020-06-16T08:25:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/273",
"merged_at": "2020-06-16T08:25:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/273"
} | This PR updates the cos_e dataset to add v1.0 as requested here #163
@nazneenrajani | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/273/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/272/comments | https://api.github.com/repos/huggingface/datasets/issues/272/events | https://github.com/huggingface/datasets/pull/272 | 638,307,313 | MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3 | 272 | asd | {
"avatar_url": "https://avatars.githubusercontent.com/u/66900970?v=4",
"events_url": "https://api.github.com/users/sn696/events{/privacy}",
"followers_url": "https://api.github.com/users/sn696/followers",
"following_url": "https://api.github.com/users/sn696/following{/other_user}",
"gists_url": "https://api.github.com/users/sn696/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sn696",
"id": 66900970,
"login": "sn696",
"node_id": "MDQ6VXNlcjY2OTAwOTcw",
"organizations_url": "https://api.github.com/users/sn696/orgs",
"received_events_url": "https://api.github.com/users/sn696/received_events",
"repos_url": "https://api.github.com/users/sn696/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sn696/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sn696/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sn696"
} | [] | closed | false | null | [] | null | 0 | "2020-06-14T08:20:38Z" | "2020-06-14T09:16:41Z" | "2020-06-14T09:16:41Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/272.diff",
"html_url": "https://github.com/huggingface/datasets/pull/272",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/272.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/272"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/272/timeline | null | null | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/271/comments | https://api.github.com/repos/huggingface/datasets/issues/271/events | https://github.com/huggingface/datasets/pull/271 | 638,135,754 | MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw | 271 | Fix allociné dataset configuration | {
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheophileBlard",
"id": 37028092,
"login": "TheophileBlard",
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheophileBlard"
} | [] | closed | false | null | [] | null | 6 | "2020-06-13T10:12:10Z" | "2020-06-18T07:41:21Z" | "2020-06-18T07:41:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/271.diff",
"html_url": "https://github.com/huggingface/datasets/pull/271",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/271.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/271"
} | This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with :
```python
dataset = load_dataset('allocine', 'allocine')
```
This is redundant, as there is only one "dataset configuration", and it should simply be:
```python
dataset = load_dataset('allocine')
```
This is my mistake, because the code for [`allocine.py`](https://github.com/huggingface/nlp/blob/master/datasets/allocine/allocine.py) was inspired by [`imdb.py`](https://github.com/huggingface/nlp/blob/master/datasets/imdb/imdb.py), which also forces the user to specify the "dataset configuration" (even if there is only one).
I believe this PR should solve this issue, making the Allociné dataset more convenient to use. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/271/timeline | null | null | true | [
"Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\nand it works.\r\n\r\nMaybe we should take that into account in the nlp viewer @srush ?",
"@lhoestq Just to understand the exact semantics. Are you suggesting that if there is exactly 1 configuration I should not show the configuration menu and just treat it as if there were 0 configurations? ",
"The configuration menu is fine imo.\r\nIt was more about the code snippet presented in the viewer.\r\nFor example for Allociné it currently shows this snippet to load the dataset:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine', 'allocine')\r\n```\r\nHowever for datasets with one or zero configurations, the second argument in `load_dataset` is optional. For Allociné, that has one configuration, we can expect to show instead:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine')\r\n```",
"> Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n> \r\n> ```python\r\n> dataset = load_dataset('allocine')\r\n> ```\r\n> \r\n> and it works.\r\n> \r\n> Maybe we should take that into account in the nlp viewer @srush ?\r\n\r\nOh ok, I didn't expect it would work! \r\n\r\nAnyway, I think it's intrinsically better to simply remove the optional parameter. \r\nThe dummy data folder architecture seems also more logical this way.\r\n",
"Fixed in the viewer. Checked that allocine works.",
"Awesome thanks :)\r\n\r\nClosing this."
] |
https://api.github.com/repos/huggingface/datasets/issues/270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/270/comments | https://api.github.com/repos/huggingface/datasets/issues/270/events | https://github.com/huggingface/datasets/issues/270 | 638,121,617 | MDU6SXNzdWU2MzgxMjE2MTc= | 270 | c4 dataset is not viewable in nlpviewer demo | {
"avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4",
"events_url": "https://api.github.com/users/rajarsheem/events{/privacy}",
"followers_url": "https://api.github.com/users/rajarsheem/followers",
"following_url": "https://api.github.com/users/rajarsheem/following{/other_user}",
"gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajarsheem",
"id": 6441313,
"login": "rajarsheem",
"node_id": "MDQ6VXNlcjY0NDEzMTM=",
"organizations_url": "https://api.github.com/users/rajarsheem/orgs",
"received_events_url": "https://api.github.com/users/rajarsheem/received_events",
"repos_url": "https://api.github.com/users/rajarsheem/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajarsheem"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 1 | "2020-06-13T08:26:16Z" | "2020-10-27T15:35:29Z" | "2020-10-27T15:35:13Z" | NONE | null | null | null | I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/)
```python
ModuleNotFoundError: No module named 'langdetect'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 54, in <module>
configs = get_confs(option.id)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs
builder_cls = nlp.load.import_main_class(module_path, dataset=True)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module>
from .c4_utils import (
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module>
import langdetect
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/270/timeline | null | completed | false | [
"C4 is too large to be shown in the viewer"
] |
https://api.github.com/repos/huggingface/datasets/issues/269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/269/comments | https://api.github.com/repos/huggingface/datasets/issues/269/events | https://github.com/huggingface/datasets/issues/269 | 638,106,774 | MDU6SXNzdWU2MzgxMDY3NzQ= | 269 | Error in metric.compute: missing `original_instructions` argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zphang",
"id": 1668462,
"login": "zphang",
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"repos_url": "https://api.github.com/users/zphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zphang"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 0 | "2020-06-13T06:26:54Z" | "2020-06-18T07:41:44Z" | "2020-06-18T07:41:44Z" | NONE | null | null | null | I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example:
```python
import nlp
rte_metric = nlp.load_metric('glue', name="rte")
rte_metric.compute(
[0, 0, 1, 1],
[0, 1, 0, 1],
)
```
```
181 # Read the predictions and references
182 reader = ArrowReader(path=self.data_dir, info=None)
--> 183 self.data = reader.read_files(node_files)
184
185 # Release all of our locks
TypeError: read_files() missing 1 required positional argument: 'original_instructions'
```
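For context, the empty-string-default fix that the next sentence goes on to suggest would look roughly like this (a sketch of the idea only, not the actual `nlp` source):
```python
# Hypothetical sketch: giving the new argument a default keeps old call sites
# such as `reader.read_files(node_files)` working unchanged.
def read_files(self, files, original_instructions=""):
    ...
```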
I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided--perhaps that could be done here too? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/269/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/268/comments | https://api.github.com/repos/huggingface/datasets/issues/268/events | https://github.com/huggingface/datasets/pull/268 | 637,848,056 | MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1 | 268 | add Rotten Tomatoes Movie Review sentences sentiment dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | 1 | "2020-06-12T15:53:59Z" | "2020-06-18T07:46:24Z" | "2020-06-18T07:46:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/268",
"merged_at": "2020-06-18T07:46:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/268"
} | Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/268/timeline | null | null | true | [
"@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? Thanks."
] |
https://api.github.com/repos/huggingface/datasets/issues/267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/267/comments | https://api.github.com/repos/huggingface/datasets/issues/267/events | https://github.com/huggingface/datasets/issues/267 | 637,415,545 | MDU6SXNzdWU2Mzc0MTU1NDU= | 267 | How can I load/find WMT en-romanian? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 1 | "2020-06-12T01:09:37Z" | "2020-06-19T08:24:19Z" | "2020-06-19T08:24:19Z" | CONTRIBUTOR | null | null | null | I believe it is from `wmt16`
When I run
```python
wmt = nlp.load_dataset('wmt16')
```
I get:
```python
AssertionError: The dataset wmt16 with config cs-en requires manual data.
Please follow the manual download instructions: Some of the wmt configs here, require a manual download.
Please look into wmt.py to see the exact path (and file name) that has to
be downloaded.
.
Manual data can be loaded with `nlp.load(wmt16, data_dir='<path/to/manual/data>')
```
There is no wmt.py, as the error message suggests, and wmt16.py doesn't have manual download instructions.
Any idea how to do this?
Thanks in advance!
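For what it's worth, once the language-pair configs are selectable, the call would presumably look something like this (the `ro-en` config name is an assumption here, not something confirmed in this thread):
```python
import nlp

# 'ro-en' is assumed; check the wmt16 script for the exact list of available configs.
wmt_en_ro = nlp.load_dataset('wmt16', 'ro-en')
print(wmt_en_ro['train'][0])
```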
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/267/timeline | null | completed | false | [
"I will take a look :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/266/comments | https://api.github.com/repos/huggingface/datasets/issues/266/events | https://github.com/huggingface/datasets/pull/266 | 637,156,392 | MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw | 266 | Add sort, shuffle, test_train_split and select methods | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 4 | "2020-06-11T16:22:20Z" | "2020-06-18T16:23:25Z" | "2020-06-18T16:23:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/266.diff",
"html_url": "https://github.com/huggingface/datasets/pull/266",
"merged_at": "2020-06-18T16:23:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/266.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/266"
} | Add a bunch of methods to reorder/split/select rows in a dataset:
- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)
- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)
- `dataset.shuffle(seed)`: shuffle a dataset's rows
- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)
All these methods are **not** in-place which means they return new ``Dataset``.
This is the default behavior in the library.
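A quick usage sketch of the methods above (the dataset name and argument values are chosen purely for illustration):
```python
import nlp

dset = nlp.load_dataset('imdb', split='train')

shuffled = dset.shuffle(seed=42)                # returns a new, reordered Dataset
sorted_dset = dset.sort('label')                # sort by a numpy-compatible column
subset = dset.select([0, 2, 2, 5])              # duplicates and arbitrary order are allowed
splits = dset.train_test_split(test_size=0.1)   # {'train': Dataset, 'test': Dataset}
train_dset, test_dset = splits['train'], splits['test']
```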
Fix #147 #166 #259 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/266/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/266/timeline | null | null | true | [
"Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\nshuffled_dataset = dataset.sort(indices=shuffled_indices)\r\n```\r\n\r\nMaybe we can have a specific API for train_test_split and shuffle. They are two features asked quite often (see #147, #166)",
"Ok, I think this one is ready to merge.\r\n\r\n@patrickvonplaten @jplu @mariamabarham @joeddav @n1t0 @julien-c you may want to give it a look, it adds a bunch of methods to reorder/split/select rows in a dataset:\r\n- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)\r\n- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)\r\n- `dataset.shuffle(seed)`: shuffle a dataset rows\r\n- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)\r\n\r\nAll these methods are **not** in-place which means they return new ``Dataset``, which is the default behavior in the library.",
"> Might be a solution to put 0.25 and 0.75 as default values for respectively `test_size` and `train_size`. WDYT?\r\n\r\nAccording to sklearn documentation, it is indeed set to 0.25 and 0.75 if both `test_size` and `train_size` are None.\r\nLet me add it.",
"I think we're good to go now :) @joeddav @thomwolf @jplu "
] |
https://api.github.com/repos/huggingface/datasets/issues/265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/265/comments | https://api.github.com/repos/huggingface/datasets/issues/265/events | https://github.com/huggingface/datasets/pull/265 | 637,139,220 | MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz | 265 | Add pyarrow warning colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-11T15:57:51Z" | "2020-08-02T18:14:36Z" | "2020-06-12T08:14:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/265.diff",
"html_url": "https://github.com/huggingface/datasets/pull/265",
"merged_at": "2020-06-12T08:14:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/265.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/265"
} | When a user installs `nlp` on google colab, then google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow.
This is an issue because `nlp` requires the updated version to work correctly.
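A guard of this general shape is one way to surface the problem at import time (the version threshold and message below are illustrative assumptions, not the PR's actual code):
```python
import pyarrow
from packaging import version

# The real minimum version is whatever nlp's setup.py requires; 0.16.0 is only an example.
if version.parse(pyarrow.__version__) < version.parse("0.16.0"):
    raise ImportError(
        "To use `nlp`, the `pyarrow` package must be updated. If you are on Google Colab, "
        "restart the runtime after `pip install nlp` so the new pyarrow is picked up."
    )
```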
In this PR I added an error that is shown to the user in google colab if the user tries to `import nlp` without having restarted the runtime. The error tells the user to restart the runtime. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/265/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/264/comments | https://api.github.com/repos/huggingface/datasets/issues/264/events | https://github.com/huggingface/datasets/pull/264 | 637,106,170 | MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4 | 264 | Fix small issues creating dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-06-11T15:20:16Z" | "2020-06-12T08:15:57Z" | "2020-06-12T08:15:56Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/264.diff",
"html_url": "https://github.com/huggingface/datasets/pull/264",
"merged_at": "2020-06-12T08:15:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/264.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/264"
} | Fix many small issues mentioned in #249:
- don't force the installation of apache beam for commands
- fix None cache dir when using `dl_manager.download_custom`
- added new extras in `setup.py` named `dev` that contains tests and quality dependencies
- mock dataset sizes when running tests with dummy data
- add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md
This should help users create their datasets.
Next step is the `add_dataset.md` docs :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/264/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/263/comments | https://api.github.com/repos/huggingface/datasets/issues/263/events | https://github.com/huggingface/datasets/issues/263 | 637,028,015 | MDU6SXNzdWU2MzcwMjgwMTU= | 263 | [Feature request] Support for external modality for language datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | 5 | "2020-06-11T13:42:18Z" | "2022-02-10T13:26:35Z" | "2022-02-10T13:26:35Z" | CONTRIBUTOR | null | null | null | # Background
In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10151)]. Therefore, the importance of multi-modal datasets for the NLP community is of paramount importance for next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data.
# Language + Vision
## Use case
Typically, people working on Language+Vision tasks, have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset.
Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features.
For all these types of features, people use one of the following formats:
1. [HDF5](https://pypi.org/project/h5py/)
2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)
3. [LMDB](https://lmdb.readthedocs.io/en/release/)
## Implementation considerations
I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following:
1. Download dataset
2. Download images associated with the dataset
3. Write a script that generates the visual features for every image and store them in a specific file
4. Create a DataLoader that maps the visual features to the corresponding language example
In my personal projects, I've decided to ignore HDF5 because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it.
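For concreteness, that per-image NumPy approach can be as simple as the following sketch (file names and feature shapes here are made up for illustration):
```python
import os
import numpy as np

# One compressed .npz file per image id, e.g. 36 region features of dim 2048 (FastRCNN-style).
os.makedirs("features", exist_ok=True)
features = np.random.rand(36, 2048).astype(np.float32)
np.savez_compressed("features/image_0001.npz", features=features)

# A DataLoader can then look up each text example's image features by its image id.
loaded = np.load("features/image_0001.npz")["features"]
```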
For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to a N-dimensional tensor so easily represented by a NumPy array.
Looking forward to hearing your thoughts about it! | {
"+1": 18,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/263/timeline | null | completed | false | [
"Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We'll probably try to tackle this during the summer.",
"I was looking into Facebook MMF and apparently they decided to use LMDB to store additional features associated with every example: https://github.com/facebookresearch/mmf/blob/master/mmf/datasets/databases/features_database.py\r\n\r\n",
"I saw the Mozilla common_voice dataset in model hub, which has mp3 audio recordings as part it. It's use predominantly maybe in ASR and TTS, but dataset is a Language + Voice Dataset similar to @aleSuglia's point about Language + Vision. \r\n\r\nhttps://huggingface.co/datasets/common_voice",
"Hey @thomwolf, are there any updates on this? I would love to contribute if possible!\r\n\r\nThanks, \r\nAlessandro ",
"Hi @aleSuglia :) In today's new release 1.17 of `datasets` we introduce a new feature type `Image` that allows to store images directly in a dataset, next to text features and labels for example. There is also an `Audio` feature type, for datasets containing audio data. For tensors there are `Array2D`, `Array3D`, etc. feature types\r\n\r\nNote that both Image and Audio feature types take care of decoding the images/audio data if needed. The returned images are PIL images, and the audio signals are decoded as numpy arrays.\r\n\r\nAnd `datasets` also leverage end-to-end zero copy from the arrow data for all of them, for maximum speed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/262/comments | https://api.github.com/repos/huggingface/datasets/issues/262/events | https://github.com/huggingface/datasets/pull/262 | 636,702,849 | MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz | 262 | Add new dataset ANLI Round 1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/easonnie",
"id": 11016329,
"login": "easonnie",
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"repos_url": "https://api.github.com/users/easonnie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/easonnie"
} | [] | closed | false | null | [] | null | 1 | "2020-06-11T04:14:57Z" | "2020-06-12T22:03:03Z" | "2020-06-12T22:03:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/262",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/262"
} | Adding new dataset [ANLI](https://github.com/facebookresearch/anli/).
I'm not familiar with how to add new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different format. I think it will be better to separate them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/262/timeline | null | null | true | [
"Hello ! Thanks for adding this one :)\r\n\r\nThis looks great, you just have to do the last steps to make the CI pass.\r\nI can see that two things are missing:\r\n1. the dummy data that is used to test that the script is working as expected\r\n2. the json file with all the infos about the dataset\r\n\r\nYou can see the steps to help you create the dummy data and generate the dataset_infos.json file right [here](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset)"
] |
https://api.github.com/repos/huggingface/datasets/issues/261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/261/comments | https://api.github.com/repos/huggingface/datasets/issues/261/events | https://github.com/huggingface/datasets/issues/261 | 636,372,380 | MDU6SXNzdWU2MzYzNzIzODA= | 261 | Downloading dataset error with pyarrow.lib.RecordBatch | {
"avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4",
"events_url": "https://api.github.com/users/cuent/events{/privacy}",
"followers_url": "https://api.github.com/users/cuent/followers",
"following_url": "https://api.github.com/users/cuent/following{/other_user}",
"gists_url": "https://api.github.com/users/cuent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cuent",
"id": 5248968,
"login": "cuent",
"node_id": "MDQ6VXNlcjUyNDg5Njg=",
"organizations_url": "https://api.github.com/users/cuent/orgs",
"received_events_url": "https://api.github.com/users/cuent/received_events",
"repos_url": "https://api.github.com/users/cuent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cuent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cuent"
} | [] | closed | false | null | [] | null | 2 | "2020-06-10T16:04:19Z" | "2020-06-11T14:35:12Z" | "2020-06-11T14:35:12Z" | NONE | null | null | null | I am trying to download `sentiment140` and I have the following error
```
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
418 verify_infos = not save_infos and not ignore_verifications
419 self._download_and_prepare(
--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
422 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
472 try:
473 # Prepare split will record examples associated to the split
--> 474 self._prepare_split(split_generator, **prepare_split_kwargs)
475 except OSError:
476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
653 example = self.info.features.encode_example(record)
--> 654 writer.write(example)
655 num_examples, num_bytes = writer.finalize()
656
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size)
143 self._build_writer(pa_table=pa.Table.from_pydict(example))
144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size:
--> 145 self.write_on_file()
146
147 def write_batch(
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
127 else:
128 # All good
--> 129 self._write_array_on_file(pa_array)
130 self.current_rows = []
131
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
96 def _write_array_on_file(self, pa_array):
97 """Write a PyArrow Array"""
---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array)
99 self._num_bytes += pa_array.nbytes
100 self.pa_writer.write_batch(pa_batch)
AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
```
I installed the last version and ran the following command:
```python
import nlp
sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/261/timeline | null | completed | false | [
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your message.",
"Yeah, that worked! Thanks :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/260/comments | https://api.github.com/repos/huggingface/datasets/issues/260/events | https://github.com/huggingface/datasets/pull/260 | 636,261,118 | MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5 | 260 | Consistency fixes | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | 0 | "2020-06-10T13:44:42Z" | "2020-06-11T10:34:37Z" | "2020-06-11T10:34:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/260",
"merged_at": "2020-06-11T10:34:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/260"
} | A few bugs I've found while hacking | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/260/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/259/comments | https://api.github.com/repos/huggingface/datasets/issues/259/events | https://github.com/huggingface/datasets/issues/259 | 636,239,529 | MDU6SXNzdWU2MzYyMzk1Mjk= | 259 | documentation missing how to split a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2873355?v=4",
"events_url": "https://api.github.com/users/fotisj/events{/privacy}",
"followers_url": "https://api.github.com/users/fotisj/followers",
"following_url": "https://api.github.com/users/fotisj/following{/other_user}",
"gists_url": "https://api.github.com/users/fotisj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fotisj",
"id": 2873355,
"login": "fotisj",
"node_id": "MDQ6VXNlcjI4NzMzNTU=",
"organizations_url": "https://api.github.com/users/fotisj/orgs",
"received_events_url": "https://api.github.com/users/fotisj/received_events",
"repos_url": "https://api.github.com/users/fotisj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fotisj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fotisj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fotisj"
} | [] | closed | false | null | [] | null | 7 | "2020-06-10T13:18:13Z" | "2023-03-14T13:56:07Z" | "2020-06-18T22:20:24Z" | NONE | null | null | null | I am trying to understand how to split a dataset ( as arrow_dataset).
I know I can do something like this to access a split which is already in the original dataset:
`ds_test = nlp.load_dataset('imdb', split='test')`
But how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)?
I guess it has something to do with the module split :-) but there is no real documentation in the code but only a reference to a longer description:
> See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information.
But the guide seems to be missing.
To clarify: I know that this has been modelled after the dataset of tensorflow and that some of the documentation there can be used [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the testset doing this:
`ds_test = nlp.load_dataset('imdb', split='test[:5000]')`
`ds_val = nlp.load_dataset('imdb', split='test[5000:]')`
because the imdb test data is sorted by class (probably not a good idea anyway)
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/259/timeline | null | completed | false | [
"this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`",
"Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHowever right now we don't have a way to shuffle a dataset but we are thinking about it in the discussion in #166. Feel free to share your thoughts about it.\r\n\r\nOne trick that you can do until we have a better solution is to shuffle and split the indices of your dataset:\r\n```python\r\nimport nlp\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimdb = nlp.load_dataset('imbd', split='test')\r\ntest_indices, val_indices = train_test_split(range(len(imdb)))\r\n```\r\n\r\nand then to iterate each split:\r\n```python\r\nfor i in test_indices:\r\n example = imdb[i]\r\n ...\r\n```\r\n",
"I added a small guide [here](https://github.com/huggingface/nlp/tree/master/docs/splits.md) that explains how to split a dataset. It is very similar to the tensorflow datasets guide, as we kept the same logic.",
"Thanks a lot, the new explanation is very helpful!\r\n\r\nAbout using train_test_split from sklearn: I stumbled across the [same error message as this user ](https://github.com/huggingface/nlp/issues/147 )and thought it can't be used at the moment in this context. Will check it out again.\r\n\r\nOne of the problems is how to shuffle very large datasets, which don't fit into the memory. Well, one strategy could be shuffling data in sections. But in a case where the data is sorted by the labels you have to swap larger sections first. \r\n",
"We added a way to shuffle datasets (shuffle the indices and then reorder to make a new dataset).\r\nYou can do `shuffled_dset = dataset.shuffle(seed=my_seed)`. It shuffles the whole dataset.\r\nThere is also `dataset.train_test_split()` which if very handy (with the same signature as sklearn).\r\n\r\nClosing this issue as we added the docs for splits and tools to split datasets. Thanks again for your feedback !",
"https://huggingface.co/docs/datasets/v1.0.1/package_reference/builder_classes.html#datasets.Split still links to https://github.com/huggingface/datasets/tree/main/docs/splits.md which is a 404\r\n",
"The updated documentation doesn't link to this anymore: https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/builder_classes#datasets.Split"
] |
https://api.github.com/repos/huggingface/datasets/issues/258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/258/comments | https://api.github.com/repos/huggingface/datasets/issues/258/events | https://github.com/huggingface/datasets/issues/258 | 635,859,525 | MDU6SXNzdWU2MzU4NTk1MjU= | 258 | Why is dataset after tokenization far more larger than the orginal one ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 4 | "2020-06-10T01:27:07Z" | "2020-06-10T12:46:34Z" | "2020-06-10T12:46:34Z" | CONTRIBUTOR | null | null | null | I tokenize wiki dataset by `map` and cache the results.
```
def tokenize_tfm(example):
    example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))
    return example
wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']
wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow")
```
and when I see their size
```
ls -l --block-size=M
17460M wikipedia-train.arrow
47511M tokenized_wiki.arrow
```
The tokenized one is over 2x size of original one.
Is there something I did wrong ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/258/timeline | null | completed | false | [
"Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of `.map`",
"Hi ! Thanks for your reply.\r\n\r\nBut since size of `input_ids` < size of `text`, I am wondering why\r\nsize of `input_ids` + `text` > 2x the size of `text` 🤔",
"Hard to tell... This is probably related to the way apache arrow compresses lists of integers, that may be different from the compression of strings.",
"Thanks for your point. 😀, It might be answer.\r\nSince this is hard to know, I'll close this issue.\r\nBut if somebody knows more details, please comment below ~ 😁"
] |
https://api.github.com/repos/huggingface/datasets/issues/257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/257/comments | https://api.github.com/repos/huggingface/datasets/issues/257/events | https://github.com/huggingface/datasets/issues/257 | 635,620,979 | MDU6SXNzdWU2MzU2MjA5Nzk= | 257 | Tokenizer pickling issue fix not landed in `nlp` yet? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | 2 | "2020-06-09T17:12:34Z" | "2020-06-10T21:45:32Z" | "2020-06-09T17:26:53Z" | NONE | null | null | null | Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function:
```
dataset = nlp.load_dataset('cos_e')
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir)
for split in dataset.keys():
    dataset[split].map(lambda x: some_function(x, tokenizer))
```
```
06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1
Traceback (most recent call last):
File "generation/input_to_label_and_rationale.py", line 390, in <module>
main()
File "generation/input_to_label_and_rationale.py", line 263, in main
dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map
cache_file_name = self._get_cache_file_path(function, cache_kwargs)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path
function_bytes = dumps(function)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps
dump(obj, file)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump
Pickler(file).dump(obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump
StockPickler.dump(self, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump
self.save(obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save
self.save_reduce(obj=obj, *rv)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce
save(state)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict
self._batch_setitems(obj.items())
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems
save(v)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save
self.save_reduce(obj=obj, *rv)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce
save(state)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict
self._batch_setitems(obj.items())
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems
save(v)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'Tokenizer' object
```
The fix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package manager. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/257/timeline | null | completed | false | [
"Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`",
"If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6 and the built in `dataclasses` in 3.7+ cause the issue.\r\n\r\nProbably a dumb fix, but it works for me."
] |
https://api.github.com/repos/huggingface/datasets/issues/256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/256/comments | https://api.github.com/repos/huggingface/datasets/issues/256/events | https://github.com/huggingface/datasets/issues/256 | 635,596,295 | MDU6SXNzdWU2MzU1OTYyOTU= | 256 | [Feature request] Add a feature to dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | 5 | "2020-06-09T16:38:12Z" | "2020-06-09T16:51:42Z" | "2020-06-09T16:51:42Z" | NONE | null | null | null | Is there a straightforward way to add a field to the arrow_dataset, prior to performing map? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/256/timeline | null | completed | false | [
"Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)",
"Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to performing map.\r\n\r\nE.g. \r\n```\r\nnew_info = list of length dataset['train']\r\n\r\ndataset['train'] = dataset['train'].map(lambda x: some_function(x, new_info[index of x]))\r\n\r\ndef some_function(x, new_info_x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x\r\n return x\r\n```\r\nI was thinking to instead create a new field in the arrow dataset so that instance x contains all the necessary information when map function is applied (since I don't have index information to pass to map function).",
"This is what I have so far: \r\n\r\n```\r\nimport pyarrow as pa\r\nfrom nlp.arrow_dataset import Dataset\r\n\r\naug_dataset = dataset['train'][:]\r\naug_dataset['new_info'] = new_info\r\n\r\n#reformat as arrow-table\r\nschema = dataset['train'].schema\r\n\r\n# this line doesn't work:\r\nschema.append(pa.field('new_info', pa.int32()))\r\n\r\ntable = pa.Table.from_pydict(\r\n aug_dataset,\r\n schema=schema\r\n)\r\ndataset['train'] = Dataset(table) \r\n```",
"Maybe you can use `with_indices`?\r\n\r\n```python\r\nnew_info = list of length dataset['train']\r\n\r\ndef some_function(indice, x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x[indice]\r\n return x\r\n\r\ndataset['train'] = dataset['train'].map(some_function, with_indices=True)\r\n```",
"Oh great. That should work. I missed that in the documentation- thanks :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/255/comments | https://api.github.com/repos/huggingface/datasets/issues/255/events | https://github.com/huggingface/datasets/pull/255 | 635,300,822 | MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0 | 255 | Add dataset/piaf | {
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RachelKer",
"id": 36986299,
"login": "RachelKer",
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RachelKer"
} | [] | closed | false | null | [] | null | 1 | "2020-06-09T10:16:01Z" | "2020-06-12T08:31:27Z" | "2020-06-12T08:31:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/255.diff",
"html_url": "https://github.com/huggingface/datasets/pull/255",
"merged_at": "2020-06-12T08:31:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/255.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/255"
} | Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/255/timeline | null | null | true | [
"Very nice !"
] |
https://api.github.com/repos/huggingface/datasets/issues/254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/254/comments | https://api.github.com/repos/huggingface/datasets/issues/254/events | https://github.com/huggingface/datasets/issues/254 | 635,057,568 | MDU6SXNzdWU2MzUwNTc1Njg= | 254 | [Feature request] Be able to remove a specific sample of the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | 1 | "2020-06-09T02:22:13Z" | "2020-06-09T08:41:38Z" | "2020-06-09T08:41:38Z" | NONE | null | null | null | As mentioned in #117, it's currently not possible to remove a sample of the dataset.
But it is an important use case: after applying some preprocessing, some samples might be empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so that when iterating over the dataset, we don't iterate over these samples.
I think it should be a feature. What do you think ?
---
Any work-around in the meantime ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/254/timeline | null | completed | false | [
"Oh yes you can now do that with the `dataset.filter()` method that was added in #214 "
] |
https://api.github.com/repos/huggingface/datasets/issues/253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/253/comments | https://api.github.com/repos/huggingface/datasets/issues/253/events | https://github.com/huggingface/datasets/pull/253 | 634,791,939 | MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz | 253 | add flue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | 10 | "2020-06-08T17:11:09Z" | "2023-09-24T09:46:03Z" | "2020-07-16T07:50:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/253",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/253"
} | This PR adds the FLUE dataset as requested in issue #223. @lbourdois made a detailed description in that issue.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/253/timeline | null | null | true | [
"The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ",
"Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortunately, it seems that we've both been working on adding it to the repo...\r\nI was going to open a pull request before I came across yours.\r\nI didn't want to open a duplicate, that's why I'm commenting here (I hope it's not rude).\r\n\r\nWhen I look at your code there is one issue that jump out at me: for both `vsd` and `nsd`, the labels are missing. I believe this is more a data issue, as they were not kept in the cleaned dataframes of #223. I think the *word sense disambiguation* task was a bit misunderstood. \r\n\r\nMaybe you should directly use the data provided by FLUE for these ?",
"Hi @TheophileBlard thanks for pointing this out. I will give a look at it or maybe if you already done it you can update this PR. Also I haven't added yet the parsing datasets, I submited a request to get access to them. If you already have them, you can also add them.",
"Hi,\r\n\r\nAs @TheophileBlard pointed out, the labels for the vsd and nsd stains are missing.\r\n\r\nFor the wsd, it is my mistake, I added the files containing the labels on the drive.\r\nThere is still the join to do between the files that I didn't have time to do. It can be done after importing the two files, however if you wish to have a single dataframe already containing all the information, I could do it but only when I have free time because I have a lot of work at the moment at INSERM with the covid.\r\n\r\nFor the nsd, I've downloaded the files at https://zenodo.org/record/3549806, and if you do the same you'll see that they don't contain any labels.\r\nIn the files, you can see that some words have a WN code. I don't know what it corresponds to. On the FLUE github, they say to use the disambiguate tool (https://github.com/getalp/disambiguate) but I don't understand what he's doing.\r\n\r\n@mariamabarham for the parsing datasets, I have them in my possession. What it does that I haven't shared them is that they are licensed and you have to make a request to their creators. They give them away very easily for research purposes. For another use, you have to ask a commercial licence. All this means that if the data is freely available on your librairy, their licence and their application form are no longer of interest, which is why I did not add them.\r\nAfterwards, maybe the authors will change their policies and decide to make the data freely available through your librairy",
"@mariamabarham @lbourdois, Yea I don't think we can had the parsing datasets without asking the authors permission first. I also hope they'll change their policy.\r\n\r\nRegarding `vsd` and `nsd`, if I understand well the task, the labels are \"word senses\" and the goal is to find the correct word sense for each ambiguous word. For `vsd` there is one ambiguous verb per sentence, and the labels we manually annotated with \"wiktionary senses\". For `nsd`, there are multiple ambiguous word per sentence, and the labels are WordNet Princeton Identifiers (hence the WN tag). This dataset was translated in french & automatically aligned.\r\n\r\nImo, for these 2 datasets, each example should be made of:\r\n- a list of string tokens (the words of the sentence)\r\n- a list of string labels (the word senses or 'O' when the word is not ambiguous.\r\n\r\nIn fact, for `vsd` it could be even simpler, with a single string label (as there is only one ambiguous verb), + some \"idx\" feature to indicate the location of the ambiguous verb.\r\n\r\nUnfortunately, I cannot update your PR as I'm not a maintainer of the project. Maybe we could work together on a fork ? Here's [mine](https://github.com/TheophileBlard/nlp/commits/flue-benchmark).\r\n",
"Hi\r\n\r\nAny news about this PR ?\r\nBecause thinking back FLUE basically offers only two new datasets : those for the Word Sense Disambiguation task (vsd and nsd).\r\n\r\nWouldn't it be more clever to make separate PRs to add the datasets of the other tasks which are multi-lingual (and therefore can be used for other languages) ?\r\n\r\nXNLI being already present on your library, there would only be PAWS-X (datasets and bibtex available here : https://github.com/google-research-datasets/paws/tree/master/pawsx) and the Webis-CLS-10 dataset (dataset : https://zenodo.org/record/3251672#.XvCXN-d8taQ and bibtex : https://zenodo.org/record/3251672/export/hx#.XvCXZ-d8taQ) to do.\r\n\r\nAnd next for the FLUE benchmark, all you would have to do would be to use your own library by making an nlp.load_dataset() (for example nlp.load_dataset('xnli') which is already present in your library) for each of the datasets of the benchmark tasks and to keep only the 'fr' data.\r\n\r\n\r\n\r\nAlso @mariamabarham , did you get any feedback for the parsing task dataset request?\r\nIn case of refusal from the authors, there are other datasets in French to perform this task and in this case, I would open a new topic\r\n",
"Hi @lbourdois ,\r\nPAWS-X is also present in the lib, it's part of `xtreme` dataset, so it can be loaded by `nlp.load_dataset('xtreme', 'PAWS-X.fr')` for the french version.\r\nI think the parsing and the Word Sense Disambiguation task datasets are the only missing in the lib now. \r\nI did not get a feedback yet for the parsing dataset.\r\n",
"By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.",
"> By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.\r\n\r\nYea sorry, missed that! I think @lbourdois has a point, it helps no one to have the same dataset in multiple places. I will try to find some time to adapt the code of my fork and open PRs for `Webis-CLS-10` and `nsd`/`vsd`. Maybe we should group `nsd`/`vsd` together ?",
"Shall we close this PR then ? @mariamabarham @TheophileBlard @lbourdois "
] |
https://api.github.com/repos/huggingface/datasets/issues/252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/252/comments | https://api.github.com/repos/huggingface/datasets/issues/252/events | https://github.com/huggingface/datasets/issues/252 | 634,563,239 | MDU6SXNzdWU2MzQ1NjMyMzk= | 252 | NonMatchingSplitsSizesError error when reading the IMDB dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antmarakis",
"id": 17463361,
"login": "antmarakis",
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antmarakis"
} | [] | closed | false | null | [] | null | 4 | "2020-06-08T12:26:24Z" | "2021-08-27T15:20:58Z" | "2020-06-08T14:01:26Z" | NONE | null | null | null | Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset
save_infos=save_infos,
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
Am I overlooking something? Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/252/timeline | null | completed | false | [
"I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?",
"I updated it, that was it, thanks!",
"Hello, I am facing the same problem... how do you clear the huggingface cache?",
"Hi ! The cache is at ~/.cache/huggingface\r\nYou can just delete this folder if needed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/251/comments | https://api.github.com/repos/huggingface/datasets/issues/251/events | https://github.com/huggingface/datasets/pull/251 | 634,544,977 | MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw | 251 | Better access to all dataset information | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | 0 | "2020-06-08T11:56:50Z" | "2020-06-12T08:13:00Z" | "2020-06-12T08:12:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/251.diff",
"html_url": "https://github.com/huggingface/datasets/pull/251",
"merged_at": "2020-06-12T08:12:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/251.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/251"
} | Moves all the dataset info down one level from `dataset.info.XXX` to `dataset.XXX`
This way it's easier to access `dataset.feature['label']` for instance
Also, add the original split instructions used to create the dataset in `dataset.split`
Ex:
```
from nlp import load_dataset
stsb = load_dataset('glue', name='stsb', split='train')
stsb.split
>>> NamedSplit('train')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/251/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/251/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/250/comments | https://api.github.com/repos/huggingface/datasets/issues/250/events | https://github.com/huggingface/datasets/pull/250 | 634,416,751 | MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4 | 250 | Remove checksum download in c4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-06-08T09:13:00Z" | "2020-08-25T07:04:56Z" | "2020-06-08T09:16:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/250",
"merged_at": "2020-06-08T09:16:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/250"
} | There was a line from the original tfds script that was still there and causing issues when loading the c4 script. This one should fix #233 and allow anyone to load the c4 script to generate the dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/250/timeline | null | null | true | [
"Commenting again in case [previous thread](https://github.com/huggingface/nlp/pull/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, 
url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?"
] |
https://api.github.com/repos/huggingface/datasets/issues/249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/249/comments | https://api.github.com/repos/huggingface/datasets/issues/249/events | https://github.com/huggingface/datasets/issues/249 | 633,393,443 | MDU6SXNzdWU2MzMzOTM0NDM= | 249 | [Dataset created] some critical small issues when I was creating a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-06-07T12:58:54Z" | "2020-06-12T08:28:51Z" | "2020-06-12T08:28:51Z" | CONTRIBUTOR | null | null | null | Hi, I successfully created a dataset and has made a pr #248.
But I have encountered several problems when I was creating it, and those should be easy to fix.
1. `dataset_info.json` not found
This should be fixed by #241; I am eager for it to be merged.
2. Forced to install `apache_beam`
If we need to install it, then it might be better to include it in the package dependencies or specify it in `CONTRIBUTING.md`.
```
Traceback (most recent call last):
File "nlp-cli", line 10, in <module>
from nlp.commands.run_beam import RunBeamCommand
File "/home/yisiang/nlp/src/nlp/commands/run_beam.py", line 6, in <module>
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
3. `cached_dir` is `None`
```
File "/home/yisiang/nlp/src/nlp/datasets/bookscorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookscorpus.py", line 88, in _split_generators
downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 128, in download_custom
downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)
File "/home/yisiang/nlp/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 126, in url_to_downloaded_path
return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))
File "/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py", line 80, in join
a = os.fspath(a)
```
This is because of this line:
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/src/nlp/commands/test.py#L30-L32
I added `--cache_dir="...."` to the `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` command from the doc, and finally I could get past this error.
But it seems to ignore my arg and use `/home/yisiang/.cache/huggingface/datasets/bookscorpus/plain_text/1.0.0` as the cache_dir.
4. There is no `pytest`
So maybe in the doc we should specify a step to install pytest
5. Not enough capacity in my `/tmp`
When running the test for dummy data, I don't know why it asks me for 5.6 GB to download something:
```
def download_and_prepare
...
if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):
raise IOError(
"Not enough disk space. Needed: {} (download: {}, generated: {})".format(
utils.size_str(self.info.size_in_bytes or 0),
utils.size_str(self.info.download_size or 0),
> utils.size_str(self.info.dataset_size or 0),
)
)
E OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB)
```
I added `processed_temp_dir="some/dir"; raw_temp_dir="another/dir"` at line 71, and the test passed:
https://github.com/huggingface/nlp/blob/a67a6c422dece904b65d18af65f0e024e839dbe8/tests/test_dataset_common.py#L70-L72
I suggest we create the tmp dir under `/home/user/tmp` rather than `/tmp`, because, taking our lab server as an example, everyone uses `/tmp`, so it does not have much capacity. Or at least we could improve the error message so the user knows which directory has no space and how much is left. Or we could do both.
6. name of datasets
I was surprised by the dataset name `books_corpus`, and didn't know it comes from `class BooksCorpus(nlp.GeneratorBasedBuilder)`. I changed it to `Bookscorpus` afterwards. I think this point should also be in the doc.
7. More thorough doc on how to create `dataset.py`
I believe there will be.
**Feel free to close this issue** if you think these are solved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/249/timeline | null | completed | false | [
"Thanks for noticing all these :) They should be easy to fix indeed",
"Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon."
] |
https://api.github.com/repos/huggingface/datasets/issues/248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/248/comments | https://api.github.com/repos/huggingface/datasets/issues/248/events | https://github.com/huggingface/datasets/pull/248 | 633,390,427 | MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0 | 248 | add Toronto BooksCorpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 11 | "2020-06-07T12:54:56Z" | "2020-06-12T08:45:03Z" | "2020-06-12T08:45:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/248",
"merged_at": "2020-06-12T08:45:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/248"
} | 1. I know there is a branch `toronto_books_corpus`
- After I downloaded it, I found it is all non-English and only has one row.
- It seems to cite the wrong paper.
- According to papers using it, it is called `BooksCorpus`, not `TorontoBooksCorpus`.
2. It uses a text mirror on Google Drive
- `bookscorpus.py` includes a function `download_file_from_google_drive`; maybe you will want to put it elsewhere.
- The text mirror is found in this [comment on the issue](https://github.com/soskek/bookcorpus/issues/24#issuecomment-556024973), and it is said to have the same statistics as the one in the paper.
- You may want to download it and put it on your gs in case it disappears someday.
3. Copyright ?
The paper says:
> **The BookCorpus Dataset.** In order to train our sentence similarity model we collected a corpus of 11,038 books ***from the web***. These are __**free books written by yet unpublished authors**__. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.
and we have changed the form (it is no longer books), so I don't think it should have those problems. Or we can state that it should be used at your own risk or only for academic use. I think @thomwolf knows more about these things.
This should solve #131 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/248/timeline | null | null | true | [
"Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. For you case you can just do something like \r\n```python\r\nURL = \"https://drive.google.com/uc?export=download&id=16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z\"\r\n...\r\narch_path = dl_manager.download_and_extract(URL)\r\n```\r\nAlso this is is an unofficial host of the dataset, we should probably host it ourselves if we can.\r\n3. Not sure about the copyright here, but I maybe @thomwolf has better insights about it. ",
"Yes it can be removed",
"I just downloaded the file and put it on gs. The public url is\r\nhttps://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\r\n\r\nCould you try to change the url to this one and heck that everything is ok ?",
"In `books.py`\r\n```\r\nURL = \"https://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\"\r\n```\r\n```\r\nPython 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from nlp import load_dataset\r\n>>> book = load_dataset(\"nlp/datasets/bookscorpus/books.py\", cache_dir='~/tmp')\r\nDownloading and preparing dataset bookscorpus/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to /home/yisiang/tmp/bookscorpus/plain_text/1.0.0...\r\nDownloading: 100%|███████████████████████████████████████████████████████████| 1.18G/1.18G [00:39<00:00, 30.0MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/yisiang/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n save_infos=save_infos,\r\n File \"/home/yisiang/nlp/src/nlp/builder.py\", line 420, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/yisiang/nlp/src/nlp/builder.py\", line 460, in _download_and_prepare\r\n verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n File \"/home/yisiang/nlp/src/nlp/utils/info_utils.py\", line 31, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\nnlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n>>>\r\n```\r\n\r\nBTW, I notice the path `huggingface-nlp/datasets/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"",
"> In `books.py`\r\n> \r\n> ```\r\n> URL = \"https://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\"\r\n> ```\r\n> \r\n> ```\r\n> Python 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n> [GCC 7.3.0] :: Anaconda, Inc. on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n> >>> from nlp import load_dataset\r\n> >>> book = load_dataset(\"nlp/datasets/bookscorpus/books.py\", cache_dir='~/tmp')\r\n> Downloading and preparing dataset bookscorpus/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to /home/yisiang/tmp/bookscorpus/plain_text/1.0.0...\r\n> Downloading: 100%|███████████████████████████████████████████████████████████| 1.18G/1.18G [00:39<00:00, 30.0MB/s]\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yisiang/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> save_infos=save_infos,\r\n> File \"/home/yisiang/nlp/src/nlp/builder.py\", line 420, in download_and_prepare\r\n> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n> File \"/home/yisiang/nlp/src/nlp/builder.py\", line 460, in _download_and_prepare\r\n> verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n> File \"/home/yisiang/nlp/src/nlp/utils/info_utils.py\", line 31, in verify_checksums\r\n> raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n> nlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n> >>>\r\n> ```\r\n> \r\n> BTW, I notice the path `huggingface-nlp/datasets/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"\r\n\r\nLet me change the url to match \"bookscorpus\", so that you don't have to change anything. Good catch.\r\n\r\nAbout the error you're getting: you just have to remove the `dataset_infos.json` and regenerate it",
"The new url is https://storage.googleapis.com/huggingface-nlp/datasets/bookscorpus/bookcorpus.tar.bz2",
"Hi, I found I made a mistake. I found the ELECTRA paper refer it as \"BooksCorpus\", but actually it is caleld \"BookCorpus\", according to the original paper. Sorry, I should have checked the original paper .\r\n\r\nCan you do me a favor and change the url path to ` https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2` ?",
"Yep I'm doing it right now. Could you please rename all the references to `bookscorpus` and `BooksCorpus` to `book_corpus` and `BookCorpus` (with the right casing) ?",
"Thank you @lhoestq ,\r\nJust to confirm it fits your naming convention\r\n* make the file path `book_corpus/book_corpus.py` ?\r\n* make `class Bookscorpus(nlp.GeneratorBasedBuilder)` -> `BookCorpus` (which make cache folder name `book_corpus` and user use `load_dataset('book_corpus')`) ?\r\n(Cuz I found \"HellaSwag\" dataset is named \"nlp/datasets/hellaswag\" and `class Hellaswag` )",
"Oh yea you're right about the Hellaswag example. We should keep the \"_\" symbol to replace spaces. As there are no space in BookCorpus, what we should do here is use:\r\n- class name: 'Bookcorpus'\r\n- script name: `bookcorpus/bookcorpus.py`\r\n- use url https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2\r\nAnd therefore the dataset name will be `bookcorpus`\r\n\r\nDon't forget to regenerate the `dataset_infos.json` and we'll be good :D ",
"Awesome thanks :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/247/comments | https://api.github.com/repos/huggingface/datasets/issues/247/events | https://github.com/huggingface/datasets/pull/247 | 632,380,078 | MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2 | 247 | Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 3 | "2020-06-06T11:02:10Z" | "2020-06-08T09:18:16Z" | "2020-06-08T09:18:14Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/247",
"merged_at": "2020-06-08T09:18:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/247"
} | This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.
Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?
**Important**
It does break backward compatibility for these datasets because
1. When loading the complete dataset the order in which the examples are saved is different now
2. When loading only part of a split, the examples themselves might be different.
@patrickvonplaten - the nlp / longformer notebook has to be updated since the examples might now be different | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/247/timeline | null | null | true | [
"That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\nWhat do you think @lhoestq @patrickvonplaten?",
"> That's great!\r\n> \r\n> I think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n> \r\n> Here is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\n> What do you think @lhoestq @patrickvonplaten?\r\n\r\nI think that's a great idea! The test should be a `RUN_SLOW` test, since it takes a considerable amount of time to download the dataset and generate the examples, but I think we should add some kind of hash test for each dataset.",
"Really nice!!"
] |
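A rough sketch of the determinism check discussed in the comments above: fingerprint the generated examples so two runs (or two platforms) can be compared in CI. This is only an illustration under that assumption, not the project's actual test suite:

```python
import hashlib

def fingerprint(examples):
    # `examples` is any iterable of example dicts, e.g. a loaded nlp split.
    h = hashlib.sha256()
    for example in examples:
        h.update(repr(sorted(example.items())).encode("utf-8"))
    return h.hexdigest()
```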
https://api.github.com/repos/huggingface/datasets/issues/246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/246/comments | https://api.github.com/repos/huggingface/datasets/issues/246/events | https://github.com/huggingface/datasets/issues/246 | 632,380,054 | MDU6SXNzdWU2MzIzODAwNTQ= | 246 | What is the best way to cache a dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4",
"events_url": "https://api.github.com/users/Mistobaan/events{/privacy}",
"followers_url": "https://api.github.com/users/Mistobaan/followers",
"following_url": "https://api.github.com/users/Mistobaan/following{/other_user}",
"gists_url": "https://api.github.com/users/Mistobaan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mistobaan",
"id": 112599,
"login": "Mistobaan",
"node_id": "MDQ6VXNlcjExMjU5OQ==",
"organizations_url": "https://api.github.com/users/Mistobaan/orgs",
"received_events_url": "https://api.github.com/users/Mistobaan/received_events",
"repos_url": "https://api.github.com/users/Mistobaan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mistobaan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mistobaan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mistobaan"
} | [] | closed | false | null | [] | null | 2 | "2020-06-06T11:02:07Z" | "2020-07-09T09:15:07Z" | "2020-07-09T09:15:07Z" | NONE | null | null | null | For example if I want to use streamlit with a nlp dataset:
```
@st.cache
def load_data():
return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I just fixed it with a constant for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```
But I was curious to know what is the best way in general
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/246/timeline | null | completed | false | [
"Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> For example if I want to use streamlit with a nlp dataset:\n>\n> @st.cache\n> def load_data():\n> return nlp.load_dataset('squad')\n>\n> This code raises the error \"uncachable object\"\n>\n> Right now I just fixed with a constant for my specific case:\n>\n> @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\n>\n> But I was curious to know what is the best way in general\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/246>, or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHKAKO7CWGX2QY55UXLRVIO3ZANCNFSM4NV333RQ>\n> .\n>\n",
"Closing this one. Feel free to re-open if you have other questions !"
] |
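A short, self-contained version of the workaround from the issue above, kept here for reference. It relies on Streamlit's `st.cache(hash_funcs=...)` API that the author already uses; since 🤗nlp caches datasets on disk anyway, the custom hash only exists to keep Streamlit from failing on the Arrow buffer:

```python
import nlp
import pyarrow
import streamlit as st

@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
def load_data():
    # nlp already memory-maps and caches the dataset on disk,
    # so the Streamlit cache mainly avoids re-running this function.
    return nlp.load_dataset("squad")
```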
https://api.github.com/repos/huggingface/datasets/issues/245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/245/comments | https://api.github.com/repos/huggingface/datasets/issues/245/events | https://github.com/huggingface/datasets/issues/245 | 631,985,108 | MDU6SXNzdWU2MzE5ODUxMDg= | 245 | SST-2 test labels are all -1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | 10 | "2020-06-05T21:41:42Z" | "2021-12-08T00:47:32Z" | "2020-06-06T16:56:41Z" | CONTRIBUTOR | null | null | null | I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1.
```
>>> import nlp
>>> glue = nlp.load_dataset('glue', 'sst2')
>>> glue
{'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 872), 'test': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 1821)}
>>> list(l['label'] for l in glue['test'])
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/245/timeline | null | completed | false | [
"this also happened to me with `nlp.load_dataset('glue', 'mnli')`",
"Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened to me with nlp.load_datasets('glue', 'mnli')\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/245#issuecomment-640083980>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHMVQD2EDX2HTZUXG5DRVJTWRANCNFSM4NVG3AKQ>\n> .\n>\n",
"Thanks @thomwolf!",
"@thomwolf shouldn't this be visible in the .info and/or in the .features?",
"It should be in the datasets card (the README.md and on the hub) in my opinion. What do you think @yjernite?",
"I checked both before I got to looking at issues, so that would be fine as well.\r\n\r\nSome additional thoughts on this: Is there a specific reason why the \"test\" split even has a \"label\" column if it isn't tagged. Shouldn't there just not be any. Seems more transparent",
"I'm a little confused with the data size.\r\n`sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\nFrom the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification. \r\nWhy in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ? \r\n\r\n",
"> I'm a little confused with the data size.\r\n> `sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\n> From the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification.\r\n> Why in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ?\r\n\r\nHave you figured out this problem? AFAIK, the original sst-2 dataset is totally different from the GLUE/sst-2. Do you think so?",
"@yc1999 Sorry, I didn't solve this conflict. In the end, I just use a local data file provided by the previous work I followed(for consistent comparison), not use `datasets` package.\r\n\r\nRelated information: https://github.com/thunlp/OpenAttack/issues/146#issuecomment-766323571",
"@yc1999 I find that the original SST-2 dataset (6,920/872/1,821) can be loaded from https://huggingface.co/datasets/gpt3mix/sst2 or built with SST data and the scripts in https://github.com/prrao87/fine-grained-sentiment/tree/master/data/sst.\r\nThe GLUE/SST-2 dataset (67,349/872/1,821) should be a completely different version.\r\n"
] |
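A small sketch following the explanation above: because the GLUE test labels are hidden (all `-1`), local evaluation has to use the validation split. This is only an illustration of that point, not part of the library:

```python
import nlp

glue = nlp.load_dataset("glue", "sst2")
# Every test example carries the placeholder label -1, so this list is empty.
labeled_test = [ex for ex in glue["test"] if ex["label"] != -1]
# Evaluate locally on the validation split instead.
dev = glue["validation"]
print(len(labeled_test), len(dev))
```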
https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheophileBlard",
"id": 37028092,
"login": "TheophileBlard",
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheophileBlard"
} | [] | closed | false | null | [] | null | 3 | "2020-06-05T19:19:26Z" | "2020-06-11T07:47:26Z" | "2020-06-11T07:47:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/244",
"merged_at": "2020-06-11T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/244"
} | This is a French binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a French "IMDB" dataset, with more reviews.
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | true | [
"great work @TheophileBlard ",
"LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ",
"It was pretty easy actually. Documentation is on point !"
] |
https://api.github.com/repos/huggingface/datasets/issues/243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/243/comments | https://api.github.com/repos/huggingface/datasets/issues/243/events | https://github.com/huggingface/datasets/pull/243 | 631,735,848 | MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy | 243 | Specify utf-8 encoding for GLUE | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 1 | "2020-06-05T16:33:00Z" | "2020-06-17T21:16:06Z" | "2020-06-08T08:42:01Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/243.diff",
"html_url": "https://github.com/huggingface/datasets/pull/243",
"merged_at": "2020-06-08T08:42:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/243.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/243"
} | #242
This makes the GLUE-MNLI dataset readable on my machine; I'm not sure if it's a Windows-only bug. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/243/timeline | null | null | true | [
"Thanks for fixing the encoding :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/242/comments | https://api.github.com/repos/huggingface/datasets/issues/242/events | https://github.com/huggingface/datasets/issues/242 | 631,733,683 | MDU6SXNzdWU2MzE3MzM2ODM= | 242 | UnicodeDecodeError when downloading GLUE-MNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 2 | "2020-06-05T16:30:01Z" | "2020-06-09T16:06:47Z" | "2020-06-08T08:45:03Z" | CONTRIBUTOR | null | null | null | When I run
```python
dataset = nlp.load_dataset('glue', 'mnli')
```
I get an encoding error (could it be because I'm using Windows?):
```python
# Lots of error log lines later...
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\5256cc2368cf84497abef1f1a5f66648522d5854b225162148cb8fc78a5a91cc\glue.py in _generate_examples(self, data_file, split, mrpc_files)
529
--> 530 for n, row in enumerate(reader):
531 if is_cola_non_test:
~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
110 self.fieldnames
--> 111 row = next(self.reader)
112 self.line_num = self.reader.line_num
~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6744: character maps to <undefined>
```
Anyway, this can be solved by specifying UTF-8 decoding when reading the csv file. I am proposing a PR if that's okay. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/242/timeline | null | completed | false | [
"It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure",
"On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts would always set the encoding='utf-8' in calls to open explicitly. \r\nIn the meantime: since Python 3.7 Windows users can set the default encoding for everything including open() to Unicode by setting this environment variable: set PYTHONUTF8=1 (details can be found in [PEP 540](https://www.python.org/dev/peps/pep-0540/))\r\n\r\nFor me this fixed the problem described by the OP."
] |
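An illustrative sketch of the fix proposed in this issue (and merged in #243): open the GLUE files with an explicit `utf-8` encoding so the Windows locale codec (cp1252) is never used. The helper below is an assumption for illustration, not the exact patch:

```python
import csv

def read_tsv_rows(path):
    with open(path, encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            yield row
```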
https://api.github.com/repos/huggingface/datasets/issues/241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/241/comments | https://api.github.com/repos/huggingface/datasets/issues/241/events | https://github.com/huggingface/datasets/pull/241 | 631,703,079 | MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0 | 241 | Fix empty cache dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-06-05T15:45:22Z" | "2020-06-08T08:35:33Z" | "2020-06-08T08:35:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/241",
"merged_at": "2020-06-08T08:35:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/241"
} | If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFoundError. We could end up with an empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is successful.
So I removed this bad line, and I also reordered things a bit to make sure that we always use a temp dir. I also added a warning if we still end up with empty cache dirs in the future.
This should fix #239
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/241/timeline | null | null | true | [
"Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think",
"> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think\r\n\r\nNo it shouldn't force to redownload"
] |
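A rough sketch of the "build in a temp dir, then publish" pattern this PR relies on, so a failed build can never leave a half-written (or empty) cache directory behind. Function names are illustrative assumptions, not the library's internals:

```python
import os
import shutil
import tempfile

def build_cache(final_dir, build_fn):
    tmp_dir = tempfile.mkdtemp(prefix="nlp_build_")
    try:
        build_fn(tmp_dir)              # write everything into the temp dir first
        os.rename(tmp_dir, final_dir)  # publish only on success (same filesystem)
    except Exception:
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise
```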
https://api.github.com/repos/huggingface/datasets/issues/240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/240/comments | https://api.github.com/repos/huggingface/datasets/issues/240/events | https://github.com/huggingface/datasets/issues/240 | 631,434,677 | MDU6SXNzdWU2MzE0MzQ2Nzc= | 240 | Deterministic dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 4 | "2020-06-05T09:03:26Z" | "2020-06-08T09:18:14Z" | "2020-06-08T09:18:14Z" | CONTRIBUTOR | null | null | null | When calling:
```python
import nlp
dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]")
```
the resulting dataset is not deterministic over different google colabs.
After talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line:
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/datasets/trivia_qa/trivia_qa.py#L180
which seems to return an ordering of files that depends on the filesystem:
https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered
I think we should go through all the dataset scripts and make sure to have deterministic behavior.
A simple solution for `glob.glob()` would be to just replace it with `sorted(glob.glob())` to have everything sorted by name.
What do you think @lhoestq? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/240/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/240/timeline | null | completed | false | [
"Yes good point !",
"I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with it",
"I'm pretty sure it would solve the problem too.\r\n\r\nThe only other dataset that is not deterministic right now is `blog_authorship_corpus` (see #215) but this is a problem related to string encodings.",
"I think we should do the same also for `os.list_dir`"
] |
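A minimal before/after sketch of the `sorted()` fix proposed in this issue; the path pattern is only an example:

```python
import glob

# Order depends on the filesystem, so results can differ across machines:
files = glob.glob("data/*.json")

# Sorting by name makes the file enumeration (and hence the dataset) deterministic:
files = sorted(glob.glob("data/*.json"))
```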
https://api.github.com/repos/huggingface/datasets/issues/239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/239/comments | https://api.github.com/repos/huggingface/datasets/issues/239/events | https://github.com/huggingface/datasets/issues/239 | 631,340,440 | MDU6SXNzdWU2MzEzNDA0NDA= | 239 | [Creating new dataset] Not found dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 5 | "2020-06-05T06:15:04Z" | "2020-06-07T13:01:04Z" | "2020-06-07T13:01:04Z" | CONTRIBUTOR | null | null | null | Hi, I am trying to create Toronto Book Corpus. #131
I ran
`~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs`
but this doesn't create `dataset_info.json`, and the script then tries to use it anyway:
```
INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports.
INFO:filelock:Lock 139795325778640 acquired on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.load:Found main folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus
INFO:nlp.load:Found specific version folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9
INFO:nlp.load:Found script file from datasets/bookcorpus/bookcorpus.py to /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/bookcorpus/dataset_infos.json
INFO:nlp.load:Found metadata file for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.json
INFO:filelock:Lock 139795325778640 released on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.builder:Overwrite dataset info from restored data version.
INFO:nlp.info:Loading Dataset info from /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/commands/test.py", line 78, in run
builders.append(builder_cls(name=config.name, data_dir=self._data_dir))
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/dataset_info.json'
```
btw, `ls /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/` shows me that nothing is in the directory.
I have also pushed the script to my fork [bookcorpus.py](https://github.com/richardyy1188/nlp/blob/bookcorpusdev/datasets/bookcorpus/bookcorpus.py).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/239/timeline | null | completed | false | [
"I think you can just `rm` this directory and it should be good :)",
"@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?",
"Yes I have an idea of what's going on. I'm sure I can fix that",
"Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run again `python nlp-cli test datasets/bookcorpus --save_infos --all_configs`.\r\n\r\nI got this, \r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"/home/yisiang/nlp/src/nlp/commands/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\nAnd after I installed it. I got this\r\n```\r\nFile \"/home/yisiang/nlp/src/nlp/datasets/bookcorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookcorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"/home/yisiang/nlp/src/nlp/utils/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThe problem is when I print `self._download_config.cache_dir` using pdb, it is `None`.\r\n\r\nDid I miss something ? Or can you provide a workaround first so I can keep testing my script ?",
"I'll close this issue because I brings more reports in another issue #249 ."
] |
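Following the first reply above, a cleanup sketch that removes the stale, empty cache directory so the builder can regenerate it (the path is the one from the traceback; adapt as needed):

```python
import shutil

shutil.rmtree(
    "/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0",
    ignore_errors=True,
)
```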
https://api.github.com/repos/huggingface/datasets/issues/238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/238/comments | https://api.github.com/repos/huggingface/datasets/issues/238/events | https://github.com/huggingface/datasets/issues/238 | 631,260,143 | MDU6SXNzdWU2MzEyNjAxNDM= | 238 | [Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0. | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | 1 | "2020-06-05T02:14:47Z" | "2020-06-29T17:10:19Z" | "2020-06-29T17:10:19Z" | NONE | null | null | null | When running BERT-Score, I'm meeting this warning :
> Warning: Empty candidate sentence; Setting recall to be 0.
Code :
```
import nlp
metric = nlp.load_metric("bertscore")
scores = metric.compute(["swag", "swags"], ["swags", "totally something different"], lang="en", device=0)
```
---
**What am I doing wrong / How can I hide this warning?** | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/238/timeline | null | completed | false | [
"This print statement comes from the official implementation of bert_score (see [here](https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluations, so I'm not sure we can do anything about it. Maybe you can try to create an issue on their [repo](https://github.com/Tiiiger/bert_score) ?"
] |
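A possible workaround for hiding the warning, given the reply above explaining that it comes from a plain `print` inside the official bert_score code: redirect stdout around the metric call. This is only an illustration, not an official option of the metric:

```python
import contextlib
import io

import nlp

metric = nlp.load_metric("bertscore")
with contextlib.redirect_stdout(io.StringIO()):
    scores = metric.compute(
        ["swag", "swags"],
        ["swags", "totally something different"],
        lang="en",
    )
print(scores)
```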
https://api.github.com/repos/huggingface/datasets/issues/237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/237/comments | https://api.github.com/repos/huggingface/datasets/issues/237/events | https://github.com/huggingface/datasets/issues/237 | 631,199,940 | MDU6SXNzdWU2MzExOTk5NDA= | 237 | Can't download MultiNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 3 | "2020-06-04T23:05:21Z" | "2020-06-06T10:51:34Z" | "2020-06-06T10:51:34Z" | CONTRIBUTOR | null | null | null | When I try to download MultiNLI with
```python
dataset = load_dataset('multi_nli')
```
I get this long error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-13-3b11f6be4cb9> in <module>
1 # Load a dataset and print the first examples in the training set
2 # nli_dataset = nlp.load_dataset('multi_nli')
----> 3 dataset = load_dataset('multi_nli')
4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]')
5 # print(nli_dataset['train'][0])
~\Miniconda3\envs\nlp\lib\site-packages\nlp\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
514
515 # Download and prepare data
--> 516 builder_instance.download_and_prepare(
517 download_config=download_config,
518 download_mode=download_mode,
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
417 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
418 verify_infos = not save_infos and not ignore_verifications
--> 419 self._download_and_prepare(
420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
455 split_dict = SplitDict(dataset_name=self.name)
456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
458 # Checksums verification
459 if verify_infos:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\multi_nli\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\multi_nli.py in _split_generators(self, dl_manager)
99 def _split_generators(self, dl_manager):
100
--> 101 downloaded_dir = dl_manager.download_and_extract(
102 "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip"
103 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in download_and_extract(self, url_or_urls)
214 extracted_path(s): `str`, extracted paths of given URL(s).
215 """
--> 216 return self.extract(self.download(url_or_urls))
217
218 def get_recorded_sizes_checksums(self):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in extract(self, path_or_paths)
194 path_or_paths.
195 """
--> 196 return map_nested(
197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
168 return tuple(mapped)
169 # Singleton
--> 170 return function(data_struct)
171
172
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in <lambda>(path)
195 """
196 return map_nested(
--> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
199
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
231 if is_zipfile(output_path):
232 with ZipFile(output_path, "r") as zip_file:
--> 233 zip_file.extractall(output_path_extracted)
234 zip_file.close()
235 elif tarfile.is_tarfile(output_path):
~\Miniconda3\envs\nlp\lib\zipfile.py in extractall(self, path, members, pwd)
1644
1645 for zipinfo in members:
-> 1646 self._extract_member(zipinfo, path, pwd)
1647
1648 @classmethod
~\Miniconda3\envs\nlp\lib\zipfile.py in _extract_member(self, member, targetpath, pwd)
1698
1699 with self.open(member, pwd=pwd) as source, \
-> 1700 open(targetpath, "wb") as target:
1701 shutil.copyfileobj(source, target)
1702
OSError: [Errno 22] Invalid argument: 'C:\\Users\\Python\\.cache\\huggingface\\datasets\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\multinli_1.0\\Icon\r'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/237/timeline | null | completed | false | [
"You should use `load_dataset('glue', 'mnli')`",
"Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). ",
"Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first."
] |
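The working call from the replies above, spelled out; the split name follows the issue's own `validation_matched` example:

```python
import nlp

mnli = nlp.load_dataset("glue", "mnli")
print(mnli["train"][0])
print(mnli["validation_matched"][0])
```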
https://api.github.com/repos/huggingface/datasets/issues/236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/236/comments | https://api.github.com/repos/huggingface/datasets/issues/236/events | https://github.com/huggingface/datasets/pull/236 | 631,099,875 | MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4 | 236 | CompGuessWhat?! dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [] | closed | false | null | [] | null | 9 | "2020-06-04T19:45:50Z" | "2020-06-11T09:43:42Z" | "2020-06-11T07:45:21Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/236",
"merged_at": "2020-06-11T07:45:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/236"
} | Hello,
Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)).
This pull-request adds the CompGuessWhat?! splits that have been extracted from the original dataset. This is only part of our evaluation framework because there is also an additional split of the dataset that has a completely different set of games. I didn't integrate it yet because I didn't know what would be the best practice in this case. Let me clarify the scenario.
In our paper, we have a main dataset (let's call it `compguesswhat-gameplay`) and a zero-shot dataset (let's call it `compguesswhat-zs-gameplay`). In the current code of the pull-request, I have only integrated `compguesswhat-gameplay`. I was thinking that it would be nice to have the `compguesswhat-zs-gameplay` in the same dataset class by simply specifying some particular option to the `nlp.load_dataset()` factory. For instance:
```python
cgw = nlp.load_dataset("compguesswhat")
cgw_zs = nlp.load_dataset("compguesswhat", zero_shot=True)
```
The other option would be to have a separate dataset class. Any preferences? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/236/timeline | null | null | true | [
"Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-zs-gameplay\").\r\n\r\nMaybe you can refer to this file https://github.com/huggingface/nlp/blob/master/datasets/discofuse/discofuse.py",
"@mariamabarham Thanks for your suggestions. I've followed your advice and integrated the additional dataset using another `DatasetConfig` class. It looks like all tests passed. What do you think?",
"great @aleSuglia. I requested an additional review from @thomwolf @lhoestq and @patrickvonplaten @jplu . You can merge it after an approval from one of them",
"Looks great! Thanks for adding the dummy data :-) ",
"Not sure whether it's the most appropriate place but I'll ask another design question. For Vision+Language dataset, is very common to have visual features associated with each example. At the moment, for instance, I'm only integrating the image identifier so that people can later on lookup the image features during training. Do you recommend this approach or do you think it should be done in a different way?\r\n\r\nThank you for your answer!",
"Hi @aleSuglia your remark on the visual features is a good point.\r\n\r\nWe haven't started to dive deeply into how CV datasets are usually structured (cc @sgugger)\r\n\r\nDo you have a pointer to how visual features are currently loaded and accessed by people using GuessCompWhat? ",
"@thomwolf As far as I know, people using Language+Vision tasks they typically have their reference dataset (either in JSON or JSONL format) and for each example in it they have an identifier that specifies the reference image. Currently, images are represented by either pooling-based visual features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more common and recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. \r\n\r\nFor all these types of features, people use either HD5F or NumPy compressed representations. In my personal projects, I've ditched altogether HD5F because it doesn't have out-of-the-box support for multi-processing (unless you have an ad-hoc installation of it). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it (see [numpy.savez](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)). However, I believe that Apache Arrow would be a really good fit for this type of features. \r\n\r\nLooking forward to hearing your thoughts about it!",
"Awesome work on this one thanks :)",
"@thomwolf I was thinking that I should create an issue regarding the visual features so that we can keep track of it for future work. I think it would be great to have it in NLP and I'll be happy to contribute. Let me know what you think :) "
] |
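A rough sketch of the config-class pattern suggested in the first review comment, so that both the gameplay and zero-shot variants live in one script. The class and config names here are illustrative assumptions, not the merged implementation:

```python
import nlp

class CompguesswhatConfig(nlp.BuilderConfig):
    def __init__(self, zero_shot=False, **kwargs):
        super().__init__(**kwargs)
        self.zero_shot = zero_shot

class Compguesswhat(nlp.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        CompguesswhatConfig(name="compguesswhat-gameplay", version=nlp.Version("0.1.0")),
        CompguesswhatConfig(name="compguesswhat-zs-gameplay", version=nlp.Version("0.1.0"), zero_shot=True),
    ]
    # _info, _split_generators and _generate_examples are omitted here;
    # they would branch on self.config.zero_shot to pick the right data files.
```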
https://api.github.com/repos/huggingface/datasets/issues/235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/235/comments | https://api.github.com/repos/huggingface/datasets/issues/235/events | https://github.com/huggingface/datasets/pull/235 | 630,952,297 | MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0 | 235 | Add experimental datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 6 | "2020-06-04T15:54:56Z" | "2020-06-12T15:38:55Z" | "2020-06-12T15:38:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/235",
"merged_at": "2020-06-12T15:38:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/235"
} | ## Adding an *experimental datasets* folder
After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader`-based testing paradigm, which makes it hard to share my work with the community.
My suggestion would be to add a **datasets\_experimental** folder so we can start making these new datasets public without having to completely re-think testing for every single one. We would allow contributors to submit dataset PRs in this folder, but require an explanation for why the current testing suite doesn't work for them. We can then aggregate the feedback and periodically see what's missing from the current tests.
I have added a **datasets\_experimental** folder to the repository and S3 bucket with two initial datasets: ELI5 (explainlikeimfive) and a Wikipedia Snippets dataset to support indexing (wiki\_snippets)
### ELI5
#### Dataset description
This allows people to download the [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190) dataset, along with two variants based on the r/askscience and r/AskHistorians subreddits. Full Reddit dumps for each month are downloaded from [pushshift](https://files.pushshift.io/reddit/), filtered for submissions and comments from the desired subreddits, then deleted one at a time to save disk space. The resulting data is split into training, validation, and test sets for each of r/explainlikeimfive, r/askscience, and r/AskHistorians, where each item is a question along with all of its high-scoring answers.
#### Issues with the current testing
1. the list of files to be downloaded is not pre-defined, but rather determined by parsing an index web page at run time. This is necessary as the names and compression types of the dump files change from month to month as the pushshift website is maintained. Currently, the dummy folder requires the user to know which files will be downloaded.
2. to save time, the script works directly on the compressed files using the corresponding Python packages rather than first running `download\_and\_extract` and then filtering the extracted files.
### Wikipedia Snippets
#### Dataset description
This script creates a *snippets* version of a source Wikipedia dataset: each article is split into passages of fixed length which can then be indexed using ElasticSearch or a dense indexer. The script currently handles all **wikipedia** and **wiki40b** source datasets, and allows the user to choose the passage length and how much overlap they want across passages. In addition to the passage text, each snippet also has the article title, list of titles of sections covered by the text, and information to map the passage back to the initial dataset at the paragraph and character level.
#### Issues with the current testing
1. The DatasetBuilder needs to call `nlp.load_dataset()`. Currently, testing is not recursive (the test doesn't know where to find the dummy data for the source dataset)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/235/timeline | null | null | true | [
"I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I don't see why we require a new dataset folder\r\n\r\n- I'm not a big fan of adding a boolean flag to the `load_dataset()` function that basically switches between folder names on S3. The user has to know whether a dataset script is experimental or not. User installing nlp with pip won't see that there are folders called `datasets` and `datasets_experimental`\r\n\r\n- If we do this just to bypass the test, I think a good solution could be: For all tests that are too complicated to be currently tested with the testing framework, we can add a class variable called `do_test = False` to the dataset builder class and a default `do_test = True` to the abstract dataset class and skip all tests that have that variable in the dataset test framework similar to what is done to beam datasets: https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/tests/test_dataset_common.py#L79 \r\nWe can also print a warning for all dataset tests having `do_test = False`. This variable would only concern testing and we would not have a problem removing it at a later stage IMO.\r\n\r\n- This way the datascripts are backward compatible and can be used with earlier versions of `nlp` (not that this matters too much atm) \r\n\r\nWhat is your opinion on this @lhoestq @thomwolf ?",
"Very cool to have add those datasets :)\r\nI understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n\r\nI like the idea of the `do_tests=False` class variable. \r\nHowever it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n\r\nIf we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.",
"Yeah I really like the idea of a partial test.\r\n\r\nMy main concern with the class variable is visibility, but having a warning would help with that. Maybe even get the user to agree > \"are you sure you want to go ahead?\"",
"> Very cool to have add those datasets :)\r\n> I understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n> \r\n> I like the idea of the `do_tests=False` class variable.\r\n> However it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n> \r\n> If we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.\r\n\r\n`test_dummy_data=False` sounds good to me!",
"There we go: added a `test_dummy_data` class variable that is `False` by default for the `BeamBasedBuilder` and `True` for everyone else (except the new `explainlikeimfive` and `wiki_snippets`)\r\n\r\nNote that `wiki_snippets` should become obsolete as soon as @lhoestq adds in the `IndexedDataset` class",
"Great! LGTM!"
] |
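The thread in the record above converges on a `test_dummy_data` class variable for builders whose downloads cannot easily be mocked. Below is a rough sketch of a builder using that flag; the URL, field names, and class name are placeholders (this is not the actual ELI5 or wiki_snippets script), and it assumes the flag is picked up by the test suite as described in the discussion.

```python
import json
import nlp

class MyExperimentalDataset(nlp.GeneratorBasedBuilder):
    """Hypothetical builder illustrating the flag discussed above."""

    # Opt out of the dummy-data test because the files to download
    # are only known at run time (as with the pushshift dumps).
    test_dummy_data = False

    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features(
                {"question": nlp.Value("string"), "answer": nlp.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download_and_extract("https://example.com/dump.jsonl")  # placeholder URL
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": path})]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs, one per line of the downloaded JSONL file.
        with open(filepath, encoding="utf-8") as f:
            for i, line in enumerate(f):
                record = json.loads(line)
                yield i, {"question": record["question"], "answer": record["answer"]}
```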
https://api.github.com/repos/huggingface/datasets/issues/234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/234/comments | https://api.github.com/repos/huggingface/datasets/issues/234/events | https://github.com/huggingface/datasets/issues/234 | 630,534,427 | MDU6SXNzdWU2MzA1MzQ0Mjc= | 234 | Huggingface NLP, Uploading custom dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4",
"events_url": "https://api.github.com/users/Nouman97/events{/privacy}",
"followers_url": "https://api.github.com/users/Nouman97/followers",
"following_url": "https://api.github.com/users/Nouman97/following{/other_user}",
"gists_url": "https://api.github.com/users/Nouman97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nouman97",
"id": 42269506,
"login": "Nouman97",
"node_id": "MDQ6VXNlcjQyMjY5NTA2",
"organizations_url": "https://api.github.com/users/Nouman97/orgs",
"received_events_url": "https://api.github.com/users/Nouman97/received_events",
"repos_url": "https://api.github.com/users/Nouman97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nouman97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nouman97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nouman97"
} | [] | closed | false | null | [] | null | 4 | "2020-06-04T05:59:06Z" | "2020-07-06T09:33:26Z" | "2020-07-06T09:33:26Z" | NONE | null | null | null | Hello,
Does anyone know how we can load our custom dataset using the nlp.load command? Let's say that I have a dataset in the same format as squad-v1.1: how am I supposed to load it using huggingface nlp?
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/234/timeline | null | completed | false | [
"What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`",
"To load a dataset you need to have a script that defines the format of the examples, the splits and the way to generate examples. As your dataset has the same format of squad, you can just copy the squad script (see the [datasets](https://github.com/huggingface/nlp/tree/master/datasets) forlder) and just replace the url to load the data to your local or remote path.\r\n\r\nThen what you can do is `load_dataset(<path/to/your/script>)`",
"Also if you want to upload your script, you should be able to use the `nlp-cli`.\r\n\r\nUnfortunately the upload feature was not shipped in the latest version 0.2.0. so right now you can either clone the repo to use it or wait for the next release. We will add some docs to explain how to upload datasets.\r\n",
"Since the latest release 0.2.1 you can use \r\n```bash\r\nnlp-cli upload_dataset <path/to/dataset>\r\n```\r\nwhere `<path/to/dataset>` is a path to a folder containing your script (ex: `squad.py`).\r\nThis will upload the script under your namespace on our S3.\r\n\r\nOptionally the folder can also contain `dataset_infos.json` generated using\r\n```bash\r\nnlp-cli test <path/to/dataset> --all_configs --save_infos\r\n```\r\n\r\nThen you should be able to do\r\n```python\r\nnlp.load_dataset(\"my_namespace/dataset_name\")\r\n```"
] |
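The replies in the record above outline both the local and the uploaded workflow for a squad-style script. A short sketch of the two calls follows; the folder name `./my_squad_like` and the namespace are hypothetical, and it assumes the folder contains a copy of `squad.py` (renamed `my_squad_like.py`) with its URLs pointed at your own train/dev files.

```python
import nlp

# Load from a local script (a copy of squad.py with its data URLs replaced).
dataset = nlp.load_dataset("./my_squad_like/my_squad_like.py")

# After `nlp-cli upload_dataset ./my_squad_like` (available from release 0.2.1),
# the same script can be loaded under your namespace:
# dataset = nlp.load_dataset("my_namespace/my_squad_like")
```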