| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.07B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.39k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 1 value |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,639B |
| updated_at | int64 | 1,587B–1,639B |
| closed_at | int64 | 1,587B–1,639B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/818/comments
https://api.github.com/repos/huggingface/datasets/issues/818/events
https://github.com/huggingface/datasets/pull/818
739,173,861
MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0
818
Fix type hints pickling in python 3.6
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,939,267,000
1,604,999,223,000
1,604,999,222,000
MEMBER
null
Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6. However, cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway. The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, we can't use the pickle saving-functions registry directly. Therefore we just wrap the `save_global` method of the Pickler. This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6. cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/818", "html_url": "https://github.com/huggingface/datasets/pull/818", "diff_url": "https://github.com/huggingface/datasets/pull/818.diff", "patch_url": "https://github.com/huggingface/datasets/pull/818.patch", "merged_at": 1604999221000 }
true
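The body of PR 818 above describes the approach only in prose: because `isinstance` cannot be used on type hints in Python 3.6, the fix wraps the pickler's `save_global` method and reduces parameterized hints to their origin and arguments. Below is a minimal sketch of that idea against the standard library's pure-Python pickler; it is not the actual `datasets`/cloudpickle code, and `HintPickler` and `_rebuild_hint` are made-up names for illustration.

```python
import pickle
import typing
from io import BytesIO

def _rebuild_hint(origin, args):
    # Recreate e.g. typing.List[int] from (typing.List, (int,)).
    return origin[args]

class HintPickler(pickle._Pickler):  # pure-Python pickler, so save_global can be wrapped
    def save_global(self, obj, name=None):
        origin = getattr(obj, "__origin__", None)
        args = getattr(obj, "__args__", None)
        if origin is not None and args is not None:
            # Parameterized type hint: emit a call that rebuilds it at load time.
            self.save_reduce(_rebuild_hint, (origin, args), obj=obj)
            return
        super().save_global(obj, name=name)

buf = BytesIO()
HintPickler(buf).dump(typing.List[int])
print(pickle.loads(buf.getvalue()))
```

On Python 3.7+ parameterized hints already pickle through their own `__reduce__`, so the wrapped branch only matters on 3.6; on a recent interpreter the snippet simply round-trips the hint through the default path.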
https://api.github.com/repos/huggingface/datasets/issues/817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/817/comments
https://api.github.com/repos/huggingface/datasets/issues/817/events
https://github.com/huggingface/datasets/issues/817
739,145,369
MDU6SXNzdWU3MzkxNDUzNjk=
817
Add MRQA dataset
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Done! cf #1117 and #1022" ]
1,604,937,139,000
1,607,096,682,000
1,607,096,681,000
MEMBER
null
## Adding a Dataset - **Name:** MRQA - **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task. - **Paper:** https://arxiv.org/abs/1910.09753 - **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019 - **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/817/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/816/comments
https://api.github.com/repos/huggingface/datasets/issues/816/events
https://github.com/huggingface/datasets/issues/816
739,102,686
MDU6SXNzdWU3MzkxMDI2ODY=
816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order" ]
1,604,934,080,000
1,605,108,050,000
1,605,108,050,000
MEMBER
null
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However, the order of the keys in this dict is not deterministic, which can cause caching issues. To fix that, one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/816/timeline
null
null
null
false
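Issue 816 above proposes sorting the globals keys before dumping a function so the resulting hash no longer depends on dictionary ordering. The sketch below illustrates why that removes the non-determinism, using plain `pickle` and `hashlib` rather than the actual `datasets` `Hasher`/dill machinery; `hash_function_globals` is a made-up name and the example assumes the globals' values are picklable.

```python
import hashlib
import pickle

def hash_function_globals(globs: dict) -> str:
    # Sort by key so the serialized bytes do not depend on dict iteration order.
    payload = pickle.dumps(sorted(globs.items()))
    return hashlib.sha256(payload).hexdigest()

a = []
print(hash_function_globals({"a": a, "len": len}))
print(hash_function_globals({"len": len, "a": a}))  # same digest despite different insertion order
```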
https://api.github.com/repos/huggingface/datasets/issues/815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/815/comments
https://api.github.com/repos/huggingface/datasets/issues/815/events
https://github.com/huggingface/datasets/issues/815
738,842,092
MDU6SXNzdWU3Mzg4NDIwOTI=
815
Is dataset iterative or not?
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate them\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\nnew_dataset = concatenate_datasets([dataset1, dataset2])\r\n```\r\nLet me know if this helps !", "Hi Huggingface/Datasets team,\nI want to use the datasets inside Seq2SeqDataset here\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\nand there I need to return back each line from the datasets and I am not\nsure how to access each line and implement this?\nIt seems it also has get_item attribute? so I was not sure if this is\niterative dataset? or if this is non-iterable datasets?\nthanks.\n\n\n\nOn Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hello !\n> Could you give more details ?\n>\n> If you mean iter through one dataset then yes, Dataset object does\n> implement the __iter__ method so you can use\n>\n> for example in dataset:\n> # do something\n>\n> If you want to iter through several datasets you can first concatenate them\n>\n> from datasets import concatenate_datasets\n> new_dataset = concatenate_datasets([dataset1, dataset2])\n>\n> Let me know if this helps !\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n> .\n>\n", "could you tell me please if datasets also has __getitem__ any idea on how\nto integrate it with Seq2SeqDataset is appreciated thanks\n\nOn Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>\nwrote:\n\n> Hi Huggingface/Datasets team,\n> I want to use the datasets inside Seq2SeqDataset here\n> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\n> and there I need to return back each line from the datasets and I am not\n> sure how to access each line and implement this?\n> It seems it also has get_item attribute? so I was not sure if this is\n> iterative dataset? or if this is non-iterable datasets?\n> thanks.\n>\n>\n>\n> On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\n> wrote:\n>\n>> Hello !\n>> Could you give more details ?\n>>\n>> If you mean iter through one dataset then yes, Dataset object does\n>> implement the __iter__ method so you can use\n>>\n>> for example in dataset:\n>> # do something\n>>\n>> If you want to iter through several datasets you can first concatenate\n>> them\n>>\n>> from datasets import concatenate_datasets\n>> new_dataset = concatenate_datasets([dataset1, dataset2])\n>>\n>> Let me know if this helps !\n>>\n>> —\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n>> .\n>>\n>\n", "`datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.\r\n\r\nWe've not added the integration of the datasets library for the seq2seq utilities yet. 
The current seq2seq utilities are based on text files.\r\n\r\nHowever as soon as you have a `datasets.Dataset` with columns \"tgt_texts\" (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you ?", "Hi\nI am sorry for asking it multiple times but I am not getting the dataloader\ntype, could you confirm if the dataset library returns back an iterable\ntype dataloader or a mapping type one where one has access to __getitem__,\nin the former case, one can iterate with __iter__, and how I can configure\nit to return the data back as the iterative type? I am dealing with\nlarge-scale datasets and I do not want to bring all in memory\nthanks for your help\nBest regards\nRabeeh\n\nOn Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> datasets.Dataset objects implement indeed __getitem__. It returns a\n> dictionary with one field per column.\n>\n> We've not added the integration of the datasets library for the seq2seq\n> utilities yet. The current seq2seq utilities are based on text files.\n>\n> However as soon as you have a datasets.Dataset with columns \"tgt_texts\"\n> (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement\n> your own Seq2SeqDataset class that wraps your dataset object. Does that\n> make sense ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA>\n> .\n>\n", "`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`\r\nFor example you can do\r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\nor\r\n```python\r\nfor i in range(len(dataset)):\r\n example = dataset[i]\r\n # do something\r\n```\r\nWhen you do that, one and only one example is loaded into memory at a time.", "Hi there, \r\nHere is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks \r\n\r\n\r\n```\r\nimport datasets\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.map(lambda example: {\"src_texts\": \"question: {0} context: {1} \".format(\r\n example[\"question\"], example[\"context\"]),\r\n \"tgt_texts\": example[\"answers\"][\"text\"][0]}, remove_columns=dataset1.column_names)\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.map(lambda example: {\"src_texts\": \"imdb: \" + example[\"text\"],\r\n \"tgt_texts\": str(example[\"label\"])}, remove_columns=dataset2.column_names)\r\ntrain_dataset = datasets.concatenate_datasets([dataset1, dataset2])\r\ntrain_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts'])\r\ndataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)\r\nfor id, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n```", "closed since I found this response on the issue https://github.com/huggingface/datasets/issues/469" ]
1,604,913,108,000
1,605,005,403,000
1,605,005,403,000
NONE
null
Hi, I want to use your library for large-scale training, but I am not sure if this is implemented as iterative datasets or not. Could you provide me with an example of how I can use datasets as iterative datasets? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/815/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/814/comments
https://api.github.com/repos/huggingface/datasets/issues/814/events
https://github.com/huggingface/datasets/issues/814
738,500,443
MDU6SXNzdWU3Mzg1MDA0NDM=
814
Joining multiple datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks " ]
1,604,852,370,000
1,604,864,328,000
1,604,864,328,000
NONE
null
Hi, I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally (smaller datasets sampled more often, larger ones less often). Could you tell me how to implement this in PyTorch? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/814/timeline
null
null
null
false
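Issue 814 above asks how to mix several datasets in PyTorch so that each source is sampled equally regardless of its size. One common recipe, sketched below under the assumption that every source is a map-style `torch.utils.data.Dataset`, is to concatenate the sources and weight each example by the inverse of its source's length; `balanced_dataloader` is a made-up helper name, not an API of `datasets` or PyTorch.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def balanced_dataloader(sources, batch_size=32):
    # Each example gets weight 1/len(its source), so every source is drawn from equally often.
    concat = ConcatDataset(sources)
    weights = torch.cat([torch.full((len(d),), 1.0 / len(d)) for d in sources])
    sampler = WeightedRandomSampler(weights, num_samples=len(concat), replacement=True)
    return DataLoader(concat, batch_size=batch_size, sampler=sampler)
```

The PyTorch forum thread the author links when closing the issue discusses strategies for the same problem.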
https://api.github.com/repos/huggingface/datasets/issues/812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/812/comments
https://api.github.com/repos/huggingface/datasets/issues/812/events
https://github.com/huggingface/datasets/issues/812
738,340,217
MDU6SXNzdWU3MzgzNDAyMTc=
812
Too much logging
{ "login": "dspoka", "id": 6183050, "node_id": "MDQ6VXNlcjYxODMwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dspoka", "html_url": "https://github.com/dspoka", "followers_url": "https://api.github.com/users/dspoka/followers", "following_url": "https://api.github.com/users/dspoka/following{/other_user}", "gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}", "starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dspoka/subscriptions", "organizations_url": "https://api.github.com/users/dspoka/orgs", "repos_url": "https://api.github.com/users/dspoka/repos", "events_url": "https://api.github.com/users/dspoka/events{/privacy}", "received_events_url": "https://api.github.com/users/dspoka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that", "+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\n```", "So how to solve this problem?", "In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```", "Thanks. For some reason I have to use the older version. 
Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.", "On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```", "Whoa Thank you! It works!" ]
1,604,793,390,000
1,611,671,494,000
1,605,546,402,000
NONE
null
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/812/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/810/comments
https://api.github.com/repos/huggingface/datasets/issues/810/events
https://github.com/huggingface/datasets/pull/810
737,878,370
MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3
810
Fix seqeval metric
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,679,103,000
1,604,930,669,000
1,604,930,668,000
MEMBER
null
The current seqeval metric returns the following error when computed: ``` ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix) 102 scores = {} 103 for type_name, score in report.items(): --> 104 scores[type_name]["precision"] = score["precision"] 105 scores[type_name]["recall"] = score["recall"] 106 scores[type_name]["f1"] = score["f1-score"] KeyError: 'LOC' ``` This is because the current code basically tries to do: ``` scores = {} scores["LOC"]["precision"] = some_value ``` which does not work in python. This PR fixes that while keeping the previous nested structure of results, with the same keys.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/810", "html_url": "https://github.com/huggingface/datasets/pull/810", "diff_url": "https://github.com/huggingface/datasets/pull/810.diff", "patch_url": "https://github.com/huggingface/datasets/pull/810.patch", "merged_at": 1604930667000 }
true
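The PR body above pins the bug down to assigning into `scores[type_name]["precision"]` before `scores[type_name]` exists. The snippet below reproduces the failure mode and shows one way to keep the nested result structure; the `report` dict mimics the per-type output of seqeval's classification report but is made up, and this is not necessarily the exact code of the merged fix.

```python
report = {"LOC": {"precision": 0.9, "recall": 0.8, "f1-score": 0.85, "support": 10}}

scores = {}
for type_name, score in report.items():
    # scores[type_name]["precision"] = ... would raise KeyError: 'LOC' because
    # scores[type_name] was never created; build the inner dict in one assignment instead.
    scores[type_name] = {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }

print(scores)
```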
https://api.github.com/repos/huggingface/datasets/issues/809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/809/comments
https://api.github.com/repos/huggingface/datasets/issues/809/events
https://github.com/huggingface/datasets/issues/809
737,832,701
MDU6SXNzdWU3Mzc4MzI3MDE=
809
Add Google Taskmaster dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?", "You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/huggingface/datasets/pull/1213" ]
1,604,675,441,000
1,618,924,166,000
1,618,924,166,000
MEMBER
null
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/809/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/808/comments
https://api.github.com/repos/huggingface/datasets/issues/808/events
https://github.com/huggingface/datasets/pull/808
737,638,942
MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0
808
dataset(dgs): initial dataset loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @AmitMY, \r\n\r\nWere you able to figure this out?", "I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as I don't know how to support this PR further" ]
1,604,657,683,000
1,616,480,335,000
1,616,480,335,000
CONTRIBUTOR
null
When trying to create dummy data I get: > Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. I am not sure how to manually create the dummy_data (what exactly it should contain). Also note, this library says: > ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance', when you actually need to `pip install pympi-ling`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/808", "html_url": "https://github.com/huggingface/datasets/pull/808", "diff_url": "https://github.com/huggingface/datasets/pull/808.diff", "patch_url": "https://github.com/huggingface/datasets/pull/808.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/807/comments
https://api.github.com/repos/huggingface/datasets/issues/807/events
https://github.com/huggingface/datasets/issues/807
737,509,954
MDU6SXNzdWU3Mzc1MDk5NTQ=
807
load_dataset for LOCAL CSV files report CONNECTION ERROR
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?", "> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n\r\nI tried another server, it's working now. Thanks a lot.\r\n\r\nAnd I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?", "It seems my network frequently crashed so most time it cannot work.", "\r\n\r\n\r\n> > Hi !\r\n> > The url works on my side.\r\n> > Is the url working in your navigator ?\r\n> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> \r\n> I tried another server, it's working now. Thanks a lot.\r\n> \r\n> And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n\r\nI download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? \r\n\r\nThanks :D", "hello, how did you solve this problems?\r\n\r\n> > > Hi !\r\n> > > The url works on my side.\r\n> > > Is the url working in your navigator ?\r\n> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > \r\n> > \r\n> > I tried another server, it's working now. Thanks a lot.\r\n> > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> \r\n> I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> \r\n> Thanks :D\r\n\r\nhello, I tried this. but it still failed. how do you fix this error?", "> hello, how did you solve this problems?\r\n> \r\n> > > > Hi !\r\n> > > > The url works on my side.\r\n> > > > Is the url working in your navigator ?\r\n> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > \r\n> > > \r\n> > > I tried another server, it's working now. Thanks a lot.\r\n> > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > \r\n> > \r\n> > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > Thanks :D\r\n> \r\n> hello, I tried this. but it still failed. how do you fix this error?\r\n\r\n你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n", "> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. 
Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n好的好的!解决了,感谢感谢!!!", "> \r\n> \r\n> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n我照着做了,然后报错。\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-fd2106a3f053> in <module>\r\n----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 296 local_dataset_infos_path = cached_path(\r\n 297 dataset_infos,\r\n--> 298 download_config=download_config,\r\n 299 )\r\n 300 except (FileNotFoundError, ConnectionError):\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 316 else:\r\n 317 # Something unknown\r\n--> 318 raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\r\n 319 \r\n 320 if download_config.extract_compressed_file and output_path is not None:\r\n\r\nValueError: unable to parse 
C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`", "I also experienced this issue this morning. Looks like something specific to windows.\r\nI'm working on a fix", "I opened a PR @wn1652400018", "> \r\n> \r\n> I opened a PR @wn1652400018\r\n\r\nThanks you!, It works very well." ]
1,604,644,384,000
1,610,328,627,000
1,605,331,834,000
NONE
null
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/807/timeline
null
null
null
false
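The workaround the commenters on issue 807 converge on is to download the `csv.py` loading script once and point `load_dataset` at the local copy, so nothing has to be fetched from `raw.githubusercontent.com` at load time. A sketch of that call is below; the script path is a placeholder, and this reflects the old `datasets` 1.1.x behaviour described in the record, where even local CSV loading resolved its builder script online.

```python
from datasets import load_dataset

# csv.py previously downloaded from the datasets repository (release 1.1.2).
dataset = load_dataset(
    "/path/to/local/csv.py",  # placeholder path to the downloaded loading script
    data_files="./test.csv",
    delimiter=",",
)
```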
https://api.github.com/repos/huggingface/datasets/issues/806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/806/comments
https://api.github.com/repos/huggingface/datasets/issues/806/events
https://github.com/huggingface/datasets/issues/806
737,215,430
MDU6SXNzdWU3MzcyMTU0MzA=
806
Quail dataset urls are out of date
{ "login": "ngdodd", "id": 4889636, "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngdodd", "html_url": "https://github.com/ngdodd", "followers_url": "https://api.github.com/users/ngdodd/followers", "following_url": "https://api.github.com/users/ngdodd/following{/other_user}", "gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions", "organizations_url": "https://api.github.com/users/ngdodd/orgs", "repos_url": "https://api.github.com/users/ngdodd/repos", "events_url": "https://api.github.com/users/ngdodd/events{/privacy}", "received_events_url": "https://api.github.com/users/ngdodd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ", "Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset). ", "Closing since #820 is merged.\r\nThanks again for fixing the urls :)" ]
1,604,605,219,000
1,605,016,971,000
1,605,016,971,000
CONTRIBUTOR
null
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/806/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/804/comments
https://api.github.com/repos/huggingface/datasets/issues/804/events
https://github.com/huggingface/datasets/issues/804
736,858,507
MDU6SXNzdWU3MzY4NTg1MDc=
804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @yjernite is this expected ?", "Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md", "Oh ok, I guess I read the paper too fast 😅, thank you for your answer!" ]
1,604,576,281,000
1,604,931,299,000
1,604,931,298,000
CONTRIBUTOR
null
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ```
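As a concrete illustration of the linking step mentioned in the comments above (filling the KILT TriviaQA examples from the original `trivia_qa` dataset, as described in the kilt_tasks README), here is a hedged sketch. The field names `question_id`, `question`, `id` and `input` are taken from the snippets in this thread and from the README; double-check them against the actual schema before relying on this.

```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# Lookup from TriviaQA question ids to question text (train portion shown;
# the same pattern applies to validation).
triviaqa_questions = dict(
    zip(trivia_qa["train"]["question_id"], trivia_qa["train"]["question"])
)

def fill_input(example):
    # KILT TriviaQA ids are expected to match TriviaQA question ids.
    example["input"] = triviaqa_questions.get(example["id"], example["input"])
    return example

kilt_train_triviaqa = kilt_tasks["train_triviaqa"].map(fill_input)
```

The test answers, as noted in the comments, are intentionally hidden for the leaderboard, so no amount of linking will recover them.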
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/804/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/803/comments
https://api.github.com/repos/huggingface/datasets/issues/803/events
https://github.com/huggingface/datasets/pull/803
736,818,917
MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2
803
fix: typos in tutorial to map KILT and TriviaQA
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,572,920,000
1,604,999,287,000
1,604,999,287,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/803/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/803", "html_url": "https://github.com/huggingface/datasets/pull/803", "diff_url": "https://github.com/huggingface/datasets/pull/803.diff", "patch_url": "https://github.com/huggingface/datasets/pull/803.patch", "merged_at": 1604999287000 }
true
https://api.github.com/repos/huggingface/datasets/issues/802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/802/comments
https://api.github.com/repos/huggingface/datasets/issues/802/events
https://github.com/huggingface/datasets/pull/802
736,296,343
MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0
802
Add XGlue
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc." ]
1,604,510,994,000
1,606,838,308,000
1,606,838,307,000
MEMBER
null
Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for ```python load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ... ``` => therefore one can load a single language test via ```python load_dataset("xglue", "ner", split="test.es") ```
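A short usage sketch of the split scheme described above, looping over a few evaluation languages. The language codes here ('en', 'es', 'de') are only examples taken from this thread; the full list per config is in the dataset card.

```python
from datasets import load_dataset

# English training data plus a handful of per-language validation splits for the NER config.
ner_train = load_dataset("xglue", "ner", split="train")
ner_validation = {
    lang: load_dataset("xglue", "ner", split=f"validation.{lang}")
    for lang in ["en", "es", "de"]  # example languages only, see the dataset card
}
```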
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/802", "html_url": "https://github.com/huggingface/datasets/pull/802", "diff_url": "https://github.com/huggingface/datasets/pull/802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/802.patch", "merged_at": 1606838307000 }
true
https://api.github.com/repos/huggingface/datasets/issues/801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/801/comments
https://api.github.com/repos/huggingface/datasets/issues/801/events
https://github.com/huggingface/datasets/issues/801
735,790,876
MDU6SXNzdWU3MzU3OTA4NzY=
801
How to join two datasets?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi this is also my question. thanks ", "Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n", "Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining datasets: #853 " ]
1,604,461,991,000
1,608,732,178,000
1,608,732,178,000
NONE
null
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the sentence that follows the first one (i.e., it comes from a different article). Thanks!
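Following the suggestion in the comments above (use `.map` and pick items from the other dataset), here is a minimal sketch of pairing rows by index; the datasets and column names are invented for illustration.

```python
from datasets import Dataset

# Two toy datasets with the same number of rows; column names are illustrative only.
first = Dataset.from_dict({"sentence_a": ["The sky is blue.", "Cats purr."]})
second = Dataset.from_dict({"sentence_b": ["Bread needs yeast.", "Rivers flow downhill."]})

def attach_second_sentence(example, idx):
    # Pull the row at the same index from the other dataset; any other index
    # scheme (e.g. a random row from a different article) works the same way.
    example["sentence_b"] = second[idx]["sentence_b"]
    return example

paired = first.map(attach_second_sentence, with_indices=True)
```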
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/801/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/800/comments
https://api.github.com/repos/huggingface/datasets/issues/800/events
https://github.com/huggingface/datasets/pull/800
735,772,775
MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3
800
Update loading_metrics.rst
{ "login": "ayushidalmia", "id": 5400513, "node_id": "MDQ6VXNlcjU0MDA1MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushidalmia", "html_url": "https://github.com/ayushidalmia", "followers_url": "https://api.github.com/users/ayushidalmia/followers", "following_url": "https://api.github.com/users/ayushidalmia/following{/other_user}", "gists_url": "https://api.github.com/users/ayushidalmia/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushidalmia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushidalmia/subscriptions", "organizations_url": "https://api.github.com/users/ayushidalmia/orgs", "repos_url": "https://api.github.com/users/ayushidalmia/repos", "events_url": "https://api.github.com/users/ayushidalmia/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushidalmia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,458,631,000
1,605,108,512,000
1,605,108,512,000
CONTRIBUTOR
null
Minor bug
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/800", "html_url": "https://github.com/huggingface/datasets/pull/800", "diff_url": "https://github.com/huggingface/datasets/pull/800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/800.patch", "merged_at": 1605108512000 }
true
https://api.github.com/repos/huggingface/datasets/issues/799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/799/comments
https://api.github.com/repos/huggingface/datasets/issues/799/events
https://github.com/huggingface/datasets/pull/799
735,551,165
MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx
799
switch amazon reviews class label order
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,428,738,000
1,604,429,054,000
1,604,429,050,000
MEMBER
null
Switches the label order to be more intuitive for amazon reviews, #791.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/799", "html_url": "https://github.com/huggingface/datasets/pull/799", "diff_url": "https://github.com/huggingface/datasets/pull/799.diff", "patch_url": "https://github.com/huggingface/datasets/pull/799.patch", "merged_at": 1604429050000 }
true
https://api.github.com/repos/huggingface/datasets/issues/794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/794/comments
https://api.github.com/repos/huggingface/datasets/issues/794/events
https://github.com/huggingface/datasets/issues/794
735,158,725
MDU6SXNzdWU3MzUxNTg3MjU=
794
self.options cannot be converted to a Python object for pickling
{ "login": "hzqjyyx", "id": 9635713, "node_id": "MDQ6VXNlcjk2MzU3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hzqjyyx", "html_url": "https://github.com/hzqjyyx", "followers_url": "https://api.github.com/users/hzqjyyx/followers", "following_url": "https://api.github.com/users/hzqjyyx/following{/other_user}", "gists_url": "https://api.github.com/users/hzqjyyx/gists{/gist_id}", "starred_url": "https://api.github.com/users/hzqjyyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hzqjyyx/subscriptions", "organizations_url": "https://api.github.com/users/hzqjyyx/orgs", "repos_url": "https://api.github.com/users/hzqjyyx/repos", "events_url": "https://api.github.com/users/hzqjyyx/events{/privacy}", "received_events_url": "https://api.github.com/users/hzqjyyx/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon" ]
1,604,395,654,000
1,605,807,338,000
1,605,807,338,000
NONE
null
Hi, Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object. Here is a code snippet ```python from datasets import load_dataset from pyarrow.csv import ReadOptions load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024)) ``` error is `self.options cannot be converted to a Python object for pickling` Would you mind to take a look? Thanks! ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-28-ab83fec2ded4> in <module> ----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024)) /tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 602 hash=hash, 603 features=features, --> 604 **config_kwargs, 605 ) 606 /tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 162 name, 163 custom_features=features, --> 164 **config_kwargs, 165 ) 166 /tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 281 ) 282 else: --> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix) 284 285 if builder_config.data_files is not None: /tmp/datasets/src/datasets/fingerprint.py in hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): /tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod /tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj) 365 file = StringIO() 366 with _no_cache_fields(obj): --> 367 dump(obj, file) 368 return file.getvalue() 369 /tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file) 337 def dump(obj, file): 338 """pickle an object to a file""" --> 339 Pickler(file, recurse=True).dump(obj) 340 return 341 ~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj) 444 raise PicklingError(msg) 445 else: --> 446 StockPickler.dump(self, obj) 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects 448 return /usr/lib/python3.6/pickle.py in dump(self, obj) 407 if self.proto >= 4: 408 self.framer.start_framing() --> 409 self.save(obj) 410 self.write(STOP) 411 self.framer.end_framing() /usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id) 474 f = self.dispatch.get(t) 475 if f is not None: --> 476 f(self, obj) # Call unbound method with explicit self 477 return 478 ~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /usr/lib/python3.6/pickle.py in save_dict(self, obj) 819 820 self.memoize(obj) --> 821 self._batch_setitems(obj.items()) 822 823 dispatch[dict] = save_dict /usr/lib/python3.6/pickle.py in _batch_setitems(self, items) 850 k, v = tmp[0] 851 save(k) --> 852 save(v) 853 write(SETITEM) 854 # else tmp is empty, and we're done /usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id) 494 reduce = getattr(obj, "__reduce_ex__", None) 495 if reduce is not None: --> 496 rv = reduce(self.proto) 497 else: 498 reduce = getattr(obj, 
"__reduce__", None) ~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__() TypeError: self.options cannot be converted to a Python object for pickling ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/794/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/793/comments
https://api.github.com/repos/huggingface/datasets/issues/793/events
https://github.com/huggingface/datasets/pull/793
735,105,907
MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5
793
[Datasets] fix discofuse links
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,390,625,000
1,604,391,401,000
1,604,391,400,000
MEMBER
null
The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558. The old links are broken, so I changed them and created the new dataset_infos.json. Pinging @thomwolf @lhoestq for notification.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/793/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/793", "html_url": "https://github.com/huggingface/datasets/pull/793", "diff_url": "https://github.com/huggingface/datasets/pull/793.diff", "patch_url": "https://github.com/huggingface/datasets/pull/793.patch", "merged_at": 1604391400000 }
true
https://api.github.com/repos/huggingface/datasets/issues/792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/792/comments
https://api.github.com/repos/huggingface/datasets/issues/792/events
https://github.com/huggingface/datasets/issues/792
734,693,652
MDU6SXNzdWU3MzQ2OTM2NTI=
792
KILT dataset: empty string in triviaqa input field
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))" ]
1,604,338,434,000
1,604,572,499,000
1,604,572,499,000
CONTRIBUTOR
null
# What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1) # How to reproduce ```py In [1]: from datasets import load_dataset In [4]: dataset = load_dataset("kilt_tasks") # everything works fine, removed output for a better readibility Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data. # empty string in triviaqa input field In [36]: dataset['train_triviaqa'][0] Out[36]: {'id': 'dpql_5197', 'input': '', 'meta': {'left_context': '', 'mention': '', 'obj_surface': {'text': []}, 'partial_evidence': {'end_paragraph_id': [], 'meta': [], 'section': [], 'start_paragraph_id': [], 'title': [], 'wikipedia_id': []}, 'right_context': '', 'sub_surface': {'text': []}, 'subj_aliases': {'text': []}, 'template_questions': {'text': []}}, 'output': {'answer': ['five £', '5 £', '£5', 'five £'], 'meta': [], 'provenance': [{'bleu_score': [1.0], 'end_character': [248], 'end_paragraph_id': [30], 'meta': [], 'section': ['Section::::Question of legal tender.\n'], 'start_character': [246], 'start_paragraph_id': [30], 'title': ['Banknotes of the pound sterling'], 'wikipedia_id': ['270680']}]}} In [35]: dataset['train_triviaqa']['input'][:10] Out[35]: ['', '', '', '', '', '', '', '', '', ''] # same with test set In [37]: dataset['test_triviaqa']['input'][:10] Out[37]: ['', '', '', '', '', '', '', '', '', ''] # works fine with natural questions In [34]: dataset['train_nq']['input'][:10] Out[34]: ['how i.met your mother who is the mother', 'who had the most wins in the nfl', 'who played mantis guardians of the galaxy 2', 'what channel is the premier league on in france', "god's not dead a light in the darkness release date", 'who is the current president of un general assembly', 'when do the eclipse supposed to take place', 'what is the name of the sea surrounding dubai', 'who holds the nba record for most points in a career', 'when did the new maze runner movie come out'] ``` Stay safe :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/792/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/791/comments
https://api.github.com/repos/huggingface/datasets/issues/791/events
https://github.com/huggingface/datasets/pull/791
734,656,518
MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5
791
add amazon reviews
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.", "Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification", "> is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification\r\n\r\nHmm that's a good point. I'll submit a quick fix.\r\n\r\n" ]
1,604,335,377,000
1,604,434,506,000
1,604,421,837,000
MEMBER
null
Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer
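To illustrate the label-order convention discussed in the comments above, here is a small sketch; `verified_purchase` is used as an example binary field and may not match the final feature names exactly.

```python
import datasets

# Negative class at index 0, positive class at index 1, per the usual convention.
verified_purchase = datasets.ClassLabel(names=["N", "Y"])
print(verified_purchase.str2int("Y"))  # 1
print(verified_purchase.int2str(0))    # "N"
```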
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/791", "html_url": "https://github.com/huggingface/datasets/pull/791", "diff_url": "https://github.com/huggingface/datasets/pull/791.diff", "patch_url": "https://github.com/huggingface/datasets/pull/791.patch", "merged_at": 1604421837000 }
true
https://api.github.com/repos/huggingface/datasets/issues/790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/790/comments
https://api.github.com/repos/huggingface/datasets/issues/790/events
https://github.com/huggingface/datasets/issues/790
734,470,197
MDU6SXNzdWU3MzQ0NzAxOTc=
790
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
{ "login": "shawwn", "id": 59632, "node_id": "MDQ6VXNlcjU5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shawwn", "html_url": "https://github.com/shawwn", "followers_url": "https://api.github.com/users/shawwn/followers", "following_url": "https://api.github.com/users/shawwn/following{/other_user}", "gists_url": "https://api.github.com/users/shawwn/gists{/gist_id}", "starred_url": "https://api.github.com/users/shawwn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shawwn/subscriptions", "organizations_url": "https://api.github.com/users/shawwn/orgs", "repos_url": "https://api.github.com/users/shawwn/repos", "events_url": "https://api.github.com/users/shawwn/events{/privacy}", "received_events_url": "https://api.github.com/users/shawwn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now", "Closing this one.\r\nFeel free to re-open if you still have issues" ]
1,604,320,595,000
1,605,017,102,000
1,605,017,102,000
NONE
null
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e ".[dev]" ``` ![image](https://user-images.githubusercontent.com/59632/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png) ![image](https://user-images.githubusercontent.com/59632/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png) Python 3.7.7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/790/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/789/comments
https://api.github.com/repos/huggingface/datasets/issues/789/events
https://github.com/huggingface/datasets/pull/789
734,237,839
MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0
789
dataset(ncslgr): add initial loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for adding the tags and description in the README.md so we can merge this cool dataset?", "@lhoestq should be ready for another review :) ", "Awesome thank you !\r\n\r\nIt looks like the PR now includes changes from other PR that were previously merged. \r\nFeel free to create another branch and another PR so that we can have a clean diff.\r\n", "Closing for #958 " ]
1,604,299,810,000
1,606,830,097,000
1,606,830,096,000
CONTRIBUTOR
null
It's a small dataset, but it's heavily annotated: https://www.bu.edu/asllrp/ncslgr.html ![image](https://user-images.githubusercontent.com/5757359/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/789", "html_url": "https://github.com/huggingface/datasets/pull/789", "diff_url": "https://github.com/huggingface/datasets/pull/789.diff", "patch_url": "https://github.com/huggingface/datasets/pull/789.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/788/comments
https://api.github.com/repos/huggingface/datasets/issues/788/events
https://github.com/huggingface/datasets/issues/788
734,136,124
MDU6SXNzdWU3MzQxMzYxMjQ=
788
failed to reuse cache
{ "login": "WangHexie", "id": 31768052, "node_id": "MDQ6VXNlcjMxNzY4MDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangHexie", "html_url": "https://github.com/WangHexie", "followers_url": "https://api.github.com/users/WangHexie/followers", "following_url": "https://api.github.com/users/WangHexie/following{/other_user}", "gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions", "organizations_url": "https://api.github.com/users/WangHexie/orgs", "repos_url": "https://api.github.com/users/WangHexie/repos", "events_url": "https://api.github.com/users/WangHexie/events{/privacy}", "received_events_url": "https://api.github.com/users/WangHexie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,284,956,000
1,604,319,975,000
1,604,319,975,000
NONE
null
I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The message logged to the terminal (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) shows the correct path to the cache directory, but the files are still downloaded again.
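One thing worth checking in this situation is whether every call resolves to the same cache location. Below is a hedged sketch of pinning `cache_dir` explicitly inside the class; the class name and path are invented for illustration.

```python
from datasets import load_dataset

class SummarizationData:
    def __init__(self, cache_dir="/data/hf_datasets_cache"):  # illustrative path
        self.cache_dir = cache_dir

    def load(self):
        # Passing the same cache_dir on every call lets subsequent imports of this
        # class reuse the already-prepared Arrow files instead of re-downloading.
        return load_dataset("cnn_dailymail", "3.0.0", cache_dir=self.cache_dir)
```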
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/788/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/787/comments
https://api.github.com/repos/huggingface/datasets/issues/787/events
https://github.com/huggingface/datasets/pull/787
734,070,162
MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz
787
Adding nli_tr dataset
{ "login": "e-budur", "id": 2246791, "node_id": "MDQ6VXNlcjIyNDY3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-budur", "html_url": "https://github.com/e-budur", "followers_url": "https://api.github.com/users/e-budur/followers", "following_url": "https://api.github.com/users/e-budur/following{/other_user}", "gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-budur/subscriptions", "organizations_url": "https://api.github.com/users/e-budur/orgs", "repos_url": "https://api.github.com/users/e-budur/repos", "events_url": "https://api.github.com/users/e-budur/events{/privacy}", "received_events_url": "https://api.github.com/users/e-budur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. " ]
1,604,267,384,000
1,605,207,962,000
1,605,207,962,000
CONTRIBUTOR
null
Hello, In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented in a full paper at EMNLP 2020 this month. [[arXiv link] ](https://arxiv.org/pdf/2004.14963.pdf) The dataset is a neural machine translation of the SNLI and MultiNLI datasets into Turkish. So, we followed a similar format to the original datasets hosted in the HuggingFace datasets hub. Our dataset is designed to be accessed as follows, following the interface of the GLUE dataset, which provides multiple datasets behind a single interface on the HuggingFace datasets hub. ``` from datasets import load_dataset multinli_tr = load_dataset("nli_tr", "multinli_tr") snli_tr = load_dataset("nli_tr", "snli_tr") ``` Thanks for your help in reviewing our pull request.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/787", "html_url": "https://github.com/huggingface/datasets/pull/787", "diff_url": "https://github.com/huggingface/datasets/pull/787.diff", "patch_url": "https://github.com/huggingface/datasets/pull/787.patch", "merged_at": 1605207962000 }
true
https://api.github.com/repos/huggingface/datasets/issues/785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/785/comments
https://api.github.com/repos/huggingface/datasets/issues/785/events
https://github.com/huggingface/datasets/pull/785
733,719,419
MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1
785
feat(aslg_pc12): add dev and test data splits
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.gr/HealthSign/resources/Publications/sitis_paper_25_10.pdf) 80-20) \r\nWhat do you think ?", "I was not aware of the `train_test_split` method, thanks!\r\nSoe ven though it contributes to reproducibility, no need to do this split then." ]
1,604,150,738,000
1,605,022,170,000
1,605,022,170,000
CONTRIBUTOR
null
For reproducibility's sake, it's best if there are defined dev and test splits. The original paper author did not define splits for the entire dataset, nor for the sample loaded via this library, so I decided to define: - 5/7 for train - 1/7 for dev - 1/7 for test
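For readers who want the proposed proportions anyway, the `train_test_split` route mentioned in the comments above looks roughly like this sketch; the seed value is arbitrary.

```python
from datasets import load_dataset

full = load_dataset("aslg_pc12", split="train")

# Split off 2/7 of the data, then cut that held-out part in half:
# roughly 5/7 train, 1/7 dev, 1/7 test. Fixing the seed makes it reproducible.
first_cut = full.train_test_split(test_size=2 / 7, seed=42)
second_cut = first_cut["test"].train_test_split(test_size=0.5, seed=42)

train_set = first_cut["train"]
dev_set = second_cut["train"]
test_set = second_cut["test"]
```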
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/785/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/785", "html_url": "https://github.com/huggingface/datasets/pull/785", "diff_url": "https://github.com/huggingface/datasets/pull/785.diff", "patch_url": "https://github.com/huggingface/datasets/pull/785.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/784/comments
https://api.github.com/repos/huggingface/datasets/issues/784/events
https://github.com/huggingface/datasets/issues/784
733,700,463
MDU6SXNzdWU3MzM3MDA0NjM=
784
Issue with downloading Wikipedia data for low resource language
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?", "@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n\r\nAlso, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message.\r\n\r\n```\r\nValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', 
'20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\nI am pretty sure that `https://dumps.wikimedia.org/enwiki/20201120/dumpstatus.json` exists.", "Thanks for reporting I created a PR to make the custom config work (language=\"zh\", date=\"20201120\").", "@lhoestq Thanks!" ]
1,604,144,400,000
1,624,584,931,000
1,606,318,933,000
NONE
null
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks!
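Once the custom-config fix mentioned in the comments above is available, loading a dump outside the predefined list would look roughly like the sketch below; both the language code and the date are placeholders and must correspond to a dump that still exists on dumps.wikimedia.org.

```python
import datasets

# Placeholder date: pick one that is actually listed at https://dumps.wikimedia.org/jvwiki/
jv_wiki = datasets.load_dataset(
    "wikipedia",
    language="jv",
    date="20201120",
    beam_runner="DirectRunner",
)
```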
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/784/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/783/comments
https://api.github.com/repos/huggingface/datasets/issues/783/events
https://github.com/huggingface/datasets/pull/783
733,536,254
MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz
783
updated links to v1.3 of quail, fixed the description
{ "login": "annargrs", "id": 1450322, "node_id": "MDQ6VXNlcjE0NTAzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1450322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/annargrs", "html_url": "https://github.com/annargrs", "followers_url": "https://api.github.com/users/annargrs/followers", "following_url": "https://api.github.com/users/annargrs/following{/other_user}", "gists_url": "https://api.github.com/users/annargrs/gists{/gist_id}", "starred_url": "https://api.github.com/users/annargrs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/annargrs/subscriptions", "organizations_url": "https://api.github.com/users/annargrs/orgs", "repos_url": "https://api.github.com/users/annargrs/repos", "events_url": "https://api.github.com/users/annargrs/events{/privacy}", "received_events_url": "https://api.github.com/users/annargrs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "we're using quail 1.3 now thanks.\r\nclosing this one" ]
1,604,094,453,000
1,606,691,119,000
1,606,691,118,000
NONE
null
updated links to v1.3 of quail, fixed the description
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/783/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/783", "html_url": "https://github.com/huggingface/datasets/pull/783", "diff_url": "https://github.com/huggingface/datasets/pull/783.diff", "patch_url": "https://github.com/huggingface/datasets/pull/783.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/782/comments
https://api.github.com/repos/huggingface/datasets/issues/782/events
https://github.com/huggingface/datasets/pull/782
733,316,463
MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0
782
Fix metric deletion when attributes are missing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,074,570,000
1,604,076,473,000
1,604,076,472,000
MEMBER
null
When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted. I just added `if hasattr(...)` to make sure it doesn't crash
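The guard described above amounts to something like the following minimal sketch (attribute names are placeholders, not the library's actual ones):

```python
class Metric:
    def __del__(self):
        # Only delete attributes that actually exist; they may never have been
        # created (or may already be gone) by the time __del__ runs.
        if hasattr(self, "writer"):
            del self.writer
        if hasattr(self, "data"):
            del self.data
```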
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/782/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/782", "html_url": "https://github.com/huggingface/datasets/pull/782", "diff_url": "https://github.com/huggingface/datasets/pull/782.diff", "patch_url": "https://github.com/huggingface/datasets/pull/782.patch", "merged_at": 1604076472000 }
true
https://api.github.com/repos/huggingface/datasets/issues/781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/781/comments
https://api.github.com/repos/huggingface/datasets/issues/781/events
https://github.com/huggingface/datasets/pull/781
733,168,609
MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw
781
Add XNLI train set
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,604,064,113,000
1,604,946,170,000
1,604,946,169,000
MEMBER
null
I added the train set that was built using the translated MNLI. Now you can load the dataset specifying one language: ```python from datasets import load_dataset xnli_en = load_dataset("xnli", "en") print(xnli_en["train"][0]) # {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'} print(xnli_en["test"][0]) # {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."} ``` Cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/781/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/781", "html_url": "https://github.com/huggingface/datasets/pull/781", "diff_url": "https://github.com/huggingface/datasets/pull/781.diff", "patch_url": "https://github.com/huggingface/datasets/pull/781.patch", "merged_at": 1604946169000 }
true
https://api.github.com/repos/huggingface/datasets/issues/780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/780/comments
https://api.github.com/repos/huggingface/datasets/issues/780/events
https://github.com/huggingface/datasets/pull/780
732,738,647
MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0
780
Add ASNQ dataset
{ "login": "mkserge", "id": 2992022, "node_id": "MDQ6VXNlcjI5OTIwMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mkserge", "html_url": "https://github.com/mkserge", "followers_url": "https://api.github.com/users/mkserge/followers", "following_url": "https://api.github.com/users/mkserge/following{/other_user}", "gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}", "starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mkserge/subscriptions", "organizations_url": "https://api.github.com/users/mkserge/orgs", "repos_url": "https://api.github.com/users/mkserge/repos", "events_url": "https://api.github.com/users/mkserge/events{/privacy}", "received_events_url": "https://api.github.com/users/mkserge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)", "> What do the `sentence1` and `sentence2` correspond to exactly ?\r\n\r\n`sentence1` is a question, and `sentence2` is a candidate answer sentence. The labels are [1, 2, 3, 4] defining a relation between the answer sentence and the question. For example, label 4 means that the answer sentence is inside the _long_answer_ passage AND that the _short_answer_ is within the answer sentence. All the other labels are the negatives with different characteristics. (the short_answer, long_answer terminology is borrowed from Google's NQ dataset)\r\n\r\nShould I label them simply as `question` and `answer`? I was going more with what I saw in the examples/run_glue.py script, but I realize now there is no restriction around this.\r\n\r\n> Also maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)\r\n\r\nI am finding it difficult to assign names to each class, but perhaps it's possible. Here's the description of each class from the paper.\r\n\r\n1. Sentences from the document that are in the long answer but do not contain the annotated short answers. It is possible that these sentences might contain the short answer.\r\n2. Sentences from the document that are not in the long answer but contain the short answer string, that is, such occurrence is purely accidental.\r\n3. Sentences from the document that are neither in the long answer nor contain the short answer.\r\n4. Sentences from the document that are in the long answer and do contain the annotated short answers.\r\n\r\nAny ideas?\r\n\r\n", "Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\nI read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\nWe could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?", "> Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\n> I read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\n> We could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?\r\n\r\nOk, sounds good. I went with `sentence` to keep it consistent with `short_answer_in_sentence` and `sentence_in_long_answer`. \r\n\r\nI changed it to a ClassLabel with pos and neg classes and added the two above as features. Let me know if this is not what you had in mind.\r\n\r\n" ]
1,604,014,316,000
1,605,000,383,000
1,605,000,383,000
CONTRIBUTOR
null
This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from the Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118. The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti. _Please note that I have no affiliation with the authors._ Repo: https://github.com/alexa/wqa_tanda
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/780/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/780", "html_url": "https://github.com/huggingface/datasets/pull/780", "diff_url": "https://github.com/huggingface/datasets/pull/780.diff", "patch_url": "https://github.com/huggingface/datasets/pull/780.patch", "merged_at": 1605000383000 }
true
https://api.github.com/repos/huggingface/datasets/issues/778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/778/comments
https://api.github.com/repos/huggingface/datasets/issues/778/events
https://github.com/huggingface/datasets/issues/778
732,449,652
MDU6SXNzdWU3MzI0NDk2NTI=
778
Unexpected behavior when loading cached csv file?
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)", "Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! " ]
1,603,987,570,000
1,604,006,487,000
1,604,006,487,000
CONTRIBUTOR
null
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if the information about which `delimiter` or which `column_names` were used would influence the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/778/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/777/comments
https://api.github.com/repos/huggingface/datasets/issues/777/events
https://github.com/huggingface/datasets/pull/777
732,376,648
MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2
777
Better error message for uninitialized metric
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,982,570,000
1,603,984,706,000
1,603,984,704,000
MEMBER
null
When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message. Fix #729
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/777", "html_url": "https://github.com/huggingface/datasets/pull/777", "diff_url": "https://github.com/huggingface/datasets/pull/777.diff", "patch_url": "https://github.com/huggingface/datasets/pull/777.patch", "merged_at": 1603984703000 }
true
https://api.github.com/repos/huggingface/datasets/issues/776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/776/comments
https://api.github.com/repos/huggingface/datasets/issues/776/events
https://github.com/huggingface/datasets/pull/776
732,343,550
MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx
776
Allow custom split names in text dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!" ]
1,603,980,246,000
1,604,065,605,000
1,604,064,232,000
MEMBER
null
The `text` dataset used to return only splits like train, test and validation. Other splits were ignored. Now any split name is allowed. I did the same for `json`, `pandas` and `csv`. Fix #735
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/776/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/776", "html_url": "https://github.com/huggingface/datasets/pull/776", "diff_url": "https://github.com/huggingface/datasets/pull/776.diff", "patch_url": "https://github.com/huggingface/datasets/pull/776.patch", "merged_at": 1604064232000 }
true
https://api.github.com/repos/huggingface/datasets/issues/775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/775/comments
https://api.github.com/repos/huggingface/datasets/issues/775/events
https://github.com/huggingface/datasets/pull/775
732,287,504
MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3
775
Properly delete metrics when a process is killed
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,975,927,000
1,603,980,080,000
1,603,980,079,000
MEMBER
null
Tests are flaky when using metrics in a distributed setup. This is because of one test that makes sure that using two possibly incompatible metric computations (same experiment id) either works or raises the right error. However, if the error is raised, all the processes of the metric are killed, and the open files (arrow + lock files) are not closed correctly. This causes a PermissionError on Windows when deleting the temporary directory. To fix that, I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/775", "html_url": "https://github.com/huggingface/datasets/pull/775", "diff_url": "https://github.com/huggingface/datasets/pull/775.diff", "patch_url": "https://github.com/huggingface/datasets/pull/775.patch", "merged_at": 1603980079000 }
true
https://api.github.com/repos/huggingface/datasets/issues/774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/774/comments
https://api.github.com/repos/huggingface/datasets/issues/774/events
https://github.com/huggingface/datasets/pull/774
732,265,741
MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0
774
[ROUGE] Add description to Rouge metric
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,973,972,000
1,603,994,150,000
1,603,994,148,000
MEMBER
null
Add information about case sensitivity to ROUGE.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/774/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/774", "html_url": "https://github.com/huggingface/datasets/pull/774", "diff_url": "https://github.com/huggingface/datasets/pull/774.diff", "patch_url": "https://github.com/huggingface/datasets/pull/774.patch", "merged_at": 1603994148000 }
true
https://api.github.com/repos/huggingface/datasets/issues/773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/773/comments
https://api.github.com/repos/huggingface/datasets/issues/773/events
https://github.com/huggingface/datasets/issues/773
731,684,153
MDU6SXNzdWU3MzE2ODQxNTM=
773
Adding CC-100: Monolingual Datasets from Web Crawl Data
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[ { "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @aconneau ;) " ]
1,603,909,241,000
1,607,941,208,000
1,607,941,207,000
MEMBER
null
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large-scale multilingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of Common Crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/773/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/772/comments
https://api.github.com/repos/huggingface/datasets/issues/772/events
https://github.com/huggingface/datasets/pull/772
731,612,430
MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx
772
Fix metric with cache dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,903,393,000
1,603,964,084,000
1,603,964,083,000
MEMBER
null
The cache_dir provided by the user was concatenated twice, which was causing FileNotFound errors. The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter). I removed the double concatenation and fixed the tests. Fix #728
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/772/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/772", "html_url": "https://github.com/huggingface/datasets/pull/772", "diff_url": "https://github.com/huggingface/datasets/pull/772.diff", "patch_url": "https://github.com/huggingface/datasets/pull/772.patch", "merged_at": 1603964082000 }
true
https://api.github.com/repos/huggingface/datasets/issues/770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/770/comments
https://api.github.com/repos/huggingface/datasets/issues/770/events
https://github.com/huggingface/datasets/pull/770
731,445,222
MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1
770
Fix custom builder caching
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,891,944,000
1,603,964,163,000
1,603,964,161,000
MEMBER
null
The cache directory of a dataset didn't take into account additional parameters that the user could specify, such as `features` or any parameter of the builder configuration kwargs (e.g. `encoding` for the `text` dataset). To fix that, the cache directory name now has a suffix that depends on all of them. Fix #730 Fix #750
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/770/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/770", "html_url": "https://github.com/huggingface/datasets/pull/770", "diff_url": "https://github.com/huggingface/datasets/pull/770.diff", "patch_url": "https://github.com/huggingface/datasets/pull/770.patch", "merged_at": 1603964161000 }
true
https://api.github.com/repos/huggingface/datasets/issues/766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/766/comments
https://api.github.com/repos/huggingface/datasets/issues/766/events
https://github.com/huggingface/datasets/issues/766
730,669,596
MDU6SXNzdWU3MzA2Njk1OTY=
766
[GEM] add DART data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Is this a duplicate of #924 ?", "Yup, closing! Haven't been keeping track of the solved issues during the sprint." ]
1,603,820,044,000
1,607,002,638,000
1,607,002,638,000
MEMBER
null
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** the dataset will likely be included in the GEM benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/766/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/765/comments
https://api.github.com/repos/huggingface/datasets/issues/765/events
https://github.com/huggingface/datasets/issues/765
730,668,332
MDU6SXNzdWU3MzA2NjgzMzI=
765
[GEM] Add DART data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,603,819,943,000
1,603,820,061,000
1,603,820,061,000
MEMBER
null
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** It will likely be included in the GEM generation evaluation benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/765/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/764/comments
https://api.github.com/repos/huggingface/datasets/issues/764/events
https://github.com/huggingface/datasets/pull/764
730,617,828
MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2
764
Adding Issue Template for Dataset Requests
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,816,628,000
1,603,819,526,000
1,603,819,525,000
MEMBER
null
adding .github/ISSUE_TEMPLATE/add-dataset.md
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/764/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/764", "html_url": "https://github.com/huggingface/datasets/pull/764", "diff_url": "https://github.com/huggingface/datasets/pull/764.diff", "patch_url": "https://github.com/huggingface/datasets/pull/764.patch", "merged_at": 1603819525000 }
true
https://api.github.com/repos/huggingface/datasets/issues/763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/763/comments
https://api.github.com/repos/huggingface/datasets/issues/763/events
https://github.com/huggingface/datasets/pull/763
730,593,631
MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx
763
Fixed errors in bertscore related to custom baseline
{ "login": "juanjucm", "id": 36761132, "node_id": "MDQ6VXNlcjM2NzYxMTMy", "avatar_url": "https://avatars.githubusercontent.com/u/36761132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juanjucm", "html_url": "https://github.com/juanjucm", "followers_url": "https://api.github.com/users/juanjucm/followers", "following_url": "https://api.github.com/users/juanjucm/following{/other_user}", "gists_url": "https://api.github.com/users/juanjucm/gists{/gist_id}", "starred_url": "https://api.github.com/users/juanjucm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanjucm/subscriptions", "organizations_url": "https://api.github.com/users/juanjucm/orgs", "repos_url": "https://api.github.com/users/juanjucm/repos", "events_url": "https://api.github.com/users/juanjucm/events{/privacy}", "received_events_url": "https://api.github.com/users/juanjucm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,814,915,000
1,603,907,965,000
1,603,907,965,000
CONTRIBUTOR
null
[bertscore version 0.3.6](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added an extra argument `baseline_path` to the BERTScorer class, as well as an extra boolean parameter `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`. This PR fixes those matching errors in the bertscore metric implementation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/763/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/763", "html_url": "https://github.com/huggingface/datasets/pull/763", "diff_url": "https://github.com/huggingface/datasets/pull/763.diff", "patch_url": "https://github.com/huggingface/datasets/pull/763.patch", "merged_at": 1603907965000 }
true
https://api.github.com/repos/huggingface/datasets/issues/762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/762/comments
https://api.github.com/repos/huggingface/datasets/issues/762/events
https://github.com/huggingface/datasets/issues/762
730,586,972
MDU6SXNzdWU3MzA1ODY5NzI=
762
[GEM] Add Czech Restaurant data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,603,814,447,000
1,607,002,664,000
1,607,002,664,000
MEMBER
null
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf - Data: https://github.com/UFAL-DSG/cs_restaurant_dataset - The dataset will likely be part of the GEM benchmark
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/762/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/760/comments
https://api.github.com/repos/huggingface/datasets/issues/760/events
https://github.com/huggingface/datasets/issues/760
729,637,917
MDU6SXNzdWU3Mjk2Mzc5MTc=
760
Add meta-data to the HANS dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false } ]
null
[]
1,603,724,213,000
1,607,002,714,000
1,607,002,714,000
MEMBER
null
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/760/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/759/comments
https://api.github.com/repos/huggingface/datasets/issues/759/events
https://github.com/huggingface/datasets/issues/759
729,046,916
MDU6SXNzdWU3MjkwNDY5MTY=
759
(Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
{ "login": "AI678", "id": 63541083, "node_id": "MDQ6VXNlcjYzNTQxMDgz", "avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI678", "html_url": "https://github.com/AI678", "followers_url": "https://api.github.com/users/AI678/followers", "following_url": "https://api.github.com/users/AI678/following{/other_user}", "gists_url": "https://api.github.com/users/AI678/gists{/gist_id}", "starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AI678/subscriptions", "organizations_url": "https://api.github.com/users/AI678/orgs", "repos_url": "https://api.github.com/users/AI678/repos", "events_url": "https://api.github.com/users/AI678/events{/privacy}", "received_events_url": "https://api.github.com/users/AI678/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Are you running the script on a machine with an internet connection ?", "Yes , I can browse the url through Google Chrome.", "Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\n\r\nIf it returns 200, could you try again to load the dataset ?", "Thank you very much for your response.\r\nWhen I run \r\n``` \r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\nIt returns 200.\r\n\r\nAnd I try again to load the dataset. I got the following errors again. \r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 475, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"C:\\Users\\666666\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\cnn_dailymail\\0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\\cnn_dailymail.py\", line 253, in _split_generators\r\n dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 175, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 224, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\n\r\nConnection error happened but the url was different.\r\n\r\nI add the following code.\r\n```\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nThis didn't return 200\r\nIt returned like this:\r\n\r\nTraceback (most recent call last):\r\n File 
\"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 159, in _new_conn\r\n conn = connection.create_connection(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 84, in create_connection\r\n raise err\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [WinError 10060] \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 670, in urlopen\r\n httplib_response = self._make_request(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 171, in _new_conn\r\n raise NewConnectionError(\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001F6060618E0>: Failed to establish a new connection: [WinError 10060] ", "Is google drive blocked on your network ?\r\nFor me \r\n```python\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nreturns 200", "I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually.", "Could you try to update `requests` maybe ?\r\nIt works with 2.23.0 on my side", "My ```requests``` is 2.24.0 . It still can't return 200.", "Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and the connection error always happens .\r\n", "The head request should definitely work, not sure what's going on on your side.\r\nIf you find a way to make it work, please post it here since other users might encounter the same issue.\r\n\r\nIf you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk(\"path/to/dataset\")`.\r\nThen you can download the directory on your machine and do\r\n```python\r\nfrom datasets import load_from_disk\r\ndataset = load_from_disk(\"path/to/local/dataset\")\r\n```", "Hi\r\nI want to know if this problem has been solved because I encountered a similar issue. Thanks.\r\n`train_data = datasets.load_dataset(\"xsum\", `split=\"train\")`\r\n`ConnectionError:` Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py`", "Hi @smile0925 ! Do you have an internet connection ? 
Are you using some kind of proxy that may block the access to this file ?\r\n\r\nOtherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version\r\n```\r\npip install --upgrade datasets\r\n```\r\nLet me know if that helps.", "Hi @lhoestq \r\nOh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\n", "> Hi @lhoestq\r\n> Oh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n> ![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\nI have the same problem, have you solved it? Many thanks", "Hi @ZhengxiangShi \r\nYou can first try whether your network can access these files. I need to use VPN to access these files, so I download the files that cannot be accessed to the local in advance, and then use them in the code. Like this,\r\n`train_data = datasets.load_dataset(\"xsum.py\", split=\"train\")`" ]
1,603,640,097,000
1,628,100,609,000
1,628,100,609,000
NONE
null
Hey, I want to load the cnn-dailymail dataset for fine-tuning. I wrote the code like this: from datasets import load_dataset test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train") And I got the following errors. Traceback (most recent call last): File "test.py", line 7, in test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test") File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset module_path, hash = prepare_module( File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path output_path = get_from_cache( File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py How can I fix this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/759/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/758/comments
https://api.github.com/repos/huggingface/datasets/issues/758/events
https://github.com/huggingface/datasets/issues/758
728,638,559
MDU6SXNzdWU3Mjg2Mzg1NTk=
758
Process 0 very slow when using num_procs with map to tokenizer
{ "login": "ksjae", "id": 17930170, "node_id": "MDQ6VXNlcjE3OTMwMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksjae", "html_url": "https://github.com/ksjae", "followers_url": "https://api.github.com/users/ksjae/followers", "following_url": "https://api.github.com/users/ksjae/following{/other_user}", "gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksjae/subscriptions", "organizations_url": "https://api.github.com/users/ksjae/orgs", "repos_url": "https://api.github.com/users/ksjae/repos", "events_url": "https://api.github.com/users/ksjae/events{/privacy}", "received_events_url": "https://api.github.com/users/ksjae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocessing\r\nprint(multiprocessing.cpu_count())\r\n```\r\nWhich tokenizer are you using ?", "Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.\r\nI have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets(regardless of files), I don't think distribution is the problems.\r\n\r\nI can use up to 16 cores.", "Ok weird, I don't manage to reproduce this issue on my side.\r\nDoes it happen even with `num_proc=2` for example ?\r\nAlso could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ?", "Yes, I can confirm it also happens with ```num_proc=2```.\r\n```\r\ntokenizers 0.9.2\r\ndatasets 1.1.2\r\nmultiprocess 0.70.10\r\n```\r\n```\r\nLinux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\r\n```", "I can't reproduce on my side unfortunately with the same versions.\r\n\r\nDo you have issues when doing multiprocessing with python ?\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom multiprocess import Pool, RLock\r\n\r\ndef process_data(shard):\r\n # implement\r\n\r\nnum_proc = 8\r\nshards = [] # implement, this must be a list of size num_proc\r\n\r\nwith Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n results = [pool.apply_async(process_data, shard=shard) for shard in shards]\r\n transformed_shards = [r.get() for r in results]\r\n```", "Nah, I'll just wait a few hours. Thank you for helping, though." ]
1,603,507,220,000
1,603,857,586,000
1,603,857,585,000
NONE
null
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png"> The code I am using is ``` dataset = load_dataset("text", data_files=[file_path], split='train') dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), num_proc=8) dataset.set_format(type='torch', columns=['input_ids']) dataset.save_to_disk(file_path+'.arrow') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/758/timeline
null
null
null
false
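A hedged variant of the tokenization call from issue #758 above, using `batched=True` so each worker tokenizes whole batches instead of single lines; this is only a sketch of the API (the tokenizer, file path and sequence length are placeholders), not the resolution of the reported slowdown.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder tokenizer

dataset = load_dataset("text", data_files=["corpus.txt"], split="train")  # placeholder file
dataset = dataset.map(
    lambda batch: tokenizer(
        batch["text"], add_special_tokens=True, truncation=True, max_length=512
    ),
    batched=True,  # tokenize many lines per call instead of one at a time
    num_proc=8,    # same multiprocessing setting as in the issue
)
dataset.set_format(type="torch", columns=["input_ids"])
```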
https://api.github.com/repos/huggingface/datasets/issues/757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/757/comments
https://api.github.com/repos/huggingface/datasets/issues/757/events
https://github.com/huggingface/datasets/issues/757
728,241,494
MDU6SXNzdWU3MjgyNDE0OTQ=
757
CUDA out of memory
{ "login": "li1117heex", "id": 47059217, "node_id": "MDQ6VXNlcjQ3MDU5MjE3", "avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/li1117heex", "html_url": "https://github.com/li1117heex", "followers_url": "https://api.github.com/users/li1117heex/followers", "following_url": "https://api.github.com/users/li1117heex/following{/other_user}", "gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}", "starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions", "organizations_url": "https://api.github.com/users/li1117heex/orgs", "repos_url": "https://api.github.com/users/li1117heex/repos", "events_url": "https://api.github.com/users/li1117heex/events{/privacy}", "received_events_url": "https://api.github.com/users/li1117heex/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Could you provide more details ? What's the code you ran ?", "```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",split='train[:1000]').shuffle()\r\ndataset = dataset.map(tokenize, batched=True, batch_size=512)\r\n\r\n# dataset = LineByLineTextDataset(\r\n# tokenizer=tokenizer,\r\n# file_path=\"./wiki1000.txt\",\r\n# block_size=128\r\n# )\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\nconfig=FunnelConfig(\r\n return_dict=True\r\n)\r\n\r\nmodel= FunnelForMaskedLM(config=config)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=16,\r\n save_steps=10000,\r\n logging_dir='./ptlogs'\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train()\r\n```", "`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)\r\nException raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):`\r\n\r\npart of error output", "from funnel model to bert model : error still happened\r\n\r\nfrom your dataset to LineByLineTextDataset : error disapeared", "notice i just loaded 1000 rows of data", "the error happens when executing loss.backward()", "Since you're using a data collator you don't need to tokenizer the dataset using `map`. Could you try not to use `map` and only the data collator instead ? The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.\r\n\r\nAlso cc @sgugger ", "Closing this one.\r\nFeel free to re-open if you have other questions about this issue" ]
1,603,461,420,000
1,608,732,389,000
1,608,732,389,000
NONE
null
With your dataset, CUDA runs out of memory as soon as the trainer begins; however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/757/timeline
null
null
null
false
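The comments on issue #757 above suggest letting `DataCollatorForLanguageModeling` pad each batch to its longest sequence rather than padding every example to `max_length` inside `map`. A minimal sketch of that tokenization step, reusing the dataset slice and checkpoint from the issue's code:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")

def tokenize(batch):
    # Truncate only; the collator pads each batch dynamically.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = load_dataset("bookcorpus", split="train[:1000]").shuffle()
dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```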
https://api.github.com/repos/huggingface/datasets/issues/756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/756/comments
https://api.github.com/repos/huggingface/datasets/issues/756/events
https://github.com/huggingface/datasets/pull/756
728,211,373
MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3
756
Start community-provided dataset docs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh, really cool @sshleifer!" ]
1,603,459,061,000
1,603,716,920,000
1,603,716,919,000
MEMBER
null
Continuation of #736 with clean fork. #### Old description This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. In slack @thomwolf called it a user-namespace dataset, but the docs call it community dataset. I think the first naming is clearer, but I didn't address that here. I didn't add metadata, will try that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/756", "html_url": "https://github.com/huggingface/datasets/pull/756", "diff_url": "https://github.com/huggingface/datasets/pull/756.diff", "patch_url": "https://github.com/huggingface/datasets/pull/756.patch", "merged_at": 1603716919000 }
true
https://api.github.com/repos/huggingface/datasets/issues/755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/755/comments
https://api.github.com/repos/huggingface/datasets/issues/755/events
https://github.com/huggingface/datasets/pull/755
728,203,821
MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2
755
Start community-provided dataset docs V2
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,458,450,000
1,603,458,937,000
1,603,458,937,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/755/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/755", "html_url": "https://github.com/huggingface/datasets/pull/755", "diff_url": "https://github.com/huggingface/datasets/pull/755.diff", "patch_url": "https://github.com/huggingface/datasets/pull/755.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/754/comments
https://api.github.com/repos/huggingface/datasets/issues/754/events
https://github.com/huggingface/datasets/pull/754
727,863,105
MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2
754
Use full released xsum dataset
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "organizations_url": "https://api.github.com/users/jbragg/orgs", "repos_url": "https://api.github.com/users/jbragg/repos", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "received_events_url": "https://api.github.com/users/jbragg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?", "Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```\r\ndatasets-cli dummy_data ./datasets/xum\r\n```\r\nto print the expected file names", "Ok @lhoestq looks like I got the tests to pass :)" ]
1,603,423,789,000
1,609,470,716,000
1,603,717,018,000
CONTRIBUTOR
null
#672 Fix xsum to expand coverage and include IDs Code based on parser from older version of `datasets/xsum/xsum.py` @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/754/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/754", "html_url": "https://github.com/huggingface/datasets/pull/754", "diff_url": "https://github.com/huggingface/datasets/pull/754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/754.patch", "merged_at": 1603717018000 }
true
https://api.github.com/repos/huggingface/datasets/issues/753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/753/comments
https://api.github.com/repos/huggingface/datasets/issues/753/events
https://github.com/huggingface/datasets/pull/753
727,434,935
MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0
753
Fix doc links to viewer
{ "login": "Pierrci", "id": 5020707, "node_id": "MDQ6VXNlcjUwMjA3MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pierrci", "html_url": "https://github.com/Pierrci", "followers_url": "https://api.github.com/users/Pierrci/followers", "following_url": "https://api.github.com/users/Pierrci/following{/other_user}", "gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions", "organizations_url": "https://api.github.com/users/Pierrci/orgs", "repos_url": "https://api.github.com/users/Pierrci/repos", "events_url": "https://api.github.com/users/Pierrci/events{/privacy}", "received_events_url": "https://api.github.com/users/Pierrci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,376,416,000
1,603,442,531,000
1,603,442,531,000
MEMBER
null
It seems #733 forgot some links in the doc :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/753/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/753", "html_url": "https://github.com/huggingface/datasets/pull/753", "diff_url": "https://github.com/huggingface/datasets/pull/753.diff", "patch_url": "https://github.com/huggingface/datasets/pull/753.patch", "merged_at": 1603442531000 }
true
https://api.github.com/repos/huggingface/datasets/issues/752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/752/comments
https://api.github.com/repos/huggingface/datasets/issues/752/events
https://github.com/huggingface/datasets/issues/752
726,917,801
MDU6SXNzdWU3MjY5MTc4MDE=
752
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
{ "login": "ogabrielluiz", "id": 24829397, "node_id": "MDQ6VXNlcjI0ODI5Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ogabrielluiz", "html_url": "https://github.com/ogabrielluiz", "followers_url": "https://api.github.com/users/ogabrielluiz/followers", "following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}", "gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions", "organizations_url": "https://api.github.com/users/ogabrielluiz/orgs", "repos_url": "https://api.github.com/users/ogabrielluiz/repos", "events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}", "received_events_url": "https://api.github.com/users/ogabrielluiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the report, can reproduce. Will fix", "Fixed now @ogabrielluiz " ]
1,603,320,983,000
1,603,383,582,000
1,603,383,582,000
NONE
null
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this. Searching for a metric on https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page. Thanks for all the great work!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/752/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/751/comments
https://api.github.com/repos/huggingface/datasets/issues/751/events
https://github.com/huggingface/datasets/issues/751
726,820,191
MDU6SXNzdWU3MjY4MjAxOTE=
751
Error loading ms_marco v2.1 using load_dataset()
{ "login": "JainSahit", "id": 30478979, "node_id": "MDQ6VXNlcjMwNDc4OTc5", "avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JainSahit", "html_url": "https://github.com/JainSahit", "followers_url": "https://api.github.com/users/JainSahit/followers", "following_url": "https://api.github.com/users/JainSahit/following{/other_user}", "gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}", "starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions", "organizations_url": "https://api.github.com/users/JainSahit/orgs", "repos_url": "https://api.github.com/users/JainSahit/repos", "events_url": "https://api.github.com/users/JainSahit/events{/privacy}", "received_events_url": "https://api.github.com/users/JainSahit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?", "I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixes the problem", "Yes, it indeed was a cache issue!\r\nThanks for reaching out!!" ]
1,603,310,083,000
1,604,539,917,000
1,604,539,917,000
NONE
null
Code: `dataset = load_dataset('ms_marco', 'v2.1')` Error: ``` `--------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) <ipython-input-16-34378c057212> in <module>() 9 10 # Downloading and loading a dataset ---> 11 dataset = load_dataset('ms_marco', 'v2.1') 10 frames /usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx) 353 """ 354 try: --> 355 obj, end = self.scan_once(s, idx) 356 except StopIteration as err: 357 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660) ` ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/751/timeline
null
null
null
false
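The resolution of issue #751 above was to clear the cache and download the dataset again; a hedged sketch of forcing a fresh download (depending on the `datasets` version, the argument may need to be the `GenerateMode`/`DownloadMode` enum rather than this plain string):

```python
from datasets import load_dataset

# Ignore any (possibly corrupted) cached files and download the data again.
dataset = load_dataset("ms_marco", "v2.1", download_mode="force_redownload")
```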
https://api.github.com/repos/huggingface/datasets/issues/750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/750/comments
https://api.github.com/repos/huggingface/datasets/issues/750/events
https://github.com/huggingface/datasets/issues/750
726,589,446
MDU6SXNzdWU3MjY1ODk0NDY=
750
load_dataset doesn't include `features` in its hash
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,293,401,000
1,603,964,161,000
1,603,964,161,000
MEMBER
null
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored. Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of: ``` dataset = load_dataset("glue", "mnli") features = dataset["train"].features features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order dataset = load_dataset("glue", "mnli", features=features) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/750/timeline
null
null
null
false
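Until the `features` argument is part of the hash, one possible workaround for the MNLI example in issue #750 above is to remap the labels after loading; this is only a sketch (not the fix requested in the issue) and relies on `ClassLabel.int2str`/`str2int` plus the `features` argument of `map`:

```python
from datasets import load_dataset, ClassLabel, Features

dataset = load_dataset("glue", "mnli", split="train")

old_label = dataset.features["label"]
new_label = ClassLabel(names=["entailment", "contradiction", "neutral"])  # new label order
new_features = Features({**dataset.features, "label": new_label})

def remap(example):
    # Convert via the label *name* so the integer ids follow the new ordering.
    example["label"] = new_label.str2int(old_label.int2str(example["label"]))
    return example

dataset = dataset.map(remap, features=new_features)
```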
https://api.github.com/repos/huggingface/datasets/issues/749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/749/comments
https://api.github.com/repos/huggingface/datasets/issues/749/events
https://github.com/huggingface/datasets/issues/749
726,366,062
MDU6SXNzdWU3MjYzNjYwNjI=
749
[XGLUE] Adding new dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
null
[ "Amazing! ", "Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n![Screenshot from 2020-11-04 15-02-17](https://user-images.githubusercontent.com/23423619/98120893-d7499a80-1eae-11eb-9d0b-57dfe5d4ee68.png)\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ", "In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.", "I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n", "I agree with Yacine on this!", "Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https://github.com/huggingface/datasets/pull/802", "IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.\r\nSorry for late response on this one", "@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/commonsenseqa with their train-sanity or dev-sanity splits", "Yes sure ! Could you open a separate issue for that ?", "Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the \"normal\" glue one)", "Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. 
\r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere", "Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. ", "Closing since XGLUE has been added in #802 , thanks patrick :) " ]
1,603,277,496,000
1,609,927,376,000
1,609,927,375,000
MEMBER
null
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf). I'm planning on adding the dataset to the library myself in a couple of weeks. Also tagging @JetRunner @qiweizhen in case I need some guidance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/749/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/748/comments
https://api.github.com/repos/huggingface/datasets/issues/748/events
https://github.com/huggingface/datasets/pull/748
726,196,589
MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3
748
New version of CompGuessWhat?! with refined annotations
{ "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "repos_url": "https://api.github.com/users/aleSuglia/repos", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "No worries. Always happy to help and thanks for your support in fixing the issue :)" ]
1,603,263,341,000
1,603,270,362,000
1,603,269,979,000
CONTRIBUTOR
null
This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/748/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/748", "html_url": "https://github.com/huggingface/datasets/pull/748", "diff_url": "https://github.com/huggingface/datasets/pull/748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/748.patch", "merged_at": 1603269979000 }
true
https://api.github.com/repos/huggingface/datasets/issues/747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/747/comments
https://api.github.com/repos/huggingface/datasets/issues/747/events
https://github.com/huggingface/datasets/pull/747
725,884,704
MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4
747
Add Quail question answering dataset
{ "login": "sai-prasanna", "id": 3595526, "node_id": "MDQ6VXNlcjM1OTU1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sai-prasanna", "html_url": "https://github.com/sai-prasanna", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,222,394,000
1,603,269,315,000
1,603,269,315,000
CONTRIBUTOR
null
QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019). https://text-machine-lab.github.io/blog/2020/quail/ @annargrs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/747/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/747", "html_url": "https://github.com/huggingface/datasets/pull/747", "diff_url": "https://github.com/huggingface/datasets/pull/747.diff", "patch_url": "https://github.com/huggingface/datasets/pull/747.patch", "merged_at": 1603269315000 }
true
https://api.github.com/repos/huggingface/datasets/issues/746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/746/comments
https://api.github.com/repos/huggingface/datasets/issues/746/events
https://github.com/huggingface/datasets/pull/746
725,627,235
MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw
746
dataset(ngt): add ngt dataset initial loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,603,202,698,000
1,616,480,378,000
1,616,480,378,000
CONTRIBUTOR
null
Currently this only makes the paths to the annotation ELAN (eaf) files and the videos available. It is the first accessible way to download this dataset that does not require manual, file-by-file downloads. Only the necessary files are fetched: the annotation files are very small (20MB for all of them), but the video files are large, 100GB in total, saved in `mpg` format. I do not intend to actually store these as an uncompressed array of frames, because that would be huge. Future updates may add pose estimation files for all videos, making it easier to work with this data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/746/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/746", "html_url": "https://github.com/huggingface/datasets/pull/746", "diff_url": "https://github.com/huggingface/datasets/pull/746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/746.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/745/comments
https://api.github.com/repos/huggingface/datasets/issues/745/events
https://github.com/huggingface/datasets/pull/745
725,589,352
MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0
745
Fix emotion description
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? \r\nThank you in advance." ]
1,603,200,519,000
1,619,102,851,000
1,603,269,507,000
MEMBER
null
Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper. I also took the liberty to make use of `ClassLabel` for the emotion labels.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/745", "html_url": "https://github.com/huggingface/datasets/pull/745", "diff_url": "https://github.com/huggingface/datasets/pull/745.diff", "patch_url": "https://github.com/huggingface/datasets/pull/745.patch", "merged_at": 1603269507000 }
true
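Regarding the question in the comments above about getting string labels instead of integers: with the `ClassLabel` feature introduced in this PR, the mapping is available on the dataset's features. A short sketch (assuming the dataset is loaded under the name `emotion`):

```python
from datasets import load_dataset

dataset = load_dataset("emotion", split="test")

label_feature = dataset.features["label"]           # a ClassLabel
print(label_feature.names)                          # list of class names
print(label_feature.int2str(dataset[0]["label"]))   # integer id -> string name
```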
https://api.github.com/repos/huggingface/datasets/issues/744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/744/comments
https://api.github.com/repos/huggingface/datasets/issues/744/events
https://github.com/huggingface/datasets/issues/744
724,918,448
MDU6SXNzdWU3MjQ5MTg0NDg=
744
Dataset Explorer Doesn't Work for squad_es and squad_it
{ "login": "gaotongxiao", "id": 22607038, "node_id": "MDQ6VXNlcjIyNjA3MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaotongxiao", "html_url": "https://github.com/gaotongxiao", "followers_url": "https://api.github.com/users/gaotongxiao/followers", "following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}", "gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions", "organizations_url": "https://api.github.com/users/gaotongxiao/orgs", "repos_url": "https://api.github.com/users/gaotongxiao/repos", "events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}", "received_events_url": "https://api.github.com/users/gaotongxiao/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Oups wrong click.\r\nThis one is for you @srush" ]
1,603,136,052,000
1,603,730,177,000
1,603,730,177,000
NONE
null
https://huggingface.co/nlp/viewer/?dataset=squad_es https://huggingface.co/nlp/viewer/?dataset=squad_it Both pages show "OSError: [Errno 28] No space left on device".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/744/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/742/comments
https://api.github.com/repos/huggingface/datasets/issues/742/events
https://github.com/huggingface/datasets/pull/742
724,509,974
MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3
742
Add OCNLI, a new CLUE dataset
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks :) merging it" ]
1,603,105,593,000
1,603,383,589,000
1,603,383,588,000
MEMBER
null
OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for Chinese Natural Language Inference, collected following closely the procedures of MNLI, but with enhanced strategies aiming for more challenging inference pairs. We want to emphasize we did not use human/machine translation in creating the dataset, and thus our Chinese texts are original and not translated.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/742/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/742", "html_url": "https://github.com/huggingface/datasets/pull/742", "diff_url": "https://github.com/huggingface/datasets/pull/742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/742.patch", "merged_at": 1603383587000 }
true
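Assuming OCNLI is exposed as a configuration of the existing `clue` dataset (the PR title describes it as "a new CLUE dataset"; the exact config name is an assumption), loading it might look like:

```python
from datasets import load_dataset

# Hypothetical config name; check the clue dataset script for the exact spelling.
ocnli = load_dataset("clue", "ocnli")
print(ocnli["train"][0])
```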
https://api.github.com/repos/huggingface/datasets/issues/740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/740/comments
https://api.github.com/repos/huggingface/datasets/issues/740/events
https://github.com/huggingface/datasets/pull/740
723,047,958
MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0
740
Fix TREC urls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,839,488,000
1,603,097,677,000
1,603,097,676,000
MEMBER
null
The old TREC urls are now redirections. I updated the urls to the new ones, since we don't support redirections for downloads. Fix #737
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/740/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/740", "html_url": "https://github.com/huggingface/datasets/pull/740", "diff_url": "https://github.com/huggingface/datasets/pull/740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/740.patch", "merged_at": 1603097675000 }
true
https://api.github.com/repos/huggingface/datasets/issues/739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/739/comments
https://api.github.com/repos/huggingface/datasets/issues/739/events
https://github.com/huggingface/datasets/pull/739
723,044,066
MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3
739
Add wiki dpr multiset embeddings
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I still have to compute the dataset_infos, and build + host the indexes", "update: I'm computing the metadata, will update the PR soon", "Finally all green and ready to merge :)" ]
1,602,839,149,000
1,606,399,370,000
1,606,399,369,000
MEMBER
null
There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset. Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset. In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/739/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/739", "html_url": "https://github.com/huggingface/datasets/pull/739", "diff_url": "https://github.com/huggingface/datasets/pull/739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/739.patch", "merged_at": 1606399369000 }
true
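Following the description above, the embeddings set is chosen through the `embeddings_name` configuration parameter; a hedged sketch (the exact config handling and the additional index/embedding options are not taken from this PR, and the download is very large):

```python
from datasets import load_dataset

# Configuration keyword arguments are forwarded to the dataset builder.
wiki_multiset = load_dataset("wiki_dpr", embeddings_name="multiset", split="train")
```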
https://api.github.com/repos/huggingface/datasets/issues/738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/738/comments
https://api.github.com/repos/huggingface/datasets/issues/738/events
https://github.com/huggingface/datasets/pull/738
723,033,923
MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4
738
Replace seqeval code with original classification_report for simplicity
{ "login": "Hironsan", "id": 6737785, "node_id": "MDQ6VXNlcjY3Mzc3ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hironsan", "html_url": "https://github.com/Hironsan", "followers_url": "https://api.github.com/users/Hironsan/followers", "following_url": "https://api.github.com/users/Hironsan/following{/other_user}", "gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions", "organizations_url": "https://api.github.com/users/Hironsan/orgs", "repos_url": "https://api.github.com/users/Hironsan/repos", "events_url": "https://api.github.com/users/Hironsan/events{/privacy}", "received_events_url": "https://api.github.com/users/Hironsan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello,\r\n\r\nI ran https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|██████████| 407/407 [21:37<00:00, 3.44s/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"run_ner.py\", line 398, in main\r\n results = trainer.evaluate()\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1470, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1622, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"run_ner.py\", line 345, in compute_metrics\r\n results = metric.compute(predictions=true_predictions, references=true_labels)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/metric.py\", line 398, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py\", line 97, in _compute\r\n report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)\r\nTypeError: classification_report() got an unexpected keyword argument 'output_dict'\r\n```\r\n\r\nI'm still trying multiple things to see if I can work around this, but I thought it might be useful to mention it here.\r\n\r\n```\r\nName: transformers\r\nVersion: 4.3.0.dev0\r\n\r\nName: datasets\r\nVersion: 1.2.1\r\n```", "Hi, can you try to update your local installation of `seqeval` ?\r\n\r\n```\r\npip install --upgrade seqeval\r\n```", "@lhoestq thanks for the reply. Indeed it was some issue with my setup. I removed the \"transformers\" and \"datasets\" (that I had previously installed from the source code), cleared the cache and installed everything again. It works great now!" ]
1,602,838,305,000
1,611,245,235,000
1,603,103,472,000
CONTRIBUTOR
null
Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary. This PR replaces the current code with the original function(`classification_report`) to simplify it. Also, the original code has been updated to fix #352. - Related issue: https://github.com/chakki-works/seqeval/pull/38 ```python from datasets import load_metric metric = load_metric("seqeval") y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] metric.compute(predictions=y_pred, references=y_true) # Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/738", "html_url": "https://github.com/huggingface/datasets/pull/738", "diff_url": "https://github.com/huggingface/datasets/pull/738.diff", "patch_url": "https://github.com/huggingface/datasets/pull/738.patch", "merged_at": 1603103471000 }
true
https://api.github.com/repos/huggingface/datasets/issues/737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/737/comments
https://api.github.com/repos/huggingface/datasets/issues/737/events
https://github.com/huggingface/datasets/issues/737
722,463,923
MDU6SXNzdWU3MjI0NjM5MjM=
737
Trec Dataset Connection Error
{ "login": "aychang95", "id": 10554495, "node_id": "MDQ6VXNlcjEwNTU0NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aychang95", "html_url": "https://github.com/aychang95", "followers_url": "https://api.github.com/users/aychang95/followers", "following_url": "https://api.github.com/users/aychang95/following{/other_user}", "gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}", "starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aychang95/subscriptions", "organizations_url": "https://api.github.com/users/aychang95/orgs", "repos_url": "https://api.github.com/users/aychang95/repos", "events_url": "https://api.github.com/users/aychang95/events{/privacy}", "received_events_url": "https://api.github.com/users/aychang95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url" ]
1,602,777,473,000
1,603,097,676,000
1,603,097,676,000
NONE
null
**Datasets Version:** 1.1.2 **Python Version:** 3.6/3.7 **Code:** ```python from datasets import load_dataset load_dataset("trec") ``` **Expected behavior:** Download Trec dataset and load Dataset object **Current Behavior:** Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken) <details> <summary>Error Logs</summary> Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-8-66bf1242096e> in <module>() ----> 1 load_dataset("trec") 10 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label </details>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/737/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/736/comments
https://api.github.com/repos/huggingface/datasets/issues/736/events
https://github.com/huggingface/datasets/pull/736
722,348,191
MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy
736
Start community-provided dataset docs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "can you also reference the `--organization` flag like in https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.rst#upload-your-model-with-the-cli ?", "done!", "Not sure if the changes in `datasets/wmt_t2t/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, could you do it in a serapate PR ?", "I don't think I changed wmt_utils (I think github is wrong or my setup is poorly configured).\r\n\r\nLocally git diff master --name-only says one file. Master is up to date.\r\nTried to make a new PR #755 and the same thing happened.", "Trying new fork." ]
1,602,769,299,000
1,603,458,928,000
1,603,458,928,000
MEMBER
null
This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. + In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`. I think the first naming is clearer, but I didn't address that here. + I didn't add metadata, will try that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/736/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/736", "html_url": "https://github.com/huggingface/datasets/pull/736", "diff_url": "https://github.com/huggingface/datasets/pull/736.diff", "patch_url": "https://github.com/huggingface/datasets/pull/736.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/735/comments
https://api.github.com/repos/huggingface/datasets/issues/735/events
https://github.com/huggingface/datasets/issues/735
722,225,270
MDU6SXNzdWU3MjIyMjUyNzA=
735
Throw error when an unexpected key is used in data_files
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nWe'll add support for other keys" ]
1,602,759,327,000
1,604,064,232,000
1,604,064,232,000
CONTRIBUTOR
null
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored, leading to unexpected behaviour for users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f}) print(datasets.keys()) # dict_keys(['train']) ``` whereas using `validation` instead does return the expected result: ```python datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f}) print(datasets.keys()) # dict_keys(['train', 'validation']) ``` I would like to see more freedom in which keys one can use, but if that is not possible, at least an error should be thrown when using an unexpected key.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/735/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/734/comments
https://api.github.com/repos/huggingface/datasets/issues/734/events
https://github.com/huggingface/datasets/pull/734
721,767,848
MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz
734
Fix GLUE metric description
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,708,254,000
1,602,754,063,000
1,602,754,062,000
MEMBER
null
Small typo: the description says translation instead of prediction.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/734", "html_url": "https://github.com/huggingface/datasets/pull/734", "diff_url": "https://github.com/huggingface/datasets/pull/734.diff", "patch_url": "https://github.com/huggingface/datasets/pull/734.patch", "merged_at": 1602754062000 }
true
https://api.github.com/repos/huggingface/datasets/issues/733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/733/comments
https://api.github.com/repos/huggingface/datasets/issues/733/events
https://github.com/huggingface/datasets/pull/733
721,366,744
MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw
733
Update link to dataset viewer
{ "login": "negedng", "id": 12969168, "node_id": "MDQ6VXNlcjEyOTY5MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/negedng", "html_url": "https://github.com/negedng", "followers_url": "https://api.github.com/users/negedng/followers", "following_url": "https://api.github.com/users/negedng/following{/other_user}", "gists_url": "https://api.github.com/users/negedng/gists{/gist_id}", "starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/negedng/subscriptions", "organizations_url": "https://api.github.com/users/negedng/orgs", "repos_url": "https://api.github.com/users/negedng/repos", "events_url": "https://api.github.com/users/negedng/events{/privacy}", "received_events_url": "https://api.github.com/users/negedng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,674,003,000
1,602,684,451,000
1,602,684,451,000
CONTRIBUTOR
null
Change 404 error links in quick tour to working ones
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/733", "html_url": "https://github.com/huggingface/datasets/pull/733", "diff_url": "https://github.com/huggingface/datasets/pull/733.diff", "patch_url": "https://github.com/huggingface/datasets/pull/733.patch", "merged_at": 1602684451000 }
true
https://api.github.com/repos/huggingface/datasets/issues/732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/732/comments
https://api.github.com/repos/huggingface/datasets/issues/732/events
https://github.com/huggingface/datasets/pull/732
721,359,448
MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy
732
dataset(wlasl): initial loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Followup: \r\nFrom the info in https://github.com/huggingface/datasets/pull/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.", "When I run:\r\n```\r\npython datasets-cli dummy_data datasets/wlasl\r\n```\r\n\r\nI get:\r\n```\r\nChecking datasets/wlasl/wlasl.py for additional imports. \r\nFound main folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl \r\nFound specific version folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786 \r\nFound script file from datasets/wlasl/wlasl.py to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.py \r\nFound dataset infos file from datasets/wlasl/dataset_infos.json to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/dataset_infos.json \r\nFound metadata file for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.json \r\nUsing custom data configuration default \r\nLoading Dataset Infos from /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\r\nCreating dummy folder structure for datasets/wlasl/dummy/0.3.0... \r\nDataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. \r\nTraceback (most recent call last): \r\nFile \"datasets-cli\", line 36, in \r\nservice.run() File \"/home/nlp/amit/anaconda2/envs/meta-scholar/lib/python3.7/site-packages/datasets-1.1.2-py3.7.egg/datasets/commands/dummy_data.py\", line 73, in run \r\nfor split in generator_splits: \r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```" ]
1,602,673,302,000
1,616,480,383,000
1,616,480,383,000
CONTRIBUTOR
null
It takes about 9-10 hours to download all of the videos for the dataset, but it does finish :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/732/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/732", "html_url": "https://github.com/huggingface/datasets/pull/732", "diff_url": "https://github.com/huggingface/datasets/pull/732.diff", "patch_url": "https://github.com/huggingface/datasets/pull/732.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/731/comments
https://api.github.com/repos/huggingface/datasets/issues/731/events
https://github.com/huggingface/datasets/pull/731
721,142,985
MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4
731
dataset(aslg_pc12): initial loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n", "> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for example, the dataset fetches from two hardcoded URLs.\r\n> Do I just `head -n 10` both files and zip them?\r\n\r\nYes the idea is just to have a few examples to properly test the script and make sure it keeps working in the long run.\r\n\r\nAnd FYI there's a command to help you name the dummy data files correctly. More info in the documentation [here](https://huggingface.co/docs/datasets/share_dataset.html#adding-dummy-data)", "@lhoestq passes all tests" ]
1,602,652,477,000
1,603,898,826,000
1,603,898,826,000
CONTRIBUTOR
null
This contains the only currently public part of this corpus. The rest of the corpus has not yet been made public, but this sample is still being used by researchers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/731", "html_url": "https://github.com/huggingface/datasets/pull/731", "diff_url": "https://github.com/huggingface/datasets/pull/731.diff", "patch_url": "https://github.com/huggingface/datasets/pull/731.patch", "merged_at": 1603898826000 }
true
https://api.github.com/repos/huggingface/datasets/issues/730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/730/comments
https://api.github.com/repos/huggingface/datasets/issues/730/events
https://github.com/huggingface/datasets/issues/730
721,073,812
MDU6SXNzdWU3MjEwNzM4MTI=
730
Possible caching bug
{ "login": "ArneBinder", "id": 3375489, "node_id": "MDQ6VXNlcjMzNzU0ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArneBinder", "html_url": "https://github.com/ArneBinder", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "repos_url": "https://api.github.com/users/ArneBinder/repos", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)", "Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command \r\n`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\nchange the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html\r\n`dataset = datasets.load_dataset('json', data_files=args.dataset)`\r\n\r\nErrors:\r\n`Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...\r\n`" ]
1,602,640,954,000
1,638,109,737,000
1,603,964,161,000
NONE
null
The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produces this output: ``` Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} ``` Just changing the order (and deleting the temp files): ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) ``` produces this: ``` Using custom data configuration default Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': '🤗🤗🤗'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': '🤗🤗🤗'} ``` Is it intended that the cache path does not depend on the config entries? tested with datasets==1.1.2 and python==3.8.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/730/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/729/comments
https://api.github.com/repos/huggingface/datasets/issues/729/events
https://github.com/huggingface/datasets/issues/729
719,558,876
MDU6SXNzdWU3MTk1NTg4NzY=
729
Better error message when one forgets to call `add_batch` before `compute`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,525,562,000
1,603,984,704,000
1,603,984,704,000
MEMBER
null
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): pass # User forgets to call `add_batch` result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-267729d187fa> in <module> 3 pass 4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 5 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 343 elif self.process_id == 0: 344 # Let's acquire a lock on each node files to be sure they are finished writing --> 345 file_paths, filelocks = self._get_all_cache_files() 346 347 # Read the predictions and references ~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self) 280 filelocks = [] 281 for process_id, file_path in enumerate(file_paths): --> 282 filelock = FileLock(file_path + ".lock") 283 try: 284 filelock.acquire(timeout=self.timeout) TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/729/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/728/comments
https://api.github.com/repos/huggingface/datasets/issues/728/events
https://github.com/huggingface/datasets/issues/728
719,555,780
MDU6SXNzdWU3MTk1NTU3ODA=
728
Passing `cache_dir` to a metric does not work
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,525,314,000
1,603,964,082,000
1,603,964,082,000
MEMBER
null
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) ~/git/datasets/src/datasets/metric.py in _finalize(self) 349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features)) --> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths])) 351 except FileNotFoundError: ~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions) 227 # Prepend path to filename --> 228 pa_table = self._read_files(files) 229 files = copy.deepcopy(files) ~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files) 166 for f_dict in files: --> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict) 168 pa_tables.append(pa_table) ~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take) 291 ) --> 292 mmap = pa.memory_map(filename) 293 f = pa.ipc.open_stream(mmap) ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-17-e42d43cc981f> in <module> 2 for i in range(0, 1024, batch_size): 3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 4 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 351 except FileNotFoundError: 352 raise ValueError( --> 353 "Error in finalize: another metric instance is already using the local cache file. " 354 "Please specify an experiment_id to avoid colision between distributed metric instances." 355 ) ValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances. ``` The code works when we remove the `cache_dir=...` from the metric.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/728/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/725/comments
https://api.github.com/repos/huggingface/datasets/issues/725/events
https://github.com/huggingface/datasets/pull/725
718,985,641
MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1
725
pretty print dataset objects
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w/o?\r\n\r\ncurrently it's indent4 and w/ curly braces, so it looks:\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 157252\r\n })\r\n validation: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5599\r\n })\r\n test: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n })\r\n})\r\n```\r\njust child:\r\n```\r\nDataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n})\r\n```\r\n\r\n", "Yes! A lot better indeed!" ]
1,602,468,226,000
1,603,470,275,000
1,603,443,646,000
CONTRIBUTOR
null
Currently, if I do: ``` from datasets import load_dataset load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/") ``` I get: ``` DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5577)}) ``` This is not very readable. Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object? Here is my very simple attempt. With this PR, it produces: ``` DatasetDict({ train: Dataset({ features: ['text', 'headline', 'title'], num_rows: 157252 }) validation: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5599 }) test: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5577 }) }) ``` I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too. note that this PR also fixes the inconsistency in output that in master misses enclosing `{}` for Dataset, but it is there for `DatasetDict` - or perhaps it was by design. I'm totally not attached to this format, just wanting something more readable. One approach could be to serialize to `json.dumps` or something similar. It'd make the indentation simpler. Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/725", "html_url": "https://github.com/huggingface/datasets/pull/725", "diff_url": "https://github.com/huggingface/datasets/pull/725.diff", "patch_url": "https://github.com/huggingface/datasets/pull/725.patch", "merged_at": 1603443646000 }
true
https://api.github.com/repos/huggingface/datasets/issues/724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/724/comments
https://api.github.com/repos/huggingface/datasets/issues/724/events
https://github.com/huggingface/datasets/issues/724
718,947,700
MDU6SXNzdWU3MTg5NDc3MDA=
724
need to redirect /nlp to /datasets and remove outdated info
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Should be fixed now: \r\n\r\n![image](https://user-images.githubusercontent.com/35882/95917301-040b0600-0d78-11eb-9655-c4ac0e788089.png)\r\n\r\nNot sure I understand what you mean by the second part?\r\n", "Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n", "For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.", "I understand. I was just flagging the lack of markup issue." ]
1,602,457,932,000
1,602,694,812,000
1,602,694,812,000
CONTRIBUTOR
null
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow Also, for some reason the new information is slightly borked: the old page was nicely formatted and had the links marked up, whereas the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/724/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/723/comments
https://api.github.com/repos/huggingface/datasets/issues/723/events
https://github.com/huggingface/datasets/issues/723
718,926,723
MDU6SXNzdWU3MTg5MjY3MjM=
723
Adding pseudo-labels to datasets
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
null
[ "Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n", "They can be used as training data for a smaller model.", "Sounds just like a regular dataset to me then, no?", "A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default/standard configuration name (not the one with pseudo labels).", "Could also be a `user-namespace` dataset maybe?", "Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community", "![image](https://user-images.githubusercontent.com/6045025/96045248-b528a380-0e3f-11eb-9124-bd55afa031bb.png)\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?", "You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path/to/xsum\r\n```" ]
1,602,450,345,000
1,627,967,511,000
1,627,967,511,000
MEMBER
null
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution. I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution. I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py What do you think @lhoestq ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/723/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/720/comments
https://api.github.com/repos/huggingface/datasets/issues/720/events
https://github.com/huggingface/datasets/issues/720
716,581,266
MDU6SXNzdWU3MTY1ODEyNjY=
720
OSError: Cannot find data file when not using the dummy dataset in RAG
{ "login": "josemlopez", "id": 4112135, "node_id": "MDQ6VXNlcjQxMTIxMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josemlopez", "html_url": "https://github.com/josemlopez", "followers_url": "https://api.github.com/users/josemlopez/followers", "following_url": "https://api.github.com/users/josemlopez/following{/other_user}", "gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}", "starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions", "organizations_url": "https://api.github.com/users/josemlopez/orgs", "repos_url": "https://api.github.com/users/josemlopez/repos", "events_url": "https://api.github.com/users/josemlopez/events{/privacy}", "received_events_url": "https://api.github.com/users/josemlopez/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.", "An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ", "Closing this one. Feel free to re-open if you have other questions about this issue" ]
1,602,080,833,000
1,608,732,271,000
1,608,732,271,000
NONE
null
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour: ``` import os os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) ``` Plese note that I'm using the whole dataset: **use_dummy_dataset=False** After around 4 hours (downloading and some other things) this is returned: ``` Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) 
will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-10-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` Thanks
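The traceback above points at a truncated download of one of the wiki_dpr vector files rather than a bug in the loading code (the comments confirm a clean re-run succeeded). A minimal sketch of how one might force a fresh download with `datasets`, assuming the same `psgs_w100.nq.exact` configuration; the cache directory is the example path from the report, and `"force_redownload"` is the string form of the download mode enum:

```python
from datasets import load_dataset

# Re-download the index data from scratch so any truncated file in the cache is
# replaced. Note: the full wiki_dpr index is very large, so this takes a while.
ds = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",
    split="train",
    cache_dir="/workspace/notebooks/POCs/cache",  # example path from the report above
    download_mode="force_redownload",
)
```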
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/720/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/719/comments
https://api.github.com/repos/huggingface/datasets/issues/719/events
https://github.com/huggingface/datasets/pull/719
716,492,263
MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2
719
Fix train_test_split output format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,602,074,341,000
1,602,077,888,000
1,602,077,886,000
MEMBER
null
There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split. This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split). This should fix @timothyjlaurent 's issue in #620 and fix #676 I added tests for `transmit_format` so that it doesn't happen again
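A minimal sketch of the behaviour this fix restores, using a toy in-memory dataset; after `train_test_split`, each returned split should carry the format that was set on the original dataset:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})
ds.set_format(type="numpy", columns=["label"])

splits = ds.train_test_split(test_size=0.5, seed=42)

# With the transmit_format fix, both splits keep the "numpy" type and column selection:
print(splits["train"].format)
print(splits["test"].format)
```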
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/719/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/719", "html_url": "https://github.com/huggingface/datasets/pull/719", "diff_url": "https://github.com/huggingface/datasets/pull/719.diff", "patch_url": "https://github.com/huggingface/datasets/pull/719.patch", "merged_at": 1602077886000 }
true
https://api.github.com/repos/huggingface/datasets/issues/718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/718/comments
https://api.github.com/repos/huggingface/datasets/issues/718/events
https://github.com/huggingface/datasets/pull/718
715,694,709
MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw
718
Don't use tqdm 4.50.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,991,953,000
1,601,992,164,000
1,601,992,162,000
MEMBER
null
tqdm 4.50.0 introduced permission errors on windows see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details. For now I just added `<4.50.0` in the setup.py Hopefully we can find what's wrong with this version soon
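An illustrative sketch of the kind of pin described above; the exact requirement string in `setup.py` may differ:

```python
# setup.py (excerpt, illustrative only)
REQUIRED_PKGS = [
    # ...
    "tqdm>=4.27,<4.50.0",  # tqdm 4.50.0 raises PermissionError on Windows CI
    # ...
]
```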
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/718", "html_url": "https://github.com/huggingface/datasets/pull/718", "diff_url": "https://github.com/huggingface/datasets/pull/718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/718.patch", "merged_at": 1601992162000 }
true
https://api.github.com/repos/huggingface/datasets/issues/717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/717/comments
https://api.github.com/repos/huggingface/datasets/issues/717/events
https://github.com/huggingface/datasets/pull/717
714,959,268
MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2
717
Fixes #712 Error in the Overview.ipynb notebook
{ "login": "subhrm", "id": 850012, "node_id": "MDQ6VXNlcjg1MDAxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subhrm", "html_url": "https://github.com/subhrm", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "organizations_url": "https://api.github.com/users/subhrm/orgs", "repos_url": "https://api.github.com/users/subhrm/repos", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "received_events_url": "https://api.github.com/users/subhrm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,913,041,000
1,601,965,903,000
1,601,915,141,000
CONTRIBUTOR
null
Fixes #712 (error in the Overview.ipynb notebook) by adding the `with_details=True` parameter to the `list_datasets` call in Cell 3 of the **Overview** notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/717", "html_url": "https://github.com/huggingface/datasets/pull/717", "diff_url": "https://github.com/huggingface/datasets/pull/717.diff", "patch_url": "https://github.com/huggingface/datasets/pull/717.patch", "merged_at": 1601915140000 }
true
https://api.github.com/repos/huggingface/datasets/issues/716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/716/comments
https://api.github.com/repos/huggingface/datasets/issues/716/events
https://github.com/huggingface/datasets/pull/716
714,952,888
MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw
716
Fixes #712 Attribute error in cell 3 of the overview notebook
{ "login": "subhrm", "id": 850012, "node_id": "MDQ6VXNlcjg1MDAxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subhrm", "html_url": "https://github.com/subhrm", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "organizations_url": "https://api.github.com/users/subhrm/orgs", "repos_url": "https://api.github.com/users/subhrm/repos", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "received_events_url": "https://api.github.com/users/subhrm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Referencing the wrong issue # in the commit message. Closing this to fix it again." ]
1,601,912,529,000
1,601,912,798,000
1,601,912,792,000
CONTRIBUTOR
null
Fixes the AttributeError in cell 3 of the overview notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/716/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/716", "html_url": "https://github.com/huggingface/datasets/pull/716", "diff_url": "https://github.com/huggingface/datasets/pull/716.diff", "patch_url": "https://github.com/huggingface/datasets/pull/716.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/715/comments
https://api.github.com/repos/huggingface/datasets/issues/715/events
https://github.com/huggingface/datasets/pull/715
714,690,192
MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2
715
Use python read for text dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "One thing though, could we try to read the files in parallel?", "We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.", "Looks like windows is not a big fan of this approach\r\nI'm working on a fix", "I remember issue https://github.com/huggingface/datasets/issues/546 where this was kinda requested (but maybe IO would bottleneck). What do you think?", "I think it's worth testing multiprocessing. It could also be something we add to our speed benchmarks", "> I remember issue #546 where this was kinda requested (but maybe IO would bottleneck). What do you think?\r\n\r\nIt still would be interesting I think, especially in scenarios where IO is less of an issue (SSDs particularly) and where there are many smaller files. Wrapping this function in a `pool.map` is perhaps an easy thing to try. ", "Merging this one for now for the patch release" ]
1,601,891,275,000
1,601,903,598,000
1,601,903,597,000
MEMBER
null
As mentioned in #622, the pandas reader used for the text dataset doesn't work properly when there are \r characters in the text file. Instead, I switched to pure Python using `open` and `read`. From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.
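A minimal sketch of the pure-Python reading strategy described above (not the exact loader code); `splitlines()` handles `\n`, `\r\n`, and lone `\r` without tripping over carriage returns:

```python
def read_text_lines(path, encoding="utf-8"):
    # newline="" disables universal-newline translation, so lone "\r" characters
    # reach splitlines() untouched and are handled as line breaks there.
    with open(path, "r", encoding=encoding, newline="") as f:
        return f.read().splitlines()

# Example (hypothetical file name):
# lines = read_text_lines("my_corpus.txt")
```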
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/715/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/715", "html_url": "https://github.com/huggingface/datasets/pull/715", "diff_url": "https://github.com/huggingface/datasets/pull/715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/715.patch", "merged_at": 1601903596000 }
true
https://api.github.com/repos/huggingface/datasets/issues/714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/714/comments
https://api.github.com/repos/huggingface/datasets/issues/714/events
https://github.com/huggingface/datasets/pull/714
714,487,881
MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx
714
Add the official dependabot implementation
{ "login": "ALazyMeme", "id": 12804673, "node_id": "MDQ6VXNlcjEyODA0Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ALazyMeme", "html_url": "https://github.com/ALazyMeme", "followers_url": "https://api.github.com/users/ALazyMeme/followers", "following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}", "gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}", "starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions", "organizations_url": "https://api.github.com/users/ALazyMeme/orgs", "repos_url": "https://api.github.com/users/ALazyMeme/repos", "events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}", "received_events_url": "https://api.github.com/users/ALazyMeme/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,869,785,000
1,602,503,361,000
1,602,503,361,000
NONE
null
This will keep dependencies up to date. It will require a PR label named `dependencies` to be created in order to function correctly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/714/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/714", "html_url": "https://github.com/huggingface/datasets/pull/714", "diff_url": "https://github.com/huggingface/datasets/pull/714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/714.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/713/comments
https://api.github.com/repos/huggingface/datasets/issues/713/events
https://github.com/huggingface/datasets/pull/713
714,475,732
MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy
713
Fix reading text files with carriage return symbols
{ "login": "mozharovsky", "id": 6762769, "node_id": "MDQ6VXNlcjY3NjI3Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mozharovsky", "html_url": "https://github.com/mozharovsky", "followers_url": "https://api.github.com/users/mozharovsky/followers", "following_url": "https://api.github.com/users/mozharovsky/following{/other_user}", "gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}", "starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions", "organizations_url": "https://api.github.com/users/mozharovsky/orgs", "repos_url": "https://api.github.com/users/mozharovsky/repos", "events_url": "https://api.github.com/users/mozharovsky/events{/privacy}", "received_events_url": "https://api.github.com/users/mozharovsky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! 👍 " ]
1,601,867,223,000
1,602,223,105,000
1,601,905,769,000
NONE
null
The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` ___ I figured out the pandas uses those symbols as line terminators and this eventually causes the error. Explicitly specifying the `lineterminator` fixes that issue and everything works fine. Please, consider this PR as it seems to be a common issue to solve.
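A small, self-contained sketch of the workaround this PR proposes, assuming pandas' C parser is used; the sample string is hypothetical:

```python
import io
import pandas as pd

# A record that contains a lone "\r" inside the text rather than as a row break.
raw = "a sentence with an embedded\rcarriage return\nsecond record\n"

# Explicitly pinning lineterminator="\n" keeps the "\r" inside the field instead of
# letting the parser treat it as a line terminator (the behaviour reported above).
df = pd.read_csv(io.StringIO(raw), names=["text"], header=None, lineterminator="\n")
print(df["text"].tolist())
```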
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/713", "html_url": "https://github.com/huggingface/datasets/pull/713", "diff_url": "https://github.com/huggingface/datasets/pull/713.diff", "patch_url": "https://github.com/huggingface/datasets/pull/713.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/712/comments
https://api.github.com/repos/huggingface/datasets/issues/712/events
https://github.com/huggingface/datasets/issues/712
714,242,316
MDU6SXNzdWU3MTQyNDIzMTY=
712
Error in the notebooks/Overview.ipynb notebook
{ "login": "subhrm", "id": 850012, "node_id": "MDQ6VXNlcjg1MDAxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subhrm", "html_url": "https://github.com/subhrm", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "organizations_url": "https://api.github.com/users/subhrm/orgs", "repos_url": "https://api.github.com/users/subhrm/repos", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "received_events_url": "https://api.github.com/users/subhrm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```", "Thanks! This worked. I have created a PR to fix this in the notebook. " ]
1,601,791,111,000
1,601,915,140,000
1,601,915,140,000
CONTRIBUTOR
null
Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab. ```python # You can access various attributes of the datasets before downloading them squad_dataset = list_datasets()[datasets.index('squad')] pprint(squad_dataset.__dict__) # It's a simple python dataclass ``` Error message ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-8dc805c4949c> in <module>() 2 squad_dataset = list_datasets()[datasets.index('squad')] 3 ----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass AttributeError: 'str' object has no attribute '__dict__' ``` The object `squad_dataset` is a `str` not a `dataclass` .
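For reference, the fix suggested in the comments, written out as a runnable sketch (it assumes both calls return the datasets in the same order):

```python
from pprint import pprint
from datasets import list_datasets

datasets = list_datasets()                    # plain dataset names, e.g. 'squad'
detailed = list_datasets(with_details=True)   # dataclass-like objects with metadata

squad_dataset = detailed[datasets.index("squad")]
pprint(squad_dataset.__dict__)
```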
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/712/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/710/comments
https://api.github.com/repos/huggingface/datasets/issues/710/events
https://github.com/huggingface/datasets/pull/710
714,186,999
MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0
710
fix README typos/ consistency
{ "login": "discdiver", "id": 7703961, "node_id": "MDQ6VXNlcjc3MDM5NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/discdiver", "html_url": "https://github.com/discdiver", "followers_url": "https://api.github.com/users/discdiver/followers", "following_url": "https://api.github.com/users/discdiver/following{/other_user}", "gists_url": "https://api.github.com/users/discdiver/gists{/gist_id}", "starred_url": "https://api.github.com/users/discdiver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/discdiver/subscriptions", "organizations_url": "https://api.github.com/users/discdiver/orgs", "repos_url": "https://api.github.com/users/discdiver/repos", "events_url": "https://api.github.com/users/discdiver/events{/privacy}", "received_events_url": "https://api.github.com/users/discdiver/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,763,656,000
1,602,928,365,000
1,602,928,365,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/710", "html_url": "https://github.com/huggingface/datasets/pull/710", "diff_url": "https://github.com/huggingface/datasets/pull/710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/710.patch", "merged_at": 1602928365000 }
true
https://api.github.com/repos/huggingface/datasets/issues/708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/708/comments
https://api.github.com/repos/huggingface/datasets/issues/708/events
https://github.com/huggingface/datasets/issues/708
714,020,953
MDU6SXNzdWU3MTQwMjA5NTM=
708
Datasets performance slow? - 6.4x slower than in memory dataset
{ "login": "eugeneware", "id": 38154, "node_id": "MDQ6VXNlcjM4MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eugeneware", "html_url": "https://github.com/eugeneware", "followers_url": "https://api.github.com/users/eugeneware/followers", "following_url": "https://api.github.com/users/eugeneware/following{/other_user}", "gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}", "starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions", "organizations_url": "https://api.github.com/users/eugeneware/orgs", "repos_url": "https://api.github.com/users/eugeneware/repos", "events_url": "https://api.github.com/users/eugeneware/events{/privacy}", "received_events_url": "https://api.github.com/users/eugeneware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.", "And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?", "Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.", "We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?", "By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)", "Yes indeed we should add it!", "Great! Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.", "I am having the same issue here. 
When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it/s:\r\n`[ 13/28907 01:03 < 46:03:27, 0.17 it/s, Epoch 0.00/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5/62 00:09 < 03:03, 0.31 it/s, Epoch 0.06/1]`\r\n\r\nBut obviously I can not get BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.", "There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches", "My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks." ]
1,601,707,447,000
1,613,139,208,000
1,613,139,208,000
NONE
null
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower. For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 to just get process the data and get it on the GPU (no model involved). Whereas, the equivalent in-memory dataset would finish in just 0:33. Is this expected? Given that one of the goals of this project is also accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss. For reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU. I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower. What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance? At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice? In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test. ``` py import sys from datasets import load_dataset from transformers import DataCollatorWithPadding, BertTokenizerFast from torch.utils.data import DataLoader from tqdm import tqdm if __name__ == '__main__': tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') collate_fn = DataCollatorWithPadding(tokenizer, padding=True) ds = load_dataset('yelp_polarity') def do_tokenize(x): return tokenizer(x['text'], truncation=True) ds = ds.map(do_tokenize, batched=True) ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask']) if len(sys.argv) == 2 and sys.argv[1] == 'memory': # copy to memory - probably a faster way to do this - but demonstrates the point # approximately 530 batches per second - 17500 batches in 0:33 print('using memory') _ds = [data for data in tqdm(ds['train'])] else: # approximately 83 batches per second - 17500 batches in 3:31 print('using datasets') _ds = ds['train'] dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4) for data in tqdm(dl): for k, v in data.items(): data[k] = v.to('cuda') ``` For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d) Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints. Thanks for all your great work.
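A sketch of the main workaround discussed in the comments: keep the tokenized table in RAM via `map(..., keep_in_memory=True)` (at the time of writing, `load_dataset` itself had no `keep_in_memory` flag) and give the `DataLoader` several workers. Model name and batch size mirror the benchmark script above:

```python
from datasets import load_dataset
from transformers import BertTokenizerFast, DataCollatorWithPadding
from torch.utils.data import DataLoader

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)

ds = load_dataset("yelp_polarity", split="train")

# keep_in_memory=True keeps the processed table in RAM instead of a memory-mapped
# Arrow cache file, trading memory for per-batch read latency.
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True), batched=True, keep_in_memory=True)
ds.set_format("torch", columns=["input_ids", "token_type_ids", "attention_mask"])

dl = DataLoader(ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
```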
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/708/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/707/comments
https://api.github.com/repos/huggingface/datasets/issues/707/events
https://github.com/huggingface/datasets/issues/707
713,954,666
MDU6SXNzdWU3MTM5NTQ2NjY=
707
Requirements should specify pyarrow<1
{ "login": "mathcass", "id": 918541, "node_id": "MDQ6VXNlcjkxODU0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathcass", "html_url": "https://github.com/mathcass", "followers_url": "https://api.github.com/users/mathcass/followers", "following_url": "https://api.github.com/users/mathcass/following{/other_user}", "gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathcass/subscriptions", "organizations_url": "https://api.github.com/users/mathcass/orgs", "repos_url": "https://api.github.com/users/mathcass/repos", "events_url": "https://api.github.com/users/mathcass/events{/privacy}", "received_events_url": "https://api.github.com/users/mathcass/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello @mathcass I would want to work on this issue. May I do the same? ", "@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.", "Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.\r\n\r\n3. Then I Perplexity document link that you shared above. I created a colab link from there keep both tensorflow and pytorch means a mixed option and tried to run it in colab but I encountered no errors at a point where you mentioned. Can you help me to figure out the issue. \r\n\r\n4.Here is the link of the colab file with my saved responses. \r\nhttps://colab.research.google.com/drive/1hfYz8Ira39FnREbxgwa_goZWpOojp2NH?usp=sharing", "Also, please share some links which made you conclude that pyarrow < 1 would help. ", "Access granted for the colab link. ", "Thanks for looking at this @punitaojha and thanks for sharing the notebook. \r\n\r\nI just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. \r\n\r\nThanks again. ", "I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install \"pyarrow<1\" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).\r\n\r\nPlease see the Colab below:\r\n\r\nhttps://colab.research.google.com/drive/15QQS3xWjlKW2aK0J74eEcRFuhXUddUST\r\n\r\nThanks!" ]
1,601,681,979,000
1,607,070,159,000
1,601,844,628,000
NONE
null
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error: ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1, but there's no pin in the setup file. https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68 Downgrading by installing `pip install "pyarrow<1"` resolved the issue.
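A quick diagnostic along the lines of the report above; `PyExtensionType` is the attribute the error complains about, so checking for it tells you whether the installed pyarrow is recent enough:

```python
import pyarrow

print(pyarrow.__version__)
print(hasattr(pyarrow, "PyExtensionType"))  # False on very old pyarrow releases (e.g. 0.14.x)
```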
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/707/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/706/comments
https://api.github.com/repos/huggingface/datasets/issues/706/events
https://github.com/huggingface/datasets/pull/706
713,721,959
MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0
706
Fix config creation for data files with NamedSplit
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,653,609,000
1,601,885,700,000
1,601,885,699,000
MEMBER
null
During config creation, we need to iterate through the data files of all the splits to compute a hash. To make sure the hash is unique given a certain combination of files/splits, we sort the split names. However, `NamedSplit` objects can't be passed to `sorted`, and currently this raises an error: we need to sort their string names instead. Fix #705
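A minimal sketch of the problem and of the kind of fix described above (the PR may implement it slightly differently):

```python
from datasets import Split

data_files = {Split.TRAIN: "train.csv", Split.TEST: "test.csv"}

# sorted(data_files.keys()) raises:
#   TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
# Sorting on the string form of the split names avoids comparing NamedSplit objects:
for key in sorted(data_files.keys(), key=str):
    print(str(key), data_files[key])
```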
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/706/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/706", "html_url": "https://github.com/huggingface/datasets/pull/706", "diff_url": "https://github.com/huggingface/datasets/pull/706.diff", "patch_url": "https://github.com/huggingface/datasets/pull/706.patch", "merged_at": 1601885699000 }
true
https://api.github.com/repos/huggingface/datasets/issues/705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/705/comments
https://api.github.com/repos/huggingface/datasets/issues/705/events
https://github.com/huggingface/datasets/issues/705
713,709,100
MDU6SXNzdWU3MTM3MDkxMDA=
705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
{ "login": "pvcastro", "id": 12713359, "node_id": "MDQ6VXNlcjEyNzEzMzU5", "avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvcastro", "html_url": "https://github.com/pvcastro", "followers_url": "https://api.github.com/users/pvcastro/followers", "following_url": "https://api.github.com/users/pvcastro/following{/other_user}", "gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions", "organizations_url": "https://api.github.com/users/pvcastro/orgs", "repos_url": "https://api.github.com/users/pvcastro/repos", "events_url": "https://api.github.com/users/pvcastro/events{/privacy}", "received_events_url": "https://api.github.com/users/pvcastro/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR", "Thanks @lhoestq !" ]
1,601,652,475,000
1,601,885,699,000
1,601,885,699,000
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open the issue here since, according to [this comment](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885), the problem comes from datasets. Thanks!
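A possible workaround while the sorting of split keys is not handled on the `datasets` side: pass `data_files` with plain string keys instead of `NamedSplit` objects. This is a hedged sketch, not the eventual fix; the paths are the same placeholders as in the report above.
```
# Hedged workaround sketch: plain string keys are sortable, so
# sorted(data_files.keys()) in _create_builder_config no longer raises TypeError.
import datasets

data_files = {
    "train": "<DATASET_PATH>/train.csv",       # placeholder paths from the report above
    "validation": "<DATASET_PATH>/dev.csv",
    "test": "<DATASET_PATH>/test.csv",
}

ds = datasets.load_dataset("csv", data_files=data_files)
print(ds)
```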
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/704/comments
https://api.github.com/repos/huggingface/datasets/issues/704/events
https://github.com/huggingface/datasets/pull/704
713,572,556
MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0
704
Fix remote tests for new datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,640,484,000
1,601,640,722,000
1,601,640,721,000
MEMBER
null
When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet). To fix that, I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch.
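A minimal sketch of what checking availability through the API listing can look like from the client side; only `datasets.list_datasets()` is an existing call, the test wiring and the dataset name below are assumptions.
```
# Hedged sketch: resolve the remotely available dataset scripts through the API
# listing instead of the raw GitHub master file path.
import datasets

# Depending on the version, list_datasets() returns ids or info objects.
available = {getattr(d, "id", d) for d in datasets.list_datasets()}

new_dataset = "my_new_dataset"  # hypothetical dataset not merged to master yet
if new_dataset not in available:
    print(f"'{new_dataset}' is not listed remotely yet; skipping the remote test.")
```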
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/704/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/704", "html_url": "https://github.com/huggingface/datasets/pull/704", "diff_url": "https://github.com/huggingface/datasets/pull/704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/704.patch", "merged_at": 1601640721000 }
true
https://api.github.com/repos/huggingface/datasets/issues/703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/703/comments
https://api.github.com/repos/huggingface/datasets/issues/703/events
https://github.com/huggingface/datasets/pull/703
713,559,718
MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5
703
Add hotpot QA
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome :) \r\n\r\nDon't pay attention to the RemoteDatasetTest error, I'm fixing it right now", "You can rebase from master to fix the CI test :)", "If we're lucky we can even include this dataset in today's release", "Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?", "> Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?\r\n\r\nI think it's more a tag than a label. I guess a string is fine\r\n" ]
1,601,639,068,000
1,601,643,281,000
1,601,643,281,000
CONTRIBUTOR
null
Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
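A hedged usage sketch; it assumes the dataset is published under the id `hotpot_qa` with a `distractor` configuration, and that `type`/`level` are kept as plain strings as discussed in the comments above.
```
from datasets import load_dataset

# Load the distractor setting of HotpotQA (config and split names assumed;
# check the hub for the exact ids).
hotpot = load_dataset("hotpot_qa", "distractor", split="validation")

example = hotpot[0]
print(example["question"])
print(example["type"], example["level"])  # e.g. "comparison"/"bridge" and "easy"/"medium"/"hard"
```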
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/703/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/703", "html_url": "https://github.com/huggingface/datasets/pull/703", "diff_url": "https://github.com/huggingface/datasets/pull/703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/703.patch", "merged_at": 1601643280000 }
true
https://api.github.com/repos/huggingface/datasets/issues/702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/702/comments
https://api.github.com/repos/huggingface/datasets/issues/702/events
https://github.com/huggingface/datasets/pull/702
713,499,628
MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4
702
Complete rouge kwargs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,632,741,000
1,601,633,464,000
1,601,633,463,000
MEMBER
null
In #701 we noticed that some kwargs were missing for the rouge metric.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/702/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/702", "html_url": "https://github.com/huggingface/datasets/pull/702", "diff_url": "https://github.com/huggingface/datasets/pull/702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/702.patch", "merged_at": 1601633463000 }
true
https://api.github.com/repos/huggingface/datasets/issues/701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/701/comments
https://api.github.com/repos/huggingface/datasets/issues/701/events
https://github.com/huggingface/datasets/pull/701
713,485,757
MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1
701
Add rouge 2 and rouge Lsum to rouge metric outputs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oups too late, sorry" ]
1,601,631,346,000
1,601,632,514,000
1,601,632,338,000
MEMBER
null
Continuation of #700. Rouge 2 and Rouge Lsum were missing from Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences separated by `\n`. Fix #617
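A minimal sketch of why `rougeLsum` matters; the metric id and `compute` kwargs follow the `datasets` rouge metric, and the toy sentences are made up.
```
from datasets import load_metric

rouge = load_metric("rouge")

# rougeLsum treats "\n" as a sentence separator, which matches how summaries
# are usually scored sentence by sentence.
predictions = ["the cat sat on the mat .\nit was a sunny day ."]
references = ["the cat was sitting on the mat .\nthe day was sunny ."]

scores = rouge.compute(
    predictions=predictions,
    references=references,
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
print(scores["rougeLsum"].mid.fmeasure)
```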
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/701/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/701", "html_url": "https://github.com/huggingface/datasets/pull/701", "diff_url": "https://github.com/huggingface/datasets/pull/701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/701.patch", "merged_at": 1601632338000 }
true
https://api.github.com/repos/huggingface/datasets/issues/700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/700/comments
https://api.github.com/repos/huggingface/datasets/issues/700/events
https://github.com/huggingface/datasets/pull/700
713,450,295
MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz
700
Add rouge-2 in rouge_types for metric calculation
{ "login": "Shashi456", "id": 18056781, "node_id": "MDQ6VXNlcjE4MDU2Nzgx", "avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shashi456", "html_url": "https://github.com/Shashi456", "followers_url": "https://api.github.com/users/Shashi456/followers", "following_url": "https://api.github.com/users/Shashi456/following{/other_user}", "gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions", "organizations_url": "https://api.github.com/users/Shashi456/orgs", "repos_url": "https://api.github.com/users/Shashi456/repos", "events_url": "https://api.github.com/users/Shashi456/events{/privacy}", "received_events_url": "https://api.github.com/users/Shashi456/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ", "rougeLsum is also missing, could you add it ?", "Adding `RougeLSum` would fix https://github.com/huggingface/datasets/issues/617", "I am opening a PR with both of them right now actually :)", "Also the format of the output isn't exactly ideal, It's usually only the F-1 score that is cared about. \r\n\r\nFormatting the output to reflect how `ROUGE-1-5-5` (the perl version thats usually used and pyrouge is a wrapper over it), would be better.\r\n\r\n", "I'll close this since you seem to have already added it in another PR. Sorry for the delay in responding to you @lhoestq.", "What do you mean by \"Formatting the output to reflect how ROUGE-1-5-5\" @Shashi456 ?", "I like the idea of returning all the scores for two reason:\r\n- Rouge's aggregator does sampling and therefore it returns \"low\" \"mid\" and \"high\" scores\r\n- It is interesting to have the precision and recall to see how the F1 score was computed\r\nBut I understand your point that returning only the F1 score makes sense since it's the one that's always used ", "@thomwolf the scores now returned look like this:\r\n```\r\n{'rouge1': AggregateScore(low=Score(precision=0.16620308156871524, recall=0.18219819615984395, fmeasure=0.16226017699359463), mid=Score(precision=0.17274338501705871, recall=0.1890957812369246, fmeasure=0.16823877588620403), high=Score(precision=0.17934569582981455, recall=0.1965626706042028, fmeasure=0.17491509794856058)), \r\n'rouge2': AggregateScore(low=Score(precision=0.12478835737689957, recall=0.1362113231755514, fmeasure=0.12055941950062395), mid=Score(precision=0.1303967602691664, recall=0.1423747229852964, fmeasure=0.1258363976151122), high=Score(precision=0.13654527560789362, recall=0.1488071465116122, fmeasure=0.13184989406704056)), \r\n'rougeL': AggregateScore(low=Score(precision=0.16568068818352072, recall=0.1811919016674486, fmeasure=0.1614784523482225), mid=Score(precision=0.17156684723552357, recall=0.1879777628247058, fmeasure=0.16720699286250762), high=Score(precision=0.17788847350584547, recall=0.1948899838530898, fmeasure=0.17316501523379826))}\r\n```\r\n\r\nWhile when computed through the perl rouge script, it looks like:\r\n```\r\nROUGE-1 Average_R: 0.34775 (95%-conf.int. 0.34546 - 0.35025)\r\nROUGE-1 Average_P: 0.19381 (95%-conf.int. 0.19246 - 0.19538)\r\nROUGE-1 Average_F: 0.24070 (95%-conf.int. 0.23925 - 0.24230)\r\n---------------------------------------------\r\nROUGE-2 Average_R: 0.07160 (95%-conf.int. 0.07010 - 0.07298)\r\nROUGE-2 Average_F: 0.04845 (95%-conf.int. 0.04741 - 0.04942)\r\n---------------------------------------------\r\nROUGE-L Average_R: 0.26404 (95%-conf.int. 0.26215 - 0.26598)\r\nROUGE-L Average_P: 0.14696 (95%-conf.int. 0.14576 - 0.14815)\r\nROUGE-L Average_F: 0.18245 (95%-conf.int. 0.18120 - 0.18367)\r\n```\r\nwhile the wrapper returns the much more readable:\r\n```\r\n[2020-07-30 18:13:38,556 INFO] Rouges at step 13000 \r\n>> ROUGE-F(1/2/3/l): 43.43/20.42/39.78 \r\nROUGE-R(1/2/3/l): 53.91/25.34/49.32\r\n```\r\n\r\nThe formatting allows for easy reading, and although \"low\", \"mid\", \"high\" make sense, this is more concise and effective. 
\r\n\r\nOne way of changing this might be to return a dictionary that returns values like `rouge_1_precision`, `rouge_1_F1`, `rouge_1_recall`, and maybe also having the ability to get the values you are interested in and keeping `recall` and `F1` as default.", "cc: @lhoestq ", "Ok I see.\r\nI think it's also important to follow one of the existing output format (there are already too many different formats, let's try not to add another different one)\r\nI'd still stick with the current format and not transform the output of the python implementation of rouge since it's already widely used.\r\nWhat do you think ?", "Maybe we could convert the dataclasses in dictionnaries, would that help @Shashi456 ?", "@thomwolf yeah I think that would help. I initially didn't understand the high low mid categories. Dictionaries could help in this case I guess, and if we allow the user to choose what they want i.e F1 and precision or recall." ]
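The flattening discussed in the thread above (keeping only the aggregated mid scores, closer to the ROUGE-1.5.5 style summary) can be done on the caller side. A minimal sketch, assuming the nested `AggregateScore`/`Score` output shown in the comments; the helper name is hypothetical.
```
def flatten_rouge(results):
    """Reduce {rouge_type: AggregateScore} to a flat dict of mid recall/F1 percentages."""
    flat = {}
    for rouge_type, aggregate in results.items():
        flat[f"{rouge_type}_recall"] = round(aggregate.mid.recall * 100, 2)
        flat[f"{rouge_type}_fmeasure"] = round(aggregate.mid.fmeasure * 100, 2)
    return flat

# Usage (hypothetical): flatten_rouge(rouge.compute(predictions=preds, references=refs))
# -> {"rouge1_recall": ..., "rouge1_fmeasure": ..., "rougeL_recall": ..., ...}
```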
1,601,627,805,000
1,601,636,929,000
1,601,632,745,000
NONE
null
The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
    predictions: list of predictions to score. Each predictions
        should be a string with tokens separated by spaces.
    references: list of reference for each prediction. Each
        reference should be a string with tokens separated by spaces.
Returns:
    rouge1: rouge_1 f1,
    rouge2: rouge_2 f1,
    rougeL: rouge_l f1,
    rougeLsum: rouge_l precision
"""
```
but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`. This PR updates the default and adds `rouge2` to the list so that it reflects the description card.
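A small check of the mismatch described here; the metric id and `compute()` signature follow the `datasets` library, and the printed default is an assumption about the pre-fix behavior.
```
from datasets import load_metric

rouge = load_metric("rouge")
preds, refs = ["the cat sat on the mat"], ["the cat was on the mat"]

default_keys = sorted(rouge.compute(predictions=preds, references=refs).keys())
full_keys = sorted(
    rouge.compute(
        predictions=preds,
        references=refs,
        rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
    ).keys()
)
print(default_keys)  # e.g. ['rouge1', 'rougeL'] with the old default
print(full_keys)     # ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
```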
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/700/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/700", "html_url": "https://github.com/huggingface/datasets/pull/700", "diff_url": "https://github.com/huggingface/datasets/pull/700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/700.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/699/comments
https://api.github.com/repos/huggingface/datasets/issues/699/events
https://github.com/huggingface/datasets/issues/699
713,395,642
MDU6SXNzdWU3MTMzOTU2NDI=
699
XNLI dataset is not loading
{ "login": "imadarsh1001", "id": 14936525, "node_id": "MDQ6VXNlcjE0OTM2NTI1", "avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imadarsh1001", "html_url": "https://github.com/imadarsh1001", "followers_url": "https://api.github.com/users/imadarsh1001/followers", "following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}", "gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}", "starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions", "organizations_url": "https://api.github.com/users/imadarsh1001/orgs", "repos_url": "https://api.github.com/users/imadarsh1001/repos", "events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}", "received_events_url": "https://api.github.com/users/imadarsh1001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ./datasets/xnli/xnli.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n```\r\n\r\n", "Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. I'm doing a release today to fix that :)", "the issue is fixed with latest release \r\n\r\n" ]
1,601,621,596,000
1,601,747,152,000
1,601,747,017,000
NONE
null
`dataset = datasets.load_dataset(path='xnli')` shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     36     if len(bad_urls) > 0:
     37         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     39     logger.info("All the checksums matched successfully" + for_verification_name)
     40

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
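A hedged sketch of getting past the stale cache once a `datasets` release with the new URL is installed; the `download_mode` string value and the config-less call are assumptions that may need adjusting per version.
```
# First: pip install --upgrade datasets  (the comments above say the new release fixes the URL)
import datasets

xnli = datasets.load_dataset(
    "xnli",                              # newer versions may require a config, e.g. a language name
    download_mode="force_redownload",    # don't reuse the previously cached (mismatching) archive
)
print(xnli)
```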
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/699/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/697/comments
https://api.github.com/repos/huggingface/datasets/issues/697/events
https://github.com/huggingface/datasets/pull/697
712,979,029
MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5
697
Update README.md
{ "login": "bishug", "id": 71011306, "node_id": "MDQ6VXNlcjcxMDExMzA2", "avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bishug", "html_url": "https://github.com/bishug", "followers_url": "https://api.github.com/users/bishug/followers", "following_url": "https://api.github.com/users/bishug/following{/other_user}", "gists_url": "https://api.github.com/users/bishug/gists{/gist_id}", "starred_url": "https://api.github.com/users/bishug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bishug/subscriptions", "organizations_url": "https://api.github.com/users/bishug/orgs", "repos_url": "https://api.github.com/users/bishug/repos", "events_url": "https://api.github.com/users/bishug/events{/privacy}", "received_events_url": "https://api.github.com/users/bishug/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,568,162,000
1,601,568,720,000
1,601,568,720,000
NONE
null
Hey, I was just telling my subscribers to check out your repositories. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/697/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/697", "html_url": "https://github.com/huggingface/datasets/pull/697", "diff_url": "https://github.com/huggingface/datasets/pull/697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/697.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/696/comments
https://api.github.com/repos/huggingface/datasets/issues/696/events
https://github.com/huggingface/datasets/pull/696
712,942,977
MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy
696
Elasticsearch index docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,601,565,538,000
1,601,624,899,000
1,601,624,898,000
MEMBER
null
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock Elasticsearch. I think this is good for now, but at some point it would be cool to have an end-to-end test with a real ES instance running.
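A minimal sketch of the documented workflow; it assumes an Elasticsearch instance is reachable on `localhost:9200`, and the index name passed to `load_elasticsearch_index` is a placeholder.
```
from datasets import load_dataset

squad = load_dataset("squad", split="validation")

# Build a text index over the "context" column, backed by Elasticsearch.
squad.add_elasticsearch_index("context", host="localhost", port="9200")

# Query it.
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
print(examples["title"][:5])

# Later, re-attach an index that was already built instead of rebuilding it:
# squad.load_elasticsearch_index("context", es_index_name="<existing_index_name>",
#                                host="localhost", port="9200")
```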
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/696/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/696", "html_url": "https://github.com/huggingface/datasets/pull/696", "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "merged_at": 1601624898000 }
true