Schema of the issue/PR records below (column name, dtype, and the observed string-length range, value range, or number of distinct classes):

url                        string     lengths 58-61
repository_url             string     1 class
labels_url                 string     lengths 72-75
comments_url               string     lengths 67-70
events_url                 string     lengths 65-68
html_url                   string     lengths 46-51
id                         int64      599M-1.07B
node_id                    string     lengths 18-32
number                     int64      1-3.39k
title                      string     lengths 1-276
user                       dict
labels                     list
state                      string     1 class
locked                     bool       1 class
assignee                   dict
assignees                  list
milestone                  dict
comments                   sequence
created_at                 int64      1,587B-1,639B
updated_at                 int64      1,587B-1,639B
closed_at                  int64      1,587B-1,639B
author_association         string     3 classes
active_lock_reason         null
body                       string     lengths 0-228k
reactions                  dict
timeline_url               string     lengths 67-70
performed_via_github_app   null
draft                      bool       2 classes
pull_request               dict
is_pull_request            bool       2 classes
https://api.github.com/repos/huggingface/datasets/issues/1926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1926/comments
https://api.github.com/repos/huggingface/datasets/issues/1926/events
https://github.com/huggingface/datasets/pull/1926
813,607,994
MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy
1,926
Fix: Wiki_dpr - add missing scalar quantizer
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,007,925,000
1,614,008,994,000
1,614,008,993,000
MEMBER
null
All the prebuilt wiki_dpr indexes already use SQ8, I forgot to update the wiki_dpr script after building them. Now it's finally done. The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG. The quantizer reduces the size of the index a lot but increases index building time.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1926/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1926", "html_url": "https://github.com/huggingface/datasets/pull/1926", "diff_url": "https://github.com/huggingface/datasets/pull/1926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1926.patch", "merged_at": 1614008993000 }
true
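The PR description above (1926) says the prebuilt wiki_dpr indexes use a FAISS SQ8 scalar quantizer, which shrinks the index without hurting retrieval but makes building slower because the index has to be trained. The sketch below only illustrates that idea with a generic `IVF256,SQ8` factory string and random vectors; the exact index string and training setup used for wiki_dpr may differ.

```python
import numpy as np
import faiss

d = 768  # DPR passage embeddings are 768-dimensional
# Illustrative factory string, not necessarily the one the wiki_dpr script builds:
# IVF coarse quantization with 8-bit scalar-quantized (SQ8) codes.
index = faiss.index_factory(d, "IVF256,SQ8", faiss.METRIC_INNER_PRODUCT)

xb = np.random.rand(10_000, d).astype("float32")  # stand-in for real passage embeddings
index.train(xb)   # unlike a flat index, an SQ8/IVF index must be trained before adding vectors
index.add(xb)
scores, ids = index.search(xb[:3], 5)
print(ids.shape)  # (3, 5)
```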
https://api.github.com/repos/huggingface/datasets/issues/1925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1925/comments
https://api.github.com/repos/huggingface/datasets/issues/1925/events
https://github.com/huggingface/datasets/pull/1925
813,600,902
MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3
1,925
Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index"
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transformers 4.3.2 with datasets installed from source from `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0/10 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 769, in _build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File 
\"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe error message is hinting that it could be related to this, but I might be wrong. Any ideas?\r\n\r\n\r\nEdit: Can confirm it's working fine with datasets==1.2.0\r\n\r\nDouble Edit: Did some further digging. The issue is related to this commit: 8c5220307c33f00e01c3bf7b8. I opened a separate issue #1941 for proper tracking." ]
1,614,007,426,000
1,614,216,828,000
1,614,008,168,000
MEMBER
null
Fix the bugs noticed in #1915 There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`). Another issue was that setting `index_name="no_index"` didn't set `with_index` to False. I fixed both of them and added dummy data for those configurations for testing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1925/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1925", "html_url": "https://github.com/huggingface/datasets/pull/1925", "diff_url": "https://github.com/huggingface/datasets/pull/1925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1925.patch", "merged_at": 1614008167000 }
true
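A minimal usage sketch of the configuration this PR (1925) fixes, mirroring the call reported in issue #1915; it assumes the wiki_dpr builder keyword arguments shown there (`with_embeddings`, `with_index`, `embeddings_name`, `index_name`). Even without embeddings, the passages download is still large.

```python
from datasets import load_dataset

# After this fix the builder uses a distinct config name for this combination
# and no longer expects the embeddings files to be downloaded.
ds = load_dataset(
    "wiki_dpr",
    with_embeddings=False,
    with_index=False,
    embeddings_name="multiset",
    index_name="no_index",
)
```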
https://api.github.com/repos/huggingface/datasets/issues/1923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1923/comments
https://api.github.com/repos/huggingface/datasets/issues/1923/events
https://github.com/huggingface/datasets/pull/1923
813,363,472
MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0
1,923
Fix save_to_disk with relative path
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,989,639,000
1,613,992,964,000
1,613,992,963,000
MEMBER
null
As noticed in #1919 and #1920 the target directory was not created using `makedirs` so saving to it raises `FileNotFoundError`. For absolute paths it works but not for the good reason. This is because the target path was the same as the temporary path where in-memory data are written as an intermediary step. I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems. I also fixed the issue with the target path being the temporary path. I added a test case for relative paths as well for save_to_disk. Thanks to @M-Salti for reporting and investigating
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1923/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1923", "html_url": "https://github.com/huggingface/datasets/pull/1923", "diff_url": "https://github.com/huggingface/datasets/pull/1923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1923.patch", "merged_at": 1613992963000 }
true
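A minimal round-trip sketch of the behavior this PR (1923) fixes, saving to a relative path and loading it back; the toy dataset is only a placeholder for the squad example from the related issue.

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds.save_to_disk("my_dataset")       # relative path: the target directory is now created via fs.makedirs
reloaded = load_from_disk("my_dataset")
print(reloaded.num_rows)            # 3
```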
https://api.github.com/repos/huggingface/datasets/issues/1921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1921/comments
https://api.github.com/repos/huggingface/datasets/issues/1921/events
https://github.com/huggingface/datasets/pull/1921
812,716,042
MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4
1,921
Standardizing datasets dtypes
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here." ]
1,613,858,641,000
1,613,987,050,000
1,613,987,050,000
CONTRIBUTOR
null
This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets. This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here, with `float32` and `float64` acting as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype and `float64` being the pyarrow type factory function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1921/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1921", "html_url": "https://github.com/huggingface/datasets/pull/1921", "diff_url": "https://github.com/huggingface/datasets/pull/1921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1921.patch", "merged_at": 1613987050000 }
true
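A short sketch of the dtype convention described in the PR body above: `Value` dtypes come from an explicit list of supported names, with `float32`/`float64` as the canonical datasets spellings even though pyarrow prints the 64-bit type as `double`.

```python
from datasets import Dataset, Features, Value

features = Features({"score": Value("float64"), "count": Value("int32")})
ds = Dataset.from_dict({"score": [0.1, 0.2], "count": [1, 2]}, features=features)
print(ds.features["score"])  # Value(dtype='float64', ...) although pyarrow calls the type "double"
```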
https://api.github.com/repos/huggingface/datasets/issues/1920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1920/comments
https://api.github.com/repos/huggingface/datasets/issues/1920/events
https://github.com/huggingface/datasets/pull/1920
812,628,220
MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2
1,920
Fix save_to_disk issue
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\"./squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/src/datasets/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? ", "CLosing in favor of #1923" ]
1,613,830,959,000
1,613,989,811,000
1,613,989,811,000
CONTRIBUTOR
null
Fixes #1919
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1920/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1920", "html_url": "https://github.com/huggingface/datasets/pull/1920", "diff_url": "https://github.com/huggingface/datasets/pull/1920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1920.patch", "merged_at": null }
true
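The first comment on this PR (1920) explains why the existing test passed: `Path.joinpath` keeps an absolute argument as-is, so saving into another temporary directory worked while a relative path ended up nested under the temporary directory that `save_to_disk` itself creates. A small sketch of that pathlib behavior, with made-up directory names:

```python
from pathlib import Path

tempdir = Path("/tmp/save_to_disk_tmp")            # hypothetical tempdir created inside save_to_disk
print(tempdir.joinpath("/tmp/requested_tempdir"))  # /tmp/requested_tempdir -> absolute path is kept as-is
print(tempdir.joinpath("squad"))                   # /tmp/save_to_disk_tmp/squad -> relative path nests under tempdir
```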
https://api.github.com/repos/huggingface/datasets/issues/1919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1919/comments
https://api.github.com/repos/huggingface/datasets/issues/1919/events
https://github.com/huggingface/datasets/issues/1919
812,626,872
MDU6SXNzdWU4MTI2MjY4NzI=
1,919
Failure to save with save_to_disk
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !", "Closing since this has been fixed by #1923" ]
1,613,830,690,000
1,614,793,227,000
1,614,793,227,000
CONTRIBUTOR
null
When I try to save a dataset locally using the `save_to_disk` method I get the error: ```bash FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow' ``` To replicate: 1. Install `datasets` from master 2. Run this code: ```python from datasets import load_dataset squad = load_dataset("squad") # or any other dataset squad.save_to_disk("squad") # error here ``` The problem is that the method is not creating a directory with the name `dataset_path` for saving the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directory the problem resolves. I'll open a PR soon doing that and linking this issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1919/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1918/comments
https://api.github.com/repos/huggingface/datasets/issues/1918/events
https://github.com/huggingface/datasets/pull/1918
812,541,510
MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0
1,918
Fix QA4MRE download URLs
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,806,337,000
1,614,000,906,000
1,614,000,906,000
CONTRIBUTOR
null
The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1918/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1918", "html_url": "https://github.com/huggingface/datasets/pull/1918", "diff_url": "https://github.com/huggingface/datasets/pull/1918.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1918.patch", "merged_at": 1614000906000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1917/comments
https://api.github.com/repos/huggingface/datasets/issues/1917/events
https://github.com/huggingface/datasets/issues/1917
812,390,178
MDU6SXNzdWU4MTIzOTAxNzg=
1,917
UnicodeDecodeError: windows 10 machine
{ "login": "yosiasz", "id": 900951, "node_id": "MDQ6VXNlcjkwMDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yosiasz", "html_url": "https://github.com/yosiasz", "followers_url": "https://api.github.com/users/yosiasz/followers", "following_url": "https://api.github.com/users/yosiasz/following{/other_user}", "gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}", "starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions", "organizations_url": "https://api.github.com/users/yosiasz/orgs", "repos_url": "https://api.github.com/users/yosiasz/repos", "events_url": "https://api.github.com/users/yosiasz/events{/privacy}", "received_events_url": "https://api.github.com/users/yosiasz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "upgraded to php 3.9.2 and it works!" ]
1,613,772,785,000
1,613,774,471,000
1,613,774,428,000
NONE
null
Windows 10 Php 3.6.8 when running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error ``` file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined> ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1917/timeline
null
null
null
false
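A minimal reproduction of the failure mode in the traceback above (issue 1917), not the oscar loading code itself: byte 0x9d has no mapping in Windows cp1252, so decoding UTF-8 data with that default Windows codec fails with the same 'charmap' error. Reading files with an explicit `encoding="utf-8"` avoids it.

```python
raw = "❝".encode("utf-8")   # b'\xe2\x9d\x9d' -- contains byte 0x9d
print(raw.decode("utf-8"))  # fine
try:
    raw.decode("cp1252")    # default text codec on many Windows setups
except UnicodeDecodeError as err:
    print(err)              # 'charmap' codec can't decode byte 0x9d ... character maps to <undefined>
```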
https://api.github.com/repos/huggingface/datasets/issues/1916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1916/comments
https://api.github.com/repos/huggingface/datasets/issues/1916/events
https://github.com/huggingface/datasets/pull/1916
812,291,984
MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5
1,916
Remove unused py_utils objects
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?", "Sorry @lhoestq, I forgot to update the imports... :/", "It's fine, the CI should have caught this tbh. Not sure why it did't fail" ]
1,613,764,285,000
1,614,005,816,000
1,614,000,769,000
MEMBER
null
Remove unused/unnecessary py_utils functions/classes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1916/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1916", "html_url": "https://github.com/huggingface/datasets/pull/1916", "diff_url": "https://github.com/huggingface/datasets/pull/1916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1916.patch", "merged_at": 1614000769000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1915/comments
https://api.github.com/repos/huggingface/datasets/issues/1915/events
https://github.com/huggingface/datasets/issues/1915
812,229,654
MDU6SXNzdWU4MTIyMjk2NTQ=
1,915
Unable to download `wiki_dpr`
{ "login": "nitarakad", "id": 18504534, "node_id": "MDQ6VXNlcjE4NTA0NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nitarakad", "html_url": "https://github.com/nitarakad", "followers_url": "https://api.github.com/users/nitarakad/followers", "following_url": "https://api.github.com/users/nitarakad/following{/other_user}", "gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}", "starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions", "organizations_url": "https://api.github.com/users/nitarakad/orgs", "repos_url": "https://api.github.com/users/nitarakad/repos", "events_url": "https://api.github.com/users/nitarakad/events{/privacy}", "received_events_url": "https://api.github.com/users/nitarakad/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix", "I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !", "Closing since this has been fixed by #1925" ]
1,613,758,292,000
1,614,793,248,000
1,614,793,248,000
NONE
null
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran: `curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")` However, I got the following error: `datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}` I tried adding in flags `with_embeddings=False` and `with_index=False`: `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")` But I got the following error: `raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}` Is there anything else I need to set to download the dataset? **UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1915/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1914/comments
https://api.github.com/repos/huggingface/datasets/issues/1914/events
https://github.com/huggingface/datasets/pull/1914
812,149,201
MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz
1,914
Fix logging imports and make all datasets use library logger
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,751,154,000
1,613,936,883,000
1,613,936,883,000
MEMBER
null
Fix library relative logging imports and make all datasets use library logger.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1914/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1914", "html_url": "https://github.com/huggingface/datasets/pull/1914", "diff_url": "https://github.com/huggingface/datasets/pull/1914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1914.patch", "merged_at": 1613936883000 }
true
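A short sketch of what "use the library logger" means in practice for a dataset script, assuming the `datasets.utils.logging` helpers (`get_logger`, `set_verbosity_info`) that mirror the other Hugging Face libraries:

```python
from datasets.utils import logging

logging.set_verbosity_info()           # verbosity is controlled centrally by the library
logger = logging.get_logger(__name__)  # instead of the stdlib root logger or print()
logger.info("loading split files")
```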
https://api.github.com/repos/huggingface/datasets/issues/1913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1913/comments
https://api.github.com/repos/huggingface/datasets/issues/1913/events
https://github.com/huggingface/datasets/pull/1913
812,127,307
MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw
1,913
Add keep_linebreaks parameter to text loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?", "Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the documentation to explain this", "Perfect!" ]
1,613,749,425,000
1,613,759,772,000
1,613,759,771,000
MEMBER
null
As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1913/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1913", "html_url": "https://github.com/huggingface/datasets/pull/1913", "diff_url": "https://github.com/huggingface/datasets/pull/1913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1913.patch", "merged_at": 1613759771000 }
true
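The comments on this PR (1913) already show the call; the sketch below only illustrates the effect on the loaded rows. The file path is a placeholder, not something from the PR.

```python
from datasets import load_dataset

data_files = {"train": "lines.txt"}   # hypothetical local text file
plain = load_dataset("text", data_files=data_files)["train"]
kept = load_dataset("text", data_files=data_files, keep_linebreaks=True)["train"]

print(repr(plain[0]["text"]))  # 'first line'    -- trailing newline stripped (default)
print(repr(kept[0]["text"]))   # 'first line\n'  -- newline preserved
```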
https://api.github.com/repos/huggingface/datasets/issues/1912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1912/comments
https://api.github.com/repos/huggingface/datasets/issues/1912/events
https://github.com/huggingface/datasets/pull/1912
812,034,140
MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx
1,912
Update: WMT - use mirror links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So much better - thank you for doing that, @lhoestq!", "Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893", "Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well." ]
1,613,742,154,000
1,614,174,293,000
1,614,174,293,000
MEMBER
null
As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1912", "html_url": "https://github.com/huggingface/datasets/pull/1912", "diff_url": "https://github.com/huggingface/datasets/pull/1912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1912.patch", "merged_at": 1614174293000 }
true
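A usage sketch for one of the configs mentioned in the comments above (wmt19 ru-en); the API does not change, the downloads simply come from the mirrored files now. Note that preparing a WMT config still downloads the full training data for that language pair.

```python
from datasets import load_dataset

wmt = load_dataset("wmt19", "ru-en", split="validation")
print(wmt[0]["translation"])   # {'en': '...', 'ru': '...'}
```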
https://api.github.com/repos/huggingface/datasets/issues/1910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1910/comments
https://api.github.com/repos/huggingface/datasets/issues/1910/events
https://github.com/huggingface/datasets/pull/1910
811,697,108
MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3
1,910
Adding CoNLLpp dataset.
{ "login": "ZihanWangKi", "id": 21319243, "node_id": "MDQ6VXNlcjIxMzE5MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZihanWangKi", "html_url": "https://github.com/ZihanWangKi", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch." ]
1,613,711,550,000
1,614,895,367,000
1,614,895,367,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1910/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1910", "html_url": "https://github.com/huggingface/datasets/pull/1910", "diff_url": "https://github.com/huggingface/datasets/pull/1910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1910.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
https://api.github.com/repos/huggingface/datasets/issues/1907/events
https://github.com/huggingface/datasets/issues/1907
811,520,569
MDU6SXNzdWU4MTE1MjA1Njk=
1,907
DBPedia14 Dataset Checksum bug?
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.", "Thanks @lhoestq! Yes, it seems back to normal after a couple of days." ]
1,613,687,148,000
1,614,036,125,000
1,614,036,124,000
CONTRIBUTOR
null
Hi there!!! I've been using successfully the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, in <module> main() File "./conditional_classification/basic_pipeline.py", line 128, in main corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class, File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data datasets = load_dataset(self.name, split=dataset_split) File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset builder_instance.download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare self._download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare verify_checksums( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] ``` I've seen this has happened before in other datasets as reported in #537. I've tried clearing my cache and call again `load_dataset` but still is not working. My same codebase is successfully downloading and using other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums? Or this is related to any other stuff? I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead maybe of `dbpedia_14`. Was this maybe a bug introduced recently? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1905/comments
https://api.github.com/repos/huggingface/datasets/issues/1905/events
https://github.com/huggingface/datasets/pull/1905
811,384,174
MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1
1,905
Standardizing datasets.dtypes
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly." ]
1,613,675,731,000
1,613,858,490,000
1,613,858,490,000
CONTRIBUTOR
null
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1905/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1905", "html_url": "https://github.com/huggingface/datasets/pull/1905", "diff_url": "https://github.com/huggingface/datasets/pull/1905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1905.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1904/comments
https://api.github.com/repos/huggingface/datasets/issues/1904/events
https://github.com/huggingface/datasets/pull/1904
811,260,904
MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0
1,904
Fix to_pandas for boolean ArrayXD
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks!" ]
1,613,665,846,000
1,613,668,203,000
1,613,668,201,000
MEMBER
null
As noticed in #1887 the conversion of a dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`. zero copy is available for all primitive types except booleans see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1904/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1904", "html_url": "https://github.com/huggingface/datasets/pull/1904", "diff_url": "https://github.com/huggingface/datasets/pull/1904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1904.patch", "merged_at": 1613668200000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1903/comments
https://api.github.com/repos/huggingface/datasets/issues/1903/events
https://github.com/huggingface/datasets/pull/1903
811,145,531
MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2
1,903
Initial commit for the addition of TIMIT dataset
{ "login": "vrindaprabhu", "id": 16264631, "node_id": "MDQ6VXNlcjE2MjY0NjMx", "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrindaprabhu", "html_url": "https://github.com/vrindaprabhu", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@patrickvonplaten could you please review and help me close this PR?", "@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my side!:' Will be more careful from next time! :)\r\n\r\n\r\n" ]
1,613,658,192,000
1,614,591,552,000
1,614,591,552,000
CONTRIBUTOR
null
Below points needs to be addressed: - Creation of dummy dataset is failing - Need to check on the data representation - License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania Also the links (_except the download_) point to the ami corpus! ;-) @patrickvonplaten Requesting your comments, will be happy to address them!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1903", "html_url": "https://github.com/huggingface/datasets/pull/1903", "diff_url": "https://github.com/huggingface/datasets/pull/1903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1903.patch", "merged_at": 1614591552000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1902/comments
https://api.github.com/repos/huggingface/datasets/issues/1902/events
https://github.com/huggingface/datasets/pull/1902
810,931,171
MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1
1,902
Fix setimes_2 wmt urls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,641,346,000
1,613,642,141,000
1,613,642,141,000
MEMBER
null
Continuation of #1901 Some other urls were missing https
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1902", "html_url": "https://github.com/huggingface/datasets/pull/1902", "diff_url": "https://github.com/huggingface/datasets/pull/1902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1902.patch", "merged_at": 1613642141000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1901/comments
https://api.github.com/repos/huggingface/datasets/issues/1901/events
https://github.com/huggingface/datasets/pull/1901
810,845,605
MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy
1,901
Fix OPUS dataset download errors
{ "login": "YangWang92", "id": 3883941, "node_id": "MDQ6VXNlcjM4ODM5NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YangWang92", "html_url": "https://github.com/YangWang92", "followers_url": "https://api.github.com/users/YangWang92/followers", "following_url": "https://api.github.com/users/YangWang92/following{/other_user}", "gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}", "starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions", "organizations_url": "https://api.github.com/users/YangWang92/orgs", "repos_url": "https://api.github.com/users/YangWang92/repos", "events_url": "https://api.github.com/users/YangWang92/events{/privacy}", "received_events_url": "https://api.github.com/users/YangWang92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,633,981,000
1,613,660,840,000
1,613,641,161,000
CONTRIBUTOR
null
Replace http to https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1901/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1901", "html_url": "https://github.com/huggingface/datasets/pull/1901", "diff_url": "https://github.com/huggingface/datasets/pull/1901.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1901.patch", "merged_at": 1613641161000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1900/comments
https://api.github.com/repos/huggingface/datasets/issues/1900/events
https://github.com/huggingface/datasets/pull/1900
810,512,488
MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3
1,900
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!" ]
1,613,593,564,000
1,613,759,231,000
1,613,759,231,000
CONTRIBUTOR
null
Should resolve https://github.com/huggingface/datasets/issues/1895 The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType. While adding unit-testing, I noticed that support for the double/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant: ``` def __post_init__(self): if self.dtype == "double": # fix inferred type self.dtype = "float64" if self.dtype == "float": # fix inferred type self.dtype = "float32" ``` However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that. The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1900", "html_url": "https://github.com/huggingface/datasets/pull/1900", "diff_url": "https://github.com/huggingface/datasets/pull/1900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1900.patch", "merged_at": 1613759231000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
https://api.github.com/repos/huggingface/datasets/issues/1899/events
https://github.com/huggingface/datasets/pull/1899
810,308,332
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
1,899
Fix: ALT - fix duplicated examples in alt-parallel
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,577,236,000
1,613,582,449,000
1,613,582,449,000
MEMBER
null
As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field. This was due to a bad copy of a python dict. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1899", "html_url": "https://github.com/huggingface/datasets/pull/1899", "diff_url": "https://github.com/huggingface/datasets/pull/1899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1899.patch", "merged_at": 1613582449000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
https://api.github.com/repos/huggingface/datasets/issues/1898/events
https://github.com/huggingface/datasets/issues/1898
810,157,251
MDU6SXNzdWU4MTAxNTcyNTE=
1,898
ALT dataset has repeating instances in all splits
{ "login": "10-zin", "id": 33179372, "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/10-zin", "html_url": "https://github.com/10-zin", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "organizations_url": "https://api.github.com/users/10-zin/orgs", "repos_url": "https://api.github.com/users/10-zin/repos", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "received_events_url": "https://api.github.com/users/10-zin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting. This looks like a very bad issue. I'm looking into it", "I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch", "Thanks!!! works perfectly in the bleading edge master version", "Closed by #1899" ]
1,613,566,302,000
1,613,715,526,000
1,613,715,526,000
NONE
null
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ Seemed like a great dataset for some experiments I wanted to carry out, especially since its medium-sized, and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from `explore-datset` feature, for quick reference. ![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
https://api.github.com/repos/huggingface/datasets/issues/1897/events
https://github.com/huggingface/datasets/pull/1897
810,113,263
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
1,897
Fix PandasArrayExtensionArray conversion to native type
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,562,504,000
1,613,567,716,000
1,613,567,715,000
MEMBER
null
To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However previously pandas.core.internals.ExtensionBlock.to_native_types would fail with an PandasExtensionArray because 1. the PandasExtensionArray.isna method was wrong 2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array while pandas excepts a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)) I fixed these two issues and now the conversion to native types works, and so is the export to csv. cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1897", "html_url": "https://github.com/huggingface/datasets/pull/1897", "diff_url": "https://github.com/huggingface/datasets/pull/1897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1897.patch", "merged_at": 1613567715000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1895/comments
https://api.github.com/repos/huggingface/datasets/issues/1895/events
https://github.com/huggingface/datasets/issues/1895
809,630,271
MDU6SXNzdWU4MDk2MzAyNzE=
1,895
Bug Report: timestamp[ns] not recognized
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n", "Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!", "The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ", "OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!", "Yes you're totally right :)" ]
1,613,507,884,000
1,613,759,231,000
1,613,759,231,000
CONTRIBUTOR
null
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method. Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well! ``` $ pip list # only the relevant libraries/versions datasets 1.2.1 pandas 1.0.3 pyarrow 3.0.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1895/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1893/comments
https://api.github.com/repos/huggingface/datasets/issues/1893/events
https://github.com/huggingface/datasets/issues/1893
809,556,503
MDU6SXNzdWU4MDk1NTY1MDM=
1,893
wmt19 is broken
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?", "Closing since this has been fixed by #1912" ]
1,613,500,798,000
1,614,793,322,000
1,614,793,322,000
CONTRIBUTOR
null
1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract return self.extract(self.download(url_or_urls)) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested mapped = [ File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested return function(data_struct) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download return cached_path(url_or_filename, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1893/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1892/comments
https://api.github.com/repos/huggingface/datasets/issues/1892/events
https://github.com/huggingface/datasets/issues/1892
809,554,174
MDU6SXNzdWU4MDk1NTQxNzQ=
1,892
request to mirror wmt datasets, as they are really slow to download
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts", "Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download", "I'm downloading them.\r\nI'm starting with the ones hosted on http://data.statmt.org which are the slowest ones", "@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)", "Closing since the urls were changed to mirror urls in #1912 ", "Hi there! What about mirroring other datasets like [CCAligned](http://www.statmt.org/cc-aligned/) as well? All of them are really slow to download..." ]
1,613,500,571,000
1,635,231,342,000
1,616,673,203,000
CONTRIBUTOR
null
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1892/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1890/comments
https://api.github.com/repos/huggingface/datasets/issues/1890/events
https://github.com/huggingface/datasets/pull/1890
809,395,586
MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx
1,890
Reformat dataset cards section titles
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,488,307,000
1,613,488,354,000
1,613,488,353,000
MEMBER
null
Titles are formatted like [Foo](#foo) instead of just Foo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1890", "html_url": "https://github.com/huggingface/datasets/pull/1890", "diff_url": "https://github.com/huggingface/datasets/pull/1890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1890.patch", "merged_at": 1613488353000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1889/comments
https://api.github.com/repos/huggingface/datasets/issues/1889/events
https://github.com/huggingface/datasets/pull/1889
809,276,015
MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz
1,889
Implement to_dict and to_pandas for Dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Next step is going to add these two in the documentation ^^" ]
1,613,479,099,000
1,613,673,757,000
1,613,673,754,000
CONTRIBUTOR
null
With options to return a generator or the full dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1889", "html_url": "https://github.com/huggingface/datasets/pull/1889", "diff_url": "https://github.com/huggingface/datasets/pull/1889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1889.patch", "merged_at": 1613673754000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
https://api.github.com/repos/huggingface/datasets/issues/1888/events
https://github.com/huggingface/datasets/pull/1888
809,241,123
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
1,888
Docs for adding new column on formatted dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Close #1872" ]
1,613,475,900,000
1,617,112,863,000
1,613,476,737,000
MEMBER
null
As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added Close #1872
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888", "html_url": "https://github.com/huggingface/datasets/pull/1888", "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "merged_at": 1613476737000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1887/comments
https://api.github.com/repos/huggingface/datasets/issues/1887/events
https://github.com/huggingface/datasets/pull/1887
809,229,809
MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy
1,887
Implement to_csv for Dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy", "Good catch ! I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)", "Raising this error for booleans was introduced in https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...", "I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)", "@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) " ]
1,613,474,849,000
1,613,727,719,000
1,613,727,719,000
CONTRIBUTOR
null
cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object The writing is batched to avoid loading the whole table in memory
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1887/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1887", "html_url": "https://github.com/huggingface/datasets/pull/1887", "diff_url": "https://github.com/huggingface/datasets/pull/1887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1887.patch", "merged_at": 1613727719000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1886/comments
https://api.github.com/repos/huggingface/datasets/issues/1886/events
https://github.com/huggingface/datasets/pull/1886
809,221,885
MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz
1,886
Common voice
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have to figure out how to host them). An even more creative idea would be to host the dataset inside a torrent and figure out a way to download specific datasets from within that torrent.\r\n\r\nHere is some information about the download authorization. They are hosting the data on S3.\r\n\r\nhttps://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html\r\n\r\nHere is an example of how a download link looks.\r\n\r\nhttps://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-6.1-2020-12-11/nl.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3ND4UAQXB%2F20210217%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210217T080740Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEGIaDCC6ALh%2FwIK9ovvRdCKSBCs5WaSJNsZ2h0SnhpnWFv4yiAJHJTe%2BY6pBcCqadRMs0RABHeQ2n1QDACJ5V9WOqIHfMfT0AI%2Bfe6iFkTGLgRrJOMYpgV%2FmIBcXCjeb72r4ZvudMA8tprkSxZsEh53bJkIDQx1tXqfpz0yoefM0geD3461suEGhHnLIyiwffrUpRg%2BkNZN9%2FLZZXpF5F2pogieKKV533Jetkd1xlWOR%2Bem9R2bENu2RV563XX3JvbWxSYN9IHkVT1xwd4ZiOpUtX7%2F2RoluJUKw%2BUPpyml3J%2FOPPGdr7CyPLjqNxdq9ceRi8lRybty64XvNYZGt45VNTQ3pkTTz4VpUCJAGkgxq95Ve%2BOwW%2Fsc8JtblTFKrH11vej62NB7C0n7JPPS4SLKXHKW%2B7ZbybcNf3BnsAVouPdsGTMslcgkD81b9trnjyXJdOZkzdHUf2KcWVXVceEsZnMhcCZQ1cJpI7qXPEk8QrKCQcNByPLHmPIEdHpj9IrIBKDkl2qO7VX7CCB65WDt2eZRltOcNHXWVFXFktMdQOQztI1j0XSZz2iOX4jPKKaqz193VEytlAqmehNi8pePOnxkP9Z1SP7d3I6rayuBF3phmpHxw499tY3ECYYgoCnJ6QSFa3KxMjFmEpQlmjxuwEMHd4CDL2FJYGcCiIxbCcL1r8ZE3%2BbGdcu7PRsVCHX3Huh%2FqGIaF4h40FgteN6teyKCHKOebs4EGMipb9xmEMZ9ZbVopz4bkhLdMTrjKon9w624Xem0MTPqN7XY%2BB6lRgrW8rd4%3D&X-Amz-Signature=28eabdfce72a472a70b0f9e1e2c37fe1471b5ec8ed60614fbe900bfa97ae1ac8&X-Amz-SignedHeaders=host\r\n\r\nIt could be that we simply need to make a http-request with the right parameters and we can download the datasets.", "> Wow, this looks great already! It's really a difficult dataset so thanks a lot for opening a PR.\r\n> I think the tagging tool is not too important for now and we can take a look at that later!\r\n> \r\n> At the moment, it would be very good to correctly generate some dummy data for all the possible languages. I think the structure of the `.tsv` file as you've noted in the PR is the one we want to use as the structure for `features = datasets.Features(`\r\n> \r\n> The splits `'Train\"`, `\"Test\"`, `\"Validation\"` look great to me! Because this is a special dataset that also has files called `\"Invalidated\"` I think the best option is to also add those as splits, _i.e._ `\"other\"`, `\"invalidated\"`, `\"reported\"`, `\"validated\"` . Those split names can be gives as shown here for example:\r\n> \r\n> https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L124\r\n> \r\n> Also putting @lhoestq in cc here to hear his opinion on the different splits. @lhoestq Common Voicie is a crowd collected dataset where if a collected data sample did not receive enough \"up_votes\" from the community -> then it is (If I understood it correctly) marked as invalid -> hence the file `\"invalidated.tsv\"`. 
I think this is still useful data, so I would include it what do you think?\r\n> \r\n> @BirgerMoell let me know if you have any more questions :-)\r\n\r\nI think reporting is a separate feature. People can help annotate the data and then they can report things while annotating.\r\nhttps://commonvoice.mozilla.org/sv-SE/listen\r\n\r\nHere is the interface that shows reporting and the thumbs up and down which gives upvotes and downvotes.\r\n<img src=\"https://i.imgur.com/utWjszt.png\" height=\"800px\">\r\n", "I added splits and features. I'm not sure how you want me to generate dummy data for all the languages?", "Hey @BirgerMoell,\r\n\r\nI tweaked your dataset file a bit to have a first working version. To test this dataset downloading script, you can do the following:\r\n\r\n- 1) Download the Common Voice Georgian dataset from https://commonvoice.mozilla.org/en/datasets (It's pretty small which is why I chose it)\r\n- 2) Run the following command using this branch: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"./../datasets/datasets/common_voice\", \"Georgian\", data_dir=\"./cv-corpus-6.1-2020-12-11/ka/\", split=\"train\")\r\n```\r\n\r\nNote that I'm loading a local version of the dataset script (`\"./../datasets/datasets/common_voice/\"` points to the folder in your branch) and that I also insert the downloaded data with the `data_dir` arg.\r\n\r\n-> You'll see that the data is correctly loaded and that `ds` contains all the information we need.\r\n\r\nNow there are a lot of different datasets on Common Voice, so it probably takes too much time to test all of those, but maybe you can test whether the current script works as well *e.g.* for Swedish, 3,4 other languages.\r\n\r\nIt would be very nice if we can use the exact same structure for all languages, meaning that we don't have to change the `datasets.Features(...)` structure depending on the language, but can use the exact same one for every language.\r\n\r\nIf everything works as expected we can then go over to cleaning the script and seeing how to add dummy data tests for it." ]
1,613,474,170,000
1,615,315,891,000
1,615,315,891,000
CONTRIBUTOR
null
Started filling out information about the dataset and a dataset card. To do: create the tagging file; update the common_voice.py file with more information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1886", "html_url": "https://github.com/huggingface/datasets/pull/1886", "diff_url": "https://github.com/huggingface/datasets/pull/1886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1886.patch", "merged_at": 1615315891000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1885/comments
https://api.github.com/repos/huggingface/datasets/issues/1885/events
https://github.com/huggingface/datasets/pull/1885
808,881,501
MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz
1,885
add missing info on how to add large files
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,432,799,000
1,613,492,539,000
1,613,475,852,000
CONTRIBUTOR
null
Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1885", "html_url": "https://github.com/huggingface/datasets/pull/1885", "diff_url": "https://github.com/huggingface/datasets/pull/1885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1885.patch", "merged_at": 1613475852000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1884/comments
https://api.github.com/repos/huggingface/datasets/issues/1884/events
https://github.com/huggingface/datasets/pull/1884
808,755,894
MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5
1,884
dtype fix when using numpy arrays
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,415,325,000
1,627,642,878,000
1,627,642,878,000
CONTRIBUTOR
null
As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was previously lost due to the numpy array -> list -> pyarrow array conversion.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1884/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1884", "html_url": "https://github.com/huggingface/datasets/pull/1884", "diff_url": "https://github.com/huggingface/datasets/pull/1884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1884.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1883/comments
https://api.github.com/repos/huggingface/datasets/issues/1883/events
https://github.com/huggingface/datasets/pull/1883
808,750,623
MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz
1,883
Add not-in-place implementations for several dataset transforms
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)", "I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.", "Now let's update the documentation to use the new methods x)" ]
1,613,414,666,000
1,614,178,489,000
1,614,178,406,000
CONTRIBUTOR
null
Should we deprecate in-place versions of such methods?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883", "html_url": "https://github.com/huggingface/datasets/pull/1883", "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "merged_at": 1614178406000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1881/comments
https://api.github.com/repos/huggingface/datasets/issues/1881/events
https://github.com/huggingface/datasets/pull/1881
808,578,200
MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw
1,881
`list_datasets()` returns a list of strings, not objects
{ "login": "pminervini", "id": 227357, "node_id": "MDQ6VXNlcjIyNzM1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pminervini", "html_url": "https://github.com/pminervini", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "organizations_url": "https://api.github.com/users/pminervini/orgs", "repos_url": "https://api.github.com/users/pminervini/repos", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "received_events_url": "https://api.github.com/users/pminervini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,398,815,000
1,613,401,789,000
1,613,401,788,000
CONTRIBUTOR
null
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1881", "html_url": "https://github.com/huggingface/datasets/pull/1881", "diff_url": "https://github.com/huggingface/datasets/pull/1881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1881.patch", "merged_at": 1613401788000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1880/comments
https://api.github.com/repos/huggingface/datasets/issues/1880/events
https://github.com/huggingface/datasets/pull/1880
808,563,439
MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0
1,880
Update multi_woz_v22 checksums
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,397,618,000
1,613,398,699,000
1,613,398,698,000
MEMBER
null
As noticed in #1876, the checksums of this dataset are outdated. I updated them in this PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1880", "html_url": "https://github.com/huggingface/datasets/pull/1880", "diff_url": "https://github.com/huggingface/datasets/pull/1880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1880.patch", "merged_at": 1613398698000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1879/comments
https://api.github.com/repos/huggingface/datasets/issues/1879/events
https://github.com/huggingface/datasets/pull/1879
808,541,442
MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx
1,879
Replace flatten_nested
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)" ]
1,613,395,780,000
1,613,759,714,000
1,613,759,714,000
MEMBER
null
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is a list, dict, etc.) will live only inside this class. I have also generalized the flattening so that it now handles multiple levels of nesting.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1879", "html_url": "https://github.com/huggingface/datasets/pull/1879", "diff_url": "https://github.com/huggingface/datasets/pull/1879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1879.patch", "merged_at": 1613759714000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1878/comments
https://api.github.com/repos/huggingface/datasets/issues/1878/events
https://github.com/huggingface/datasets/pull/1878
808,526,883
MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3
1,878
Add LJ Speech dataset
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n\r\n2) That's perfect! Yeah good question - we're currently thinking about a better design with @lhoestq \r\n\r\n3) Again tagging @yjernite & @lhoestq here - guess we should add this license though!", "Thanks @anton-l for adding this one :)\r\nAbout the points you mentioned:\r\n1. Sure as soon as we've updated the tag sets in https://github.com/huggingface/datasets-tagging/blob/main/task_set.json, we can update the tags in this dataset card and also in the other audio dataset card.\r\n2. For now we just try to have them as small as possible but we may switch to S3/LFS at one point indeed\r\n3. If it's not part of the license set at https://github.com/huggingface/datasets-tagging/blob/main/license_set.json we can add it to this license set\r\n\r\nFor now it's ok to have the other-* tags but we'll update them very soon", "Let's merge this one and then we'll update the tags for the audio datasets. We'll probably also add something like this:\r\n```\r\ntype:\r\n- text\r\n- audio\r\n```\r\n\r\nThank you so much for adding this one, good job !" ]
1,613,394,642,000
1,613,417,981,000
1,613,398,689,000
CONTRIBUTOR
null
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/), as requested by #1841. The ASR format is based on #1767. There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list? - Similarly to #1767, this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo? - The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well? Pinging @patrickvonplaten to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1878", "html_url": "https://github.com/huggingface/datasets/pull/1878", "diff_url": "https://github.com/huggingface/datasets/pull/1878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1878.patch", "merged_at": 1613398689000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1877/comments
https://api.github.com/repos/huggingface/datasets/issues/1877/events
https://github.com/huggingface/datasets/issues/1877
808,462,272
MDU6SXNzdWU4MDg0NjIyNzI=
1,877
Allow concatenation of both in-memory and on-disk datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).", "Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan", "Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle/unpickle concatenated datasets", "Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?", "Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`?", "> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https://github.com/huggingface/datasets/issues/1949\r\nHowever I don't think this will affect map." ]
1,613,389,186,000
1,616,777,518,000
1,616,777,518,000
MEMBER
null
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling, for example: - an in-memory dataset can just be pickled/unpickled in-memory - an on-disk dataset can be unloaded to keep only the filepaths when pickling, and then reloaded from disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future. One idea would be to define a list of sources, where each source implements a way to reload its corresponding pyarrow Table. Then the dataset would be the concatenation of all these tables. Depending on the source type, the serialization using pickle would be different. In-memory data would be copied, while on-disk data would simply be replaced by the paths to the data. If you have some ideas you would like to share about the design/API feel free to do so :) cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1877/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1876/comments
https://api.github.com/repos/huggingface/datasets/issues/1876/events
https://github.com/huggingface/datasets/issues/1876
808,025,859
MDU6SXNzdWU4MDgwMjU4NTk=
1,876
load_dataset("multi_woz_v22") NonMatchingChecksumError
{ "login": "Vincent950129", "id": 5945326, "node_id": "MDQ6VXNlcjU5NDUzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vincent950129", "html_url": "https://github.com/Vincent950129", "followers_url": "https://api.github.com/users/Vincent950129/followers", "following_url": "https://api.github.com/users/Vincent950129/following{/other_user}", "gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}", "starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions", "organizations_url": "https://api.github.com/users/Vincent950129/orgs", "repos_url": "https://api.github.com/users/Vincent950129/repos", "events_url": "https://api.github.com/users/Vincent950129/events{/privacy}", "received_events_url": "https://api.github.com/users/Vincent950129/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.", "I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```", "Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']", "This must be related to https://github.com/budzianowski/multiwoz/pull/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification." ]
1,613,330,088,000
1,628,100,480,000
1,628,100,480,000
NONE
null
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json'] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1876/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1875/comments
https://api.github.com/repos/huggingface/datasets/issues/1875/events
https://github.com/huggingface/datasets/pull/1875
807,887,267
MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0
1,875
Adding sari metric
{ "login": "ddhruvkr", "id": 6061911, "node_id": "MDQ6VXNlcjYwNjE5MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddhruvkr", "html_url": "https://github.com/ddhruvkr", "followers_url": "https://api.github.com/users/ddhruvkr/followers", "following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}", "gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions", "organizations_url": "https://api.github.com/users/ddhruvkr/orgs", "repos_url": "https://api.github.com/users/ddhruvkr/repos", "events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}", "received_events_url": "https://api.github.com/users/ddhruvkr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,277,515,000
1,613,577,387,000
1,613,577,387,000
CONTRIBUTOR
null
Adding the SARI metric, which is used in the evaluation of text simplification. This is required as part of the GEM benchmark.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1875", "html_url": "https://github.com/huggingface/datasets/pull/1875", "diff_url": "https://github.com/huggingface/datasets/pull/1875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1875.patch", "merged_at": 1613577386000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1874/comments
https://api.github.com/repos/huggingface/datasets/issues/1874/events
https://github.com/huggingface/datasets/pull/1874
807,786,094
MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
1,874
Adding Europarl Bilingual dataset
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.", "I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos", "I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.", "Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. 
All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help", "I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!", "Is there something else I should do? If not can this be integrated?", "Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`" ]
1,613,235,724,000
1,614,854,302,000
1,614,854,302,000
CONTRIBUTOR
null
Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases, about 1 in 10M, some keys reference nonexistent sentences). I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874", "html_url": "https://github.com/huggingface/datasets/pull/1874", "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "merged_at": 1614854302000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1873/comments
https://api.github.com/repos/huggingface/datasets/issues/1873/events
https://github.com/huggingface/datasets/pull/1873
807,750,745
MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy
1,873
add iapp_wiki_qa_squad
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,223,267,000
1,613,485,318,000
1,613,485,318,000
CONTRIBUTOR
null
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1873/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873", "html_url": "https://github.com/huggingface/datasets/pull/1873", "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "merged_at": 1613485318000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1872/comments
https://api.github.com/repos/huggingface/datasets/issues/1872/events
https://github.com/huggingface/datasets/issues/1872
807,711,935
MDU6SXNzdWU4MDc3MTE5MzU=
1,872
Adding a new column to the dataset after set_format was called
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column to be unformatted you can re-run this line:\r\n```python\r\ndata.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n```", "Hi, thanks that solved my problem. Maybe mention that in the documentation. ", "Ok cool :) \r\nAlso I just did a PR to mention this behavior in the documentation", "Closed by #1888" ]
1,613,207,675,000
1,617,112,905,000
1,617,112,905,000
NONE
null
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if its a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). Below some pseudo code: ```python def augment_func(sample: Dict) -> Dict: # do something return { "some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor "some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor "NEW_COLUMN": targets, # <-- list of strings } data = datasets.load_dataset(__file__, data_dir="...", split="train") data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True) augmented_dataset = data.map(augment_func, batched=False) for sample in augmented_dataset: print(sample) # fails ``` and the exception: ```python Traceback (most recent call last): File "dataset.py", line 487, in <module> main() File "dataset.py", line 471, in main for sample in augmented_dataset: File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__ yield self._getitem( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem outputs = self._convert_outputs( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) TypeError: new(): invalid data type 'str' ``` Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/1872/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1871/comments
https://api.github.com/repos/huggingface/datasets/issues/1871/events
https://github.com/huggingface/datasets/pull/1871
807,697,671
MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
1,871
Add newspop dataset
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the changes :)\r\nmerging" ]
1,613,201,483,000
1,615,198,365,000
1,615,198,365,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871", "html_url": "https://github.com/huggingface/datasets/pull/1871", "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "merged_at": 1615198365000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1870/comments
https://api.github.com/repos/huggingface/datasets/issues/1870/events
https://github.com/huggingface/datasets/pull/1870
807,306,564
MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4
1,870
Implement Dataset add_item
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/3", "html_url": "https://github.com/huggingface/datasets/milestone/3", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "id": 6644287, "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "title": "1.7", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 3, "state": "closed", "created_at": 1617974191000, "updated_at": 1622478053000, "due_on": 1620975600000, "closed_at": 1622478053000 }
[ "Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.", "Sure ! I opened an issue #1877 so we can discuss this specific aspect :)", "I am going to implement this consolidation step in #2151.", "Sounds good !", "I retake this PR once the consolidation step is already implemented by #2151." ]
1,613,142,226,000
1,619,172,091,000
1,619,172,091,000
MEMBER
null
Implement `Dataset.add_item`. Close #1854.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870", "html_url": "https://github.com/huggingface/datasets/pull/1870", "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "merged_at": 1619172090000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1869/comments
https://api.github.com/repos/huggingface/datasets/issues/1869/events
https://github.com/huggingface/datasets/pull/1869
807,159,835
MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy
1,869
Remove outdated commands in favor of huggingface-cli
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,129,290,000
1,613,146,389,000
1,613,146,388,000
MEMBER
null
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869", "html_url": "https://github.com/huggingface/datasets/pull/1869", "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "merged_at": 1613146388000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1868/comments
https://api.github.com/repos/huggingface/datasets/issues/1868/events
https://github.com/huggingface/datasets/pull/1868
807,138,159
MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0
1,868
Update oscar sizes
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,127,335,000
1,613,127,787,000
1,613,127,786,000
MEMBER
null
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868", "html_url": "https://github.com/huggingface/datasets/pull/1868", "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "merged_at": 1613127786000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1867/comments
https://api.github.com/repos/huggingface/datasets/issues/1867/events
https://github.com/huggingface/datasets/issues/1867
807,127,181
MDU6SXNzdWU4MDcxMjcxODE=
1,867
ERROR WHEN USING SET_TRANSFORM()
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/alexvaca0/followers", "following_url": "https://api.github.com/users/alexvaca0/following{/other_user}", "gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions", "organizations_url": "https://api.github.com/users/alexvaca0/orgs", "repos_url": "https://api.github.com/users/alexvaca0/repos", "events_url": "https://api.github.com/users/alexvaca0/events{/privacy}", "received_events_url": "https://api.github.com/users/alexvaca0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/src/transformers/trainer.py#L442\r\n\r\nThis line sets the format to not return certain unused columns. But this has two issues:\r\n1. it forgets to also set the format_kwargs (this causes the error you got):\r\n```python\r\ndataset.set_format(type=dataset.format[\"type\"], columns=columns, format_kwargs=dataset.format[\"format_kwargs\"])\r\n```\r\n2. the Trainer wants to keep only the fields that are used as input for a model. However for a dataset with a transform, the output fields are often different from the columns fields. For example from a column \"text\" in the dataset, the strings can be transformed on-the-fly into \"input_ids\". If you want your dataset to only output certain fields and not other you must change your transform function.\r\n", "FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.\r\n\r\n@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need to change `remove_unused_columns` to `False`. We might switch the default of that argument in the next version if that proves too bug-proof.", "I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. However, TPU training is taking forever, in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that \"on the fly\" tokenization of batches is slowing down TPU training to that extent?", "I'm pretty sure this is because of padding but @sgugger might know better", "I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything at each step instead of using the same graph, which will be very slow, so you should double check you are using padding to make everything the exact same shape. ", "I have tried now on a GPU and it goes smooth! Amazing feature .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work all HuggingFace team!! :clap: ", "In the end, to make it work I turned to A-100 gpus instead of TPUS, among other changes. Set_transform doesn't work as expected and slows down training very much even in GPUs, and applying map destroys the disk, as it multiplies by 100 the size of the data passed to it (due to inefficient implementation converting strings to int64 floats I guess). For that reason, I chose to use datasets to load the data as text, and then edit the Collator from Transformers to tokenize every batch it receives before processing it. That way, I'm being able to train fast, without memory breaks, without the disk being unnecessarily filled, while making use of GPUs almost all the time I'm paying for them (the map function over the whole dataset took ~15hrs, in which you're not training at all). 
I hope this info helps others that are looking for training a language model from scratch cheaply, I'm going to close the issue as the optimal solution I found after many experiments to the problem posted in it is explained above. ", "Great comment @alexvaca0 . I think that we could re-open the issue as a reformulation of why it takes so much space to save the arrow. Saving a 1% of oscar corpus takes more thank 600 GB (it breaks when it pass 600GB because it is the free memory that I have at this moment) when the full dataset is 1,3 TB. I have a 1TB M.2 NVMe disk that I can not train on because the saved .arrow files goes crazily big. If you can share your Collator I will be grateful. " ]
1,613,126,311,000
1,614,607,464,000
1,614,168,043,000
NONE
null
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional argument: 'transform' [INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text. Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn main() File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main data_collator=data_collator, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__ self._remove_unused_columns(self.train_dataset, description="training") File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns dataset.set_format(type=dataset.format["type"], columns=columns) File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper out = func(self, *args, **kwargs) File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format _ = get_formatter(type, **format_kwargs) File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter return _FORMAT_TYPES[format_type](**format_kwargs) TypeError: __init__() missing 1 required positional argument: 'transform' ``` The code I'm using: ```{python} def tokenize_function(examples): # Remove empty lines examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()] return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length) datasets.set_transform(tokenize_function) data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=datasets["train"] if training_args.do_train else None, eval_dataset=datasets["val"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) ``` I've installed from source, master branch.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1867/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1866/comments
https://api.github.com/repos/huggingface/datasets/issues/1866/events
https://github.com/huggingface/datasets/pull/1866
807,017,816
MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1
1,866
Add dataset for Financial PhraseBank
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the feedback. All accepted and metadata regenerated." ]
1,613,115,056,000
1,613,571,756,000
1,613,571,756,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866", "html_url": "https://github.com/huggingface/datasets/pull/1866", "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "merged_at": 1613571756000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1865/comments
https://api.github.com/repos/huggingface/datasets/issues/1865/events
https://github.com/huggingface/datasets/pull/1865
806,388,290
MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2
1,865
Updated OPUS Open Subtitles Dataset with metadata information
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of the dataset script, like \"./datasets/open_subtitles\". Otherwise the dataset is loaded from the master branch on github.\r\nHope that clarifies things a bit\r\n\r\nAnd of course feel free to add methods or classmethods to your builder.\r\n", "Great! Thank you :)\r\nI'll close the issue as well." ]
1,613,049,986,000
1,613,738,289,000
1,613,149,184,000
CONTRIBUTOR
null
Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss? Questions: - Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1865/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865", "html_url": "https://github.com/huggingface/datasets/pull/1865", "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "merged_at": 1613149184000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1864/comments
https://api.github.com/repos/huggingface/datasets/issues/1864/events
https://github.com/huggingface/datasets/issues/1864
806,172,843
MDU6SXNzdWU4MDYxNzI4NDM=
1,864
Add Winogender Schemas
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias" ]
1,613,031,518,000
1,613,031,591,000
1,613,031,591,000
NONE
null
## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper:** https://arxiv.org/abs/1804.09301 - **Data:** https://github.com/rudinger/winogender-schemas (see data directory) - **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1864/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1862/comments
https://api.github.com/repos/huggingface/datasets/issues/1862/events
https://github.com/huggingface/datasets/pull/1862
805,722,293
MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx
1,862
Fix writing GPU Faiss index
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,978,323,000
1,612,981,068,000
1,612,981,067,000
MEMBER
null
As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu` Close #1859
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862", "html_url": "https://github.com/huggingface/datasets/pull/1862", "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "merged_at": 1612981067000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1861/comments
https://api.github.com/repos/huggingface/datasets/issues/1861/events
https://github.com/huggingface/datasets/pull/1861
805,631,215
MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
1,861
Fix Limit url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,971,896,000
1,612,973,700,000
1,612,973,699,000
MEMBER
null
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861", "html_url": "https://github.com/huggingface/datasets/pull/1861", "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "merged_at": 1612973698000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1860/comments
https://api.github.com/repos/huggingface/datasets/issues/1860/events
https://github.com/huggingface/datasets/pull/1860
805,510,037
MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz
1,860
Add loading from the Datasets Hub + add relative paths in download manager
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq/test\" dataset I added on the hub and it works fine :) ", "Here is the PR adding support for datasets repos in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/14" ]
1,612,963,451,000
1,613,157,210,000
1,613,157,209,000
MEMBER
null
With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_dataset("lhoestq/custom_squad") ``` To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via ```python _URLS = { "train": "train-v1.1.json", "dev": "dev-v1.1.json", } downloaded_files = dl_manager.download_and_extract(_URLS) ``` To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url). I also had to add the auth header of the requests to huggingface.co for private datasets repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1860/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860", "html_url": "https://github.com/huggingface/datasets/pull/1860", "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "merged_at": 1613157209000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1859/comments
https://api.github.com/repos/huggingface/datasets/issues/1859/events
https://github.com/huggingface/datasets/issues/1859
805,479,025
MDU6SXNzdWU4MDU0NzkwMjU=
1,859
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
{ "login": "corticalstack", "id": 3995321, "node_id": "MDQ6VXNlcjM5OTUzMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/corticalstack", "html_url": "https://github.com/corticalstack", "followers_url": "https://api.github.com/users/corticalstack/followers", "following_url": "https://api.github.com/users/corticalstack/following{/other_user}", "gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}", "starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions", "organizations_url": "https://api.github.com/users/corticalstack/orgs", "repos_url": "https://api.github.com/users/corticalstack/repos", "events_url": "https://api.github.com/users/corticalstack/events{/privacy}", "received_events_url": "https://api.github.com/users/corticalstack/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR", "I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next release of `datasets` (in a few days)", "Thanks for such a quick fix and merge to master, pip installed git master, tested all OK" ]
1,612,960,860,000
1,612,981,932,000
1,612,981,067,000
NONE
null
Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_available()` reports: ``` Cuda is available cuda:0 ``` Adding index, device=0 for GPU. `dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)` However, during a quick debug, self.faiss_index has no attr "device" when checked in` search.py, method save`, so fails to transform gpu index to cpu index. If I add index without device, index is saved OK. ``` def save(self, file: str): """Serialize the FaissIndex on disk""" import faiss # noqa: F811 if ( hasattr(self.faiss_index, "device") and self.faiss_index.device is not None and self.faiss_index.device > -1 ): index = faiss.index_gpu_to_cpu(self.faiss_index) else: index = self.faiss_index faiss.write_index(index, file) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1859/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1858/comments
https://api.github.com/repos/huggingface/datasets/issues/1858/events
https://github.com/huggingface/datasets/pull/1858
805,477,774
MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx
1,858
Clean config getenvs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,960,754,000
1,612,972,350,000
1,612,972,349,000
MEMBER
null
Following #1848 Remove double getenv calls and fix one issue with rarfile cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858", "html_url": "https://github.com/huggingface/datasets/pull/1858", "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "merged_at": 1612972349000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1857/comments
https://api.github.com/repos/huggingface/datasets/issues/1857/events
https://github.com/huggingface/datasets/issues/1857
805,391,107
MDU6SXNzdWU4MDUzOTExMDc=
1,857
Unable to upload "community provided" dataset - 400 Client Error
{ "login": "mwrzalik", "id": 1376337, "node_id": "MDQ6VXNlcjEzNzYzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mwrzalik", "html_url": "https://github.com/mwrzalik", "followers_url": "https://api.github.com/users/mwrzalik/followers", "following_url": "https://api.github.com/users/mwrzalik/following{/other_user}", "gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions", "organizations_url": "https://api.github.com/users/mwrzalik/orgs", "repos_url": "https://api.github.com/users/mwrzalik/repos", "events_url": "https://api.github.com/users/mwrzalik/events{/privacy}", "received_events_url": "https://api.github.com/users/mwrzalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c maybe we can make improve the error message ?" ]
1,612,953,541,000
1,627,967,173,000
1,627,967,173,000
CONTRIBUTOR
null
Hi, i'm trying to a upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username Proceed? [Y/n] Y Uploading... This might take a while if files are large 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign huggingface.co migrated to a new model hosting system. You need to upgrade to transformers v3.5+ to upload new models. More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you! ``` I'm using the latest releases of datasets and transformers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1857/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1855/comments
https://api.github.com/repos/huggingface/datasets/issues/1855/events
https://github.com/huggingface/datasets/pull/1855
805,256,579
MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
1,855
Minor fix in the docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,942,063,000
1,612,960,389,000
1,612,960,389,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855", "html_url": "https://github.com/huggingface/datasets/pull/1855", "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "merged_at": 1612960389000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1854/comments
https://api.github.com/repos/huggingface/datasets/issues/1854/events
https://github.com/huggingface/datasets/issues/1854
805,204,397
MDU6SXNzdWU4MDUyMDQzOTc=
1,854
Feature Request: Dataset.add_item
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\nds = Dataset.from_dict(data)\r\nassert (ds[\"input_ids\"][0] == np.array([4,4,2])).all()\r\n```", "Hi @sshleifer :) \r\n\r\nWe don't have methods like `Dataset.add_batch` or `Dataset.add_entry/add_item` yet.\r\nBut that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ?\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\ntokenized = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\n# API suggestion (not available yet)\r\nd = Dataset()\r\nfor input_ids in tokenized:\r\n d.add_item({\"input_ids\": input_ids})\r\n\r\nprint(d[0][\"input_ids\"])\r\n# [4, 4, 2]\r\n```\r\n\r\nCurrently you can define a dataset with what @albertvillanova suggest, or via a generator using dataset builders. It's also possible to [concatenate datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets).", "Your API looks perfect @lhoestq, thanks!" ]
1,612,937,160,000
1,619,172,090,000
1,619,172,090,000
MEMBER
null
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`. Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries. ### Desired API ```python import numpy as np tokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5]) def build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset: """FIXME""" dataset = EmptyDataset() for t in tokenized: dataset.append(t) return dataset ds = build_dataset_from_tokenized(tokenized) assert (ds[0] == np.array([4,4,2])).all() ``` ### What I tried grep, google for "add one entry at a time", "datasets.append" ### Current Code This code achieves the same result but doesn't fit into the `add_item` abstraction. ```python dataset = load_dataset('text', data_files={'train': 'train.txt'}) tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096) def tokenize_function(examples): ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids'] return {'input_ids': [x[1:] for x in ids]} ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache) print(ds['train'][0]) => np array ``` Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1854/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1853/comments
https://api.github.com/repos/huggingface/datasets/issues/1853/events
https://github.com/huggingface/datasets/pull/1853
804,791,166
MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4
1,853
Configure library root logger at the module level
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,894,272,000
1,612,960,354,000
1,612,960,354,000
MEMBER
null
Configure library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module level code is only runned once - no need of global variable - no need of threading lock
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853", "html_url": "https://github.com/huggingface/datasets/pull/1853", "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "merged_at": 1612960354000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1852/comments
https://api.github.com/repos/huggingface/datasets/issues/1852/events
https://github.com/huggingface/datasets/pull/1852
804,633,033
MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1
1,852
Add Arabic Speech Corpus
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,882,946,000
1,613,038,735,000
1,613,038,735,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852", "html_url": "https://github.com/huggingface/datasets/pull/1852", "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "merged_at": 1613038734000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1851/comments
https://api.github.com/repos/huggingface/datasets/issues/1851/events
https://github.com/huggingface/datasets/pull/1851
804,523,174
MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
1,851
set bert_score version dependency
{ "login": "pvl", "id": 3596, "node_id": "MDQ6VXNlcjM1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvl", "html_url": "https://github.com/pvl", "followers_url": "https://api.github.com/users/pvl/followers", "following_url": "https://api.github.com/users/pvl/following{/other_user}", "gists_url": "https://api.github.com/users/pvl/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvl/subscriptions", "organizations_url": "https://api.github.com/users/pvl/orgs", "repos_url": "https://api.github.com/users/pvl/repos", "events_url": "https://api.github.com/users/pvl/events{/privacy}", "received_events_url": "https://api.github.com/users/pvl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,875,067,000
1,612,880,508,000
1,612,880,508,000
CONTRIBUTOR
null
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851", "html_url": "https://github.com/huggingface/datasets/pull/1851", "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "merged_at": 1612880508000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
https://api.github.com/repos/huggingface/datasets/issues/1850/events
https://github.com/huggingface/datasets/pull/1850
804,412,249
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
1,850
Add cord 19 dataset
{ "login": "ggdupont", "id": 5583410, "node_id": "MDQ6VXNlcjU1ODM0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ggdupont", "html_url": "https://github.com/ggdupont", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "repos_url": "https://api.github.com/users/ggdupont/repos", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129", "@lhoestq FYI", "Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today", "Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging" ]
1,612,866,128,000
1,612,883,786,000
1,612,883,786,000
CONTRIBUTOR
null
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### Extras: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850", "html_url": "https://github.com/huggingface/datasets/pull/1850", "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "merged_at": 1612883785000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1849/comments
https://api.github.com/repos/huggingface/datasets/issues/1849/events
https://github.com/huggingface/datasets/issues/1849
804,292,971
MDU6SXNzdWU4MDQyOTI5NzE=
1,849
Add TIMIT
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n", "Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L93 (obviously replacing all the naming and links correctly...) and then you can list all possible outputs in the features dict: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L104 (words, phonemes should probably be of kind `datasets.Sequence(datasets.Value(\"string\"))` and texts I think should be of type `\"text\": datasets.Value(\"string\")`.\r\n\r\nWhen you've opened a first PR, I think it'll be much easier for us to take a look together :-) ", "I am sorry! I created the PR [#1903](https://github.com/huggingface/datasets/pull/1903#). Requesting your comments! CircleCI tests are failing, will address them along with your comments!" ]
1,612,855,781,000
1,615,787,977,000
1,615,787,977,000
MEMBER
null
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT - **Data:** *https://deepai.org/dataset/timit* - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1848/comments
https://api.github.com/repos/huggingface/datasets/issues/1848/events
https://github.com/huggingface/datasets/pull/1848
803,826,506
MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
1,848
Refactoring: Create config module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,809,831,000
1,612,960,175,000
1,612,960,175,000
MEMBER
null
Refactorize configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848", "html_url": "https://github.com/huggingface/datasets/pull/1848", "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "merged_at": 1612960175000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1847/comments
https://api.github.com/repos/huggingface/datasets/issues/1847/events
https://github.com/huggingface/datasets/pull/1847
803,824,694
MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
1,847
[Metrics] Add word error metric metric
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Feel free to merge once the CI is all green ;)" ]
1,612,809,675,000
1,612,893,201,000
1,612,893,201,000
MEMBER
null
This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847", "html_url": "https://github.com/huggingface/datasets/pull/1847", "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "merged_at": 1612893201000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1846/comments
https://api.github.com/repos/huggingface/datasets/issues/1846/events
https://github.com/huggingface/datasets/pull/1846
803,806,380
MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy
1,846
Make DownloadManager downloaded/extracted paths accessible
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...", "There could be several situations:\r\n- download a file with no extraction\r\n- download a file and extract it\r\n- download a file, extract it and then inside the output folder extract some more files\r\n- extract a local file (for datasets with data that are manually downloaded for example)\r\n- extract a local file, and then inside the output folder extract some more files\r\n\r\nSo I think it's ok to have `downloaded_paths` as a dict url -> downloaded_path and `extracted_paths` as a dict local_path -> extracted_path.", "OK. I am refactoring this. I have opened #1879, as an intermediate step..." ]
1,612,808,082,000
1,614,262,218,000
1,614,262,218,000
MEMBER
null
Make accessible the file paths downloaded/extracted by DownloadManager. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access to these from DatasetBuilder, I set the DownloadManager instance as DatasetBuilder attribute: object composition
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846", "html_url": "https://github.com/huggingface/datasets/pull/1846", "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "merged_at": 1614262218000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1845/comments
https://api.github.com/repos/huggingface/datasets/issues/1845/events
https://github.com/huggingface/datasets/pull/1845
803,714,493
MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz
1,845
Enable logging propagation and remove logging handler
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- it is the end user who has to implement any custom handlers\r\n- indeed, the previous logging problem with TensorFlow was due to the fact that absl did not follow best practices and had implemented a custom handler\r\n\r\nOur errors/warnings will be displayed anyway, even if we do not implement any custom handler. Since Python 3.2, logging has a built-in \"default\" handler (logging.lastResort) with the expected default behavior (sending error/warning messages to sys.stderr), which is used only if the end user has not configured any custom handler." ]
1,612,801,333,000
1,612,880,558,000
1,612,880,557,000
MEMBER
null
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library): > It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements. It could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management. Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`. cc @albertvillanova this should let you use capsys/caplog in pytest cc @LysandreJik @sgugger if you want to do the same in `transformers`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845", "html_url": "https://github.com/huggingface/datasets/pull/1845", "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "merged_at": 1612880557000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1844/comments
https://api.github.com/repos/huggingface/datasets/issues/1844/events
https://github.com/huggingface/datasets/issues/1844
803,588,125
MDU6SXNzdWU4MDM1ODgxMjU=
1,844
Update Open Subtitles corpus with original sentence IDs
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L103)", "Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang/year/imdb_id/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps://www.imdb.com/title/tt7006210/, https://www.opensubtitles.org/en/subtitles/7063319 and https://www.opensubtitles.org/en/subtitles/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n", "I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.", "Thanks for improving it @Valahaar :) ", "Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?", "Merged in #1865, closing. Thanks :)" ]
1,612,792,513,000
1,613,151,538,000
1,613,151,538,000
CONTRIBUTOR
null
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts. I think I should tag @abhishekkrthakur as he's the one who added it in the first place. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1844/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
https://api.github.com/repos/huggingface/datasets/issues/1841/events
https://github.com/huggingface/datasets/issues/1841
803,561,123
MDU6SXNzdWU4MDM1NjExMjM=
1,841
Add ljspeech
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
1,612,790,546,000
1,615,787,942,000
1,615,787,942,000
MEMBER
null
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.)* - **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/ - **Data:** *https://keithito.com/LJ-Speech-Dataset/* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1840/comments
https://api.github.com/repos/huggingface/datasets/issues/1840/events
https://github.com/huggingface/datasets/issues/1840
803,560,039
MDU6SXNzdWU4MDM1NjAwMzk=
1,840
Add common voice
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "I have started working on adding this dataset.", "Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.", "Let me know if you have any other questions", "I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886", "Awesome! I left a longer comment on the PR :-)", "I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?", "Will me merged next week - we're working on it :-)" ]
1,612,790,465,000
1,632,474,311,000
1,615,787,781,000
MEMBER
null
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1840/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1836/comments
https://api.github.com/repos/huggingface/datasets/issues/1836/events
https://github.com/huggingface/datasets/issues/1836
803,531,837
MDU6SXNzdWU4MDM1MzE4Mzc=
1,836
test.json has been removed from the limit dataset repo (breaks dataset)
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "organizations_url": "https://api.github.com/users/Paethon/orgs", "repos_url": "https://api.github.com/users/Paethon/repos", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "received_events_url": "https://api.github.com/users/Paethon/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Thanks for the heads up ! I'm opening a PR to fix that" ]
1,612,788,353,000
1,612,973,698,000
1,612,973,698,000
NONE
null
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1836/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
https://api.github.com/repos/huggingface/datasets/issues/1834/events
https://github.com/huggingface/datasets/pull/1834
803,517,094
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
1,834
Fixes base_url of limit dataset
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "organizations_url": "https://api.github.com/users/Paethon/orgs", "repos_url": "https://api.github.com/users/Paethon/repos", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "received_events_url": "https://api.github.com/users/Paethon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue." ]
1,612,787,195,000
1,612,788,170,000
1,612,788,170,000
NONE
null
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834", "html_url": "https://github.com/huggingface/datasets/pull/1834", "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
https://api.github.com/repos/huggingface/datasets/issues/1833/events
https://github.com/huggingface/datasets/pull/1833
803,120,978
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
1,833
Add OSCAR dataset card
{ "login": "pjox", "id": 635220, "node_id": "MDQ6VXNlcjYzNTIyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjox", "html_url": "https://github.com/pjox", "followers_url": "https://api.github.com/users/pjox/followers", "following_url": "https://api.github.com/users/pjox/following{/other_user}", "gists_url": "https://api.github.com/users/pjox/gists{/gist_id}", "starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjox/subscriptions", "organizations_url": "https://api.github.com/users/pjox/orgs", "repos_url": "https://api.github.com/users/pjox/repos", "events_url": "https://api.github.com/users/pjox/events{/privacy}", "received_events_url": "https://api.github.com/users/pjox/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ", "I just merged the tables as suggested 😄 . However I noticed something weird, the train sizes are identical for both the original and deduplicated files ... This is not normal, in general the original files are almost twice as big as the deduplicated ones 🤔 ", "Good catch @pjox ! I just checked and this is because the scripts doesn't handle having several blank lines in a row.\r\nBlank lines introduced by deduplication are currently not ignored so we end up with the same number of examples in the dataset as the original version (but with empty examples...)\r\nI fixed that in this [commit](https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383). I'm re-running the metadata generation for deduplicated configs.", "I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow", "> I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow\r\n\r\ngreat, I just wanted to report that I got error message \"NonMatchingSplitsSizesError\" when I tried to load one of the oscar dataset.", "Hi @cahya-wirawan, which configuration of oscar do you have this issue with ?", "Ok I see you're having this issue because I haven't updated the sizes yet ! I'm opening a PR\r\n\r\nI just checked and indeed there's an issue with the `deduplicated` configurations since the commit I mentioned above.\r\nI'm fixing this by using the new sizes I got yesterday :) \r\n", "I just updated the size in the table @pjox it should be good now :) \r\nI also updated the sizes in the dataset_infos.json in https://github.com/huggingface/datasets/pull/1868 (merged)", "Thanks @lhoestq for fixing the issue, it works now", "Thank you so much @lhoestq !" ]
1,612,748,389,000
1,613,138,965,000
1,613,138,904,000
CONTRIBUTOR
null
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833", "html_url": "https://github.com/huggingface/datasets/pull/1833", "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "merged_at": 1613138904000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1832/comments
https://api.github.com/repos/huggingface/datasets/issues/1832/events
https://github.com/huggingface/datasets/issues/1832
802,880,897
MDU6SXNzdWU4MDI4ODA4OTc=
1,832
Looks like nokogumbo is up-to-date now, so this is no longer needed.
{ "login": "JimmyJim1", "id": 68724553, "node_id": "MDQ6VXNlcjY4NzI0NTUz", "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimmyJim1", "html_url": "https://github.com/JimmyJim1", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}", "starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions", "organizations_url": "https://api.github.com/users/JimmyJim1/orgs", "repos_url": "https://api.github.com/users/JimmyJim1/repos", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "received_events_url": "https://api.github.com/users/JimmyJim1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,680,727,000
1,612,805,249,000
1,612,805,249,000
NONE
null
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1832/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1831/comments
https://api.github.com/repos/huggingface/datasets/issues/1831/events
https://github.com/huggingface/datasets/issues/1831
802,868,854
MDU6SXNzdWU4MDI4Njg4NTQ=
1,831
Some question about raw dataset download info in the project .
{ "login": "svjack", "id": 27874014, "node_id": "MDQ6VXNlcjI3ODc0MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/svjack", "html_url": "https://github.com/svjack", "followers_url": "https://api.github.com/users/svjack/followers", "following_url": "https://api.github.com/users/svjack/following{/other_user}", "gists_url": "https://api.github.com/users/svjack/gists{/gist_id}", "starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/svjack/subscriptions", "organizations_url": "https://api.github.com/users/svjack/orgs", "repos_url": "https://api.github.com/users/svjack/repos", "events_url": "https://api.github.com/users/svjack/events{/privacy}", "received_events_url": "https://api.github.com/users/svjack/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so you can download all the raw data files by calling `_split_generators` with a download manager:\r\n```python\r\nfrom datasets import DownloadManager\r\nfrom datasets.load import import_main_class\r\n\r\nconll2003_builder = import_main_class(...)\r\n\r\ndl_manager = DownloadManager()\r\nsplis_generators = conll2003_builder._split_generators(dl_manager)\r\n```\r\n\r\nThen you can see what files have been downloaded with\r\n```python\r\ndl_manager.get_recorded_sizes_checksums()\r\n```\r\nIt returns a dictionary with the format {url: {num_bytes: int, checksum: str}}\r\n\r\nThen you can get the actual location of the downloaded files with\r\n```python\r\nfrom datasets import cached_path\r\n\r\nlocal_path_to_downloaded_file = cached_path(url)\r\n```\r\n\r\n------------------\r\n\r\nNote that you can also get the urls from the Dataset object:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nconll2003 = load_dataset(\"conll2003\")\r\nprint(conll2003[\"train\"].download_checksums)\r\n```\r\nIt returns the same dictionary with the format {url: {num_bytes: int, checksum: str}}", "I am afraid that there is not a very straightforward way to get that location.\r\n\r\nAnother option, from _split_generators would be to use:\r\n- `dl_manager._download_config.cache_dir` to get the directory where all the raw downloaded files are:\r\n ```python\r\n download_dir = dl_manager._download_config.cache_dir\r\n ```\r\n- the function `datasets.utils.file_utils.hash_url_to_filename` to get the filenames of the raw downloaded files:\r\n ```python\r\n filenames = [hash_url_to_filename(url) for url in urls_to_download.values()]\r\n ```\r\nTherefore the complete path to the raw downloaded files would be the join of both:\r\n```python\r\ndownloaded_paths = [os.path.join(download_dir, filename) for filename in filenames]\r\n```\r\n\r\nMaybe it would be interesting to make these paths accessible more easily. I could work on this. What do you think, @lhoestq ?", "Sure it would be nice to have an easier access to these paths !\r\nThe dataset builder could have a method to return those, what do you think ?\r\nFeel free to work on this @albertvillanova , it would be a nice addition :) \r\n\r\nYour suggestion does work as well @albertvillanova if you complete it by specifying `etag=` to `hash_url_to_filename`.\r\n\r\nThe ETag is obtained by a HEAD request and is used to know if the file on the remote host has changed. Therefore if a file is updated on the remote host, then the hash returned by `hash_url_to_filename` is different.", "Once #1846 will be merged, the paths to the raw downloaded files will be accessible as:\r\n```python\r\nbuilder_instance.dl_manager.downloaded_paths\r\n``` " ]
1,612,676,016,000
1,614,262,218,000
1,614,262,218,000
NONE
null
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic it seems that i can not have the raw dataset download location in variable in downloaded_files in _split_generators. If someone also want use huggingface datasets as raw dataset downloader, how can he retrieve the raw dataset download path from attributes in datasets.dataset_dict.DatasetDict ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1831/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
https://api.github.com/repos/huggingface/datasets/issues/1829/events
https://github.com/huggingface/datasets/pull/1829
802,693,600
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
1,829
Add Tweet Eval Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,614,985,000
1,612,790,274,000
1,612,790,273,000
CONTRIBUTOR
null
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset at it only contained the name mappings, which are already present in the ClassLabels. 2. I have also exluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt). 3. I do not understand @abhishekkrthakur's example generator on #1407. Maybe he was trying to build up on code from some other dataset. Requesting @lhoestq to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829", "html_url": "https://github.com/huggingface/datasets/pull/1829", "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "merged_at": 1612790273000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1828/comments
https://api.github.com/repos/huggingface/datasets/issues/1828/events
https://github.com/huggingface/datasets/pull/1828
802,449,234
MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2
1,828
Add CelebA Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification or object detection datasets instead? (Your CIFAR-100 contribution will be super useful for example!)", "Hi @yjernite, You're welcome. I am enjoying adding new datasets :)\r\nBy \"pretty problematic\", are you referring to the ethical issues? I used TFDS's [CelebA](https://github.com/tensorflow/datasets/blob/5ef7861470896acb6f74dacba85036001e4f1b8c/tensorflow_datasets/image/celeba.py#L91) as a reference. Here they mention in a \"Note\" that CelebA \"may contain potential bias\". Can we not do the same? I skipped the note for now, and we can add it. However, if you feel this isn't the right time, then I won't pursue this further. \r\n\r\nBut, can this issue be handled at a later stage? Does this also apply for my Hateful Memes Issue #1810?\r\n\r\nAlso, how can I \r\n1. load a part of the dataset? since `load_dataset(<>,split='train[10:20]')` still loads all the examples.\r\n2. make `datasets_infos.json` for huge datasets which have a single configuration?\r\n\r\nI will ofcourse be looking for other datasets to add regardless. \r\n", "It's definitely a thorny question. The short answer is: Hateful Memes and hate speech detection datasets are different since their use case is specifically to train systems to identify and hopefully remove hateful content, whereas the purpose of a dataset that has an Attractiveness score as output is implicitly to train more models to rate \"Attractiveness\". \r\n\r\nAs far as warning about the \"potential biases\", I do not think it is quite enough, especially because it is hard to guarantee that every potential user will read the documentation (it is also an insufficient warning.)\r\n\r\nNote that we do have higher standards for the dataset cards of hate speech and hateful memes datasets, so if you do choose to add that one yourself we will ask that you summarize the relevant literature in the Social Impact section.\r\n\r\nIf you really need to add this dataset for your own research for the explicit purpose of studying these biases, you can add it as a community provided dataset following https://huggingface.co/docs/datasets/master/share_dataset.html#sharing-a-community-provided-dataset but I'd recommend just skipping it for now.", "So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\nhttps://huggingface.co/docs/datasets/master/filesystems.html\r\n", "I don't think we have a great solution for `dataset_infos.json` with a single very large config when storage space is an issue, but it should be solved by the same upcoming feature mentioned above", "Okay, then I won't pursue this one further. I'll keep this branch on my repository just in case the possibility of adding this dataset comes up in the future.\r\n\r\n> So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. 
You can also use the filesystem integration if local storage is an issue:\r\n> https://huggingface.co/docs/datasets/master/filesystems.html\r\n\r\nAfter downloading the whole dataset (around 1.4GB), it still loads all the examples despite using `split='train[:10%]'` or `split='train[10:20]'`. \r\n\r\nEDIT: I think this would happen only when the examples are generated for the first time and saved to the cache. Streaming parts of the data from a remote host sounds amazing! But, would that also allow for streaming examples of the data from the local cache? (without saving all the examples the first time).\r\n\r\nWhat I used:\r\n`d = load_dataset('./datasets/celeb_a',split='train[:10]')`\r\nOutput:\r\n`570 examples [01:33, 6.25 examples/s]` and it keeps going. \r\n\r\nEDIT 2: After a few thousand images, I get the following error:\r\n```python\r\nOSError: [Errno 24] Too many open files: '~/.cache/huggingface/datasets/celeb_a/default/1.1.0/01f9dca66039ab7c40b91b09af47a5fa8c3e49dc8d55df50da55b14116229207.incomplete'\r\n```\r\nI understand this is because of the way I load the images :\r\n```python\r\nImage.open(<path>)\r\n```\r\nWhat could be better alternative? I am only asking in case I face the same issues in the future.", "Just some addition about loading only a subset of the data:\r\nCurrently if even you specify `split='train[:10]'`, it downloads and generate the full dataset, so that you can pick another part afterward if you want to. We may change that in the future and use streaming.\r\n\r\nAnd about your open files issue, you can try to close each image file after reading its content.", "Hi @lhoestq,\r\nThanks for your response.\r\n\r\nI used `gc.collect()` inside the loop and that worked for me. I think since we are using a generator, and if I have something like `train[100000:100002]`, we will need to generate the first 1000001 examples and store. Ofcourse, this feature isn't a necessity right now, I suppose.", "Closing this PR." ]
1,612,556,455,000
1,613,657,827,000
1,613,657,827,000
CONTRIBUTOR
null
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1828/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828", "html_url": "https://github.com/huggingface/datasets/pull/1828", "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1827/comments
https://api.github.com/repos/huggingface/datasets/issues/1827/events
https://github.com/huggingface/datasets/issues/1827
802,353,974
MDU6SXNzdWU4MDIzNTM5NzQ=
1,827
Regarding On-the-fly Data Loading
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature", "Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using this feature, though :)\r\n\r\nI wanted to ask about on-the-fly data loading from the cache (before pre-processing).", "Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memory-mapped from an Arrow file on disk. Therefore there's almost no RAM usage even if your dataset contains TB of data.\r\nUsually at training time only one batch of data at a time is loaded in memory.\r\n\r\nDoes that answer your question or were you thinking about something else ?", "Hi @lhoestq,\r\n\r\nI apologize for the late response. This answers my question. Thanks a lot." ]
1,612,547,028,000
1,613,656,516,000
1,613,656,516,000
CONTRIBUTOR
null
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1827/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1826/comments
https://api.github.com/repos/huggingface/datasets/issues/1826/events
https://github.com/huggingface/datasets/pull/1826
802,074,744
MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2
1,826
Print error message with filename when malformed CSV
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,523,279,000
1,612,892,367,000
1,612,892,367,000
MEMBER
null
Print error message specifying filename when malformed CSV file. Close #1821
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826", "html_url": "https://github.com/huggingface/datasets/pull/1826", "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "merged_at": 1612892366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1825/comments
https://api.github.com/repos/huggingface/datasets/issues/1825/events
https://github.com/huggingface/datasets/issues/1825
802,073,925
MDU6SXNzdWU4MDIwNzM5MjU=
1,825
Datasets library not suitable for huge text datasets.
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/alexvaca0/followers", "following_url": "https://api.github.com/users/alexvaca0/following{/other_user}", "gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions", "organizations_url": "https://api.github.com/users/alexvaca0/orgs", "repos_url": "https://api.github.com/users/alexvaca0/repos", "events_url": "https://api.github.com/users/alexvaca0/events{/privacy}", "received_events_url": "https://api.github.com/users/alexvaca0/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which can take a lot of space. Padding can also increase the size of the tokenized dataset.\r\n\r\nTo make things more convenient, we recently added a \"lazy map\" feature that allows to tokenize each batch at training time as you mentioned. For example you'll be able to do\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ndef encode(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\", truncation=True, max_length=512, return_tensors=\"pt\")\r\n\r\ndataset.set_transform(encode)\r\nprint(dataset.format)\r\n# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}\r\nprint(dataset[:2])\r\n# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}\r\n\r\n```\r\nIn this example the `encode` transform is applied on-the-fly on the \"text\" column.\r\n\r\nThis feature will be available in the next release 2.0 which will happen in a few days.\r\nYou can already play with it by installing `datasets` from source if you want :)\r\n\r\nHope that helps !", "How recently was `set_transform` added? I am actually trying to implement it and getting an error:\r\n\r\n`AttributeError: 'Dataset' object has no attribute 'set_transform'\r\n`\r\n\r\nI'm on v.1.2.1.\r\n\r\nEDIT: Oh, wait I see now it's in the v.2.0. Whoops! This should be really useful.", "Yes indeed it was added a few days ago. The code is available on master\r\nWe'll do a release next week :)\r\n\r\nFeel free to install `datasets` from source to try it out though, I would love to have some feedbacks", "For information: it's now available in `datasets` 1.3.0.\r\nThe 2.0 is reserved for even cooler features ;)", "Hi @alexvaca0 , we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs." ]
1,612,523,210,000
1,617,113,041,000
1,615,887,840,000
NONE
null
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this big, but for fine-tuning datasets, as this process alone takes so much time, usually in expensive machines (due to the need of tpus - gpus) which is not being used for training. It would possibly be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the whole time the machine is up it's being used for training. Moreover, the pyarrow objects created from a 187 GB datasets are huge, I mean, we always receive OOM, or No Space left on device errors when only 10-12% of the dataset has been processed, and only that part occupies 2.1TB in disk, which is so many times the disk usage of the pure text (and this doesn't make sense, as tokenized texts should be lighter than pure texts). Any suggestions??
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1825/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
https://api.github.com/repos/huggingface/datasets/issues/1824/events
https://github.com/huggingface/datasets/pull/1824
802,048,281
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
1,824
Add OSCAR dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:", "Next week !", "Closing in favor of #1833" ]
1,612,521,026,000
1,620,239,054,000
1,612,783,833,000
MEMBER
null
I started adding the dataset card for OSCAR ! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular the Data Splits section tells how may samples there are for each config. The Data Instances section show an example for each config, and it also shows the size in MB. Since the Data Instances section is very long the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D Cc @pjox could you help me with the other sections ? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824", "html_url": "https://github.com/huggingface/datasets/pull/1824", "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
https://api.github.com/repos/huggingface/datasets/issues/1823/events
https://github.com/huggingface/datasets/pull/1823
802,042,181
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
1,823
Add FewRel Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?", "Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What do you think ?", "Hi @lhoestq,\r\n\r\nSorry again, the last couple of weeks were a bit busy for me. I am wondering how do you want me to achieve that. Using a custom BuilderConfig which takes in whether it is the regular data or \"pid2name\"? \"pid2name\" is only useful for \"train_wiki\", \"val_nyt\" and \"val_wiki\". So, based on my understanding, it would look like this:\r\n\r\n```python\r\nwiki_data = load_dataset('few_rel','train_wiki')\r\nid2name = load_dataset('few_rel','pid2name')\r\n```\r\nand this will be handled in the multiple configs.\r\n\r\n\r\nA better alternative could be providing name of the relationship in only \"train_wiki\", \"val_nyt\" and \"val_wiki\" as an extra feature in the dataset, and doing away with \"pid2name\" entirely. I'll only download pid2name if any of those datasets are requested, and then during generation I'll return the list with the dataset under \"names\" feature. How does this sound?\r\n\r\nEDIT:\r\nThere is one issue with the second approach, the entire pid2name is saved with all three datasets - \"train_wiki\", \"val_nyt\" and \"val_wiki\" ([see code below](https://github.com/huggingface/datasets/pull/1823#issuecomment-786402026)). In dummy data, I can address this by manually editing the pid2name to contain only a few id-name pairs, those matching with the examples in the corresponding example file. But this seems to be inefficient for the entire dataset - storing the same file in multiple places.", "Okay, I apologize, I guess I finally understand what is required.\r\n\r\nBasically, using:\r\n\r\n```python\r\nfew_rel = load_dataset('few_rel')\r\n```\r\nshould give all the files. This seems difficult since \"pid2name\" has a different format. Any suggestions on this?", "Yes that's it, sorry if that wasn't clear !", "Hi @lhoestq,\n\nSince pid2name has different features from the rest of the files, how will I add them to the same config?\n\nDo we want to exclude pid2name totally and add \"names\" to every example?", "If I understand correctly each sample in the \"default\" config has one relation, and each relation has corresponding names in pid2name.\r\nWould it be possible to also include the names in the \"default\" configuration for each sample ? The names of one sample can be retrieved using the relation id no ?", "Yes, that can be done. But for some files, the name is already given instead of ID. Only \"train_wiki\", \"val_wiki\", \"val_nyc\" have IDs. For others, I can set the names equal to a list of key.", "I think that's fine as long as we mention this processing explicitly in the dataset card.", "Hi @lhoestq,\r\n\r\nI have added the changes. Please let me know in case of any remaining issues.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nThanks for fixing it and approving :)" ]
1,612,520,523,000
1,614,599,780,000
1,614,594,099,000
CONTRIBUTOR
null
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary. Please recommend better alternatives, if any. Thanks, Gunjan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823", "html_url": "https://github.com/huggingface/datasets/pull/1823", "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "merged_at": 1614594099000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1822/comments
https://api.github.com/repos/huggingface/datasets/issues/1822/events
https://github.com/huggingface/datasets/pull/1822
802,003,835
MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz
1,822
Add Hindi Discourse Analysis Natural Language Inference Dataset
{ "login": "avinsit123", "id": 33565881, "node_id": "MDQ6VXNlcjMzNTY1ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinsit123", "html_url": "https://github.com/avinsit123", "followers_url": "https://api.github.com/users/avinsit123/followers", "following_url": "https://api.github.com/users/avinsit123/following{/other_user}", "gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions", "organizations_url": "https://api.github.com/users/avinsit123/orgs", "repos_url": "https://api.github.com/users/avinsit123/repos", "events_url": "https://api.github.com/users/avinsit123/events{/privacy}", "received_events_url": "https://api.github.com/users/avinsit123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Could you also run `make style` to fix the CI check on code formatting ?", "@lhoestq completed and resolved all comments." ]
1,612,517,454,000
1,613,383,059,000
1,613,383,059,000
CONTRIBUTOR
null
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - HomePage : https://github.com/midas-research/hindi-nli-data - Paper : https://www.aclweb.org/anthology/2020.aacl-main.71 - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs. - Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis is written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa - Dataset can be used to train models for Natural Language Inference tasks in Hindi Language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - Dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - train, test and dev files are in seperate files ### Dataset Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1} ``` ### Data Fields - Each row contatins 4 columns - premise, hypothesis, label and topic. ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems - In this recasting process, we build template hypotheses for each class in the label taxonomy - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to paper https://www.aclweb.org/anthology/2020.aacl-main.71 ### Source Data Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - Initial Data was collected by members of MIDAS Lab from Hindi Websites. They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode. - Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ - The Discourse is further classified into "Argumentative" , "Descriptive" , "Dialogic" , "Informative" and "Narrative" - 5 Clases. #### Who are the source language producers? Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process Annotation process has been described in Dataset Creation Section. #### Who are the annotators? Annotation is done automatically by machine and corresponding recasting process. ### Personal and Sensitive Information No Personal and Sensitive Information is mentioned in the Datasets. ## Considerations for Using the Data Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases No known bias exist in the dataset. Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations . Size of data may not be enough to train large models ## Additional Information Pls refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo : https://github.com/midas-research/hindi-nli-data that - This corpus can be used freely for research purposes. - The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Pls contact authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822", "html_url": "https://github.com/huggingface/datasets/pull/1822", "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "merged_at": 1613383059000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1821/comments
https://api.github.com/repos/huggingface/datasets/issues/1821/events
https://github.com/huggingface/datasets/issues/1821
801,747,647
MDU6SXNzdWU4MDE3NDc2NDc=
1,821
Provide better exception message when one of many files results in an exception
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pandas.read_csv` by passing additional keyword arguments to `load_dataset`. For example, you may find useful this argument:\r\n- `error_bad_lines` : bool, default True\r\n Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned.\r\n\r\nYou could try:\r\n```python\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files), error_bad_lines=False)\r\n```\r\n" ]
1,612,486,143,000
1,612,892,367,000
1,612,892,367,000
NONE
null
I find when I process many files, i.e. ``` train_files = glob.glob('rain*.csv') validation_files = glob.glob(validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being misformed (i.e. no data, or a comma in a field that isn't quoted, etc). For example, this is the tail of an exception which I suspect is due to a stray comma. > File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read > File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory > File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows > File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows > File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error > pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3 It would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1821/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1820/comments
https://api.github.com/repos/huggingface/datasets/issues/1820/events
https://github.com/huggingface/datasets/pull/1820
801,529,936
MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1
1,820
Add metrics usage examples and tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,463,030,000
1,612,533,601,000
1,612,533,600,000
MEMBER
null
All metrics finally have usage examples and proper fast + slow tests :) I added examples of usage for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only done in the slow test. In the fast test on the other hand, the download + forward pass are monkey patched. Metrics that need to be installed from github are not added to setup.py because it prevents uploading the `datasets` package to pypi. An additional-test-requirements.txt file is used instead. This file also include `comet` in order to not have to resolve its *impossible* dependencies. Also `comet` is not tested on windows because one of its dependencies (fairseq) can't be installed in the CI for some reason.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1820", "html_url": "https://github.com/huggingface/datasets/pull/1820", "diff_url": "https://github.com/huggingface/datasets/pull/1820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1820.patch", "merged_at": 1612533600000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/events
https://github.com/huggingface/datasets/pull/1819
801,448,670
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,456,606,000
1,612,457,547,000
1,612,457,546,000
MEMBER
null
Fixed documentation spelling errors. Wrong `S3Fileystem` Right `S3FileSystem`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819", "html_url": "https://github.com/huggingface/datasets/pull/1819", "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "merged_at": 1612457546000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1816/comments
https://api.github.com/repos/huggingface/datasets/issues/1816/events
https://github.com/huggingface/datasets/pull/1816
800,660,995
MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx
1,816
Doc2dial rc update to latest version
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "- update data loader and readme for latest version 1.0.1" ]
1,612,382,934,000
1,613,402,124,000
1,613,401,473,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1816", "html_url": "https://github.com/huggingface/datasets/pull/1816", "diff_url": "https://github.com/huggingface/datasets/pull/1816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1816.patch", "merged_at": 1613401473000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1815/comments
https://api.github.com/repos/huggingface/datasets/issues/1815/events
https://github.com/huggingface/datasets/pull/1815
800,610,017
MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1
1,815
Add CCAligned Multilingual Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) dataset is a dataset for translation and therefore users should be able to provide any language pair. You can check how the subclass of BuilderConfig is defined [here](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py#L49).\r\n\r\nFor testing, only the configurations defined in the `BUILDER_CONFIGS` class attribute are used.\r\nAll the other configs combinations are not tested, but they can be used by users. If a config doesn't already exist in `BUILDER_CONFIGS`, then it is created on the fly.\r\nFor example in [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py#L61), only 6 configs are defined in `BUILDER_CONFIGS`.\r\n\r\nSo what I would do in your case is have something like\r\n```python\r\n\r\nclass CCAlignedConfig(datasets.BuilderConfig):\r\n def __init__(self, *args, documents_or_sentences=None, language_code=None, **kwargs):\r\n super().__init__(\r\n *args,\r\n name=f\"{documents_or_sentences}-{language_code}\",\r\n **kwargs,\r\n )\r\n self.documents_or_sentences = documents_or_sentences\r\n self.language_code = language_code\r\n```\r\nAnd of course, feel free to change/rename things if you want to. In particular I think we can improve the name of the parameter `documents_or_sentences`", "Hi @lhoestq,\r\n\r\nThanks a lot! I don't know why I didn't think about that. :P \r\nI'll make these changes and update.", "Hi @lhoestq,\r\n\r\nI have tested and added dummy files. Request you to review.\r\n\r\nAlso, does this mean BUILDER_CONFIGS is only needed while testing?", "Hi @lhoestq,\r\n\r\nAny changes required on this one?\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nSorry for the delay, I have added the changes from the review. For the ISO format language codes, I just selected the first two characters from the names, hoping those are correct. Let me know if you want me to verify :P\r\n\r\nThanks for taking the time to add such a detailed review. I'll keep all these changes in mind the next time I'm adding a dataset.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nI have changed the README, and added a single example per config. Even one example is long enough to make the files heavy. Hope that isn't an issue.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nThanks for approving." ]
1,612,378,792,000
1,614,601,983,000
1,614,594,981,000
CONTRIBUTOR
null
Hello, I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs, and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow some random keyword args, in this case -`language_code`. This will be needed before the dataset is downloaded and extracted. I'm expecting the usage to be something like - `load_dataset('ccaligned_multilingual','documents',language_code='en_XX-af_ZA')`. Ofcourse, at a later stage we can provide just two character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`. It would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition. Additionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for any random keyword arguments. A decent way to go about this would be to provide all the options in a list/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary as `transformers`. That means writing dataset specific tests, or adding something new to dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests. Thanks, Gunjan Requesting @lhoestq / @yjernite to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1815", "html_url": "https://github.com/huggingface/datasets/pull/1815", "diff_url": "https://github.com/huggingface/datasets/pull/1815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1815.patch", "merged_at": 1614594981000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1814/comments
https://api.github.com/repos/huggingface/datasets/issues/1814/events
https://github.com/huggingface/datasets/pull/1814
800,516,236
MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1
1,814
Add Freebase QA Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well." ]
1,612,371,469,000
1,612,468,071,000
1,612,455,708,000
CONTRIBUTOR
null
Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1814/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1814", "html_url": "https://github.com/huggingface/datasets/pull/1814", "diff_url": "https://github.com/huggingface/datasets/pull/1814.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1814.patch", "merged_at": 1612455708000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1813/comments
https://api.github.com/repos/huggingface/datasets/issues/1813/events
https://github.com/huggingface/datasets/pull/1813
800,435,973
MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz
1,813
Support future datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,366,009,000
1,612,521,228,000
1,612,521,227,000
MEMBER
null
If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version. However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to make it work. However we could automatically get the dataset from master instead in this case. I added this feature in this PR. I also added a warning if a dataset is not available at the version of the local installation of `datasets` but is loaded from master: ```python >>> load_dataset("silicone", "dyda_da") Couldn't find file locally at silicone/silicone.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/silicone/silicone.py. The file was picked from the master branch on github instead at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/silicone/silicone.py. Downloading and preparing dataset silicone/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to /Users/quentinlhoest/.cache/huggingface/datasets/silicone/dyda_da/1.0.0/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342... ... ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1813", "html_url": "https://github.com/huggingface/datasets/pull/1813", "diff_url": "https://github.com/huggingface/datasets/pull/1813.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1813.patch", "merged_at": 1612521227000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1812/comments
https://api.github.com/repos/huggingface/datasets/issues/1812/events
https://github.com/huggingface/datasets/pull/1812
799,379,178
MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy
1,812
Add CIFAR-100 Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nI have updated with the changes from the review.", "Thanks for approving :)" ]
1,612,279,379,000
1,612,782,618,000
1,612,780,746,000
CONTRIBUTOR
null
Adding CIFAR-100 Dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1812", "html_url": "https://github.com/huggingface/datasets/pull/1812", "diff_url": "https://github.com/huggingface/datasets/pull/1812.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1812.patch", "merged_at": 1612780746000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1811/comments
https://api.github.com/repos/huggingface/datasets/issues/1811/events
https://github.com/huggingface/datasets/issues/1811
799,211,060
MDU6SXNzdWU3OTkyMTEwNjA=
1,811
Unable to add Multi-label Datasets
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :) ", "I can confirm that it comes from TFDS and is not used at the moment.", "Thanks @yjernite @lhoestq \r\n\r\nThe template for new dataset makes it slightly confusing. I suppose the comment suggesting its update can be removed.", "Closing this issue since it was answered." ]
1,612,266,656,000
1,613,657,791,000
1,613,657,791,000
CONTRIBUTOR
null
I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse_label")` leads to this error : ```python Traceback (most recent call last): File "test_script.py", line 2, in <module> d = load_dataset('./datasets/cifar100') File "~/datasets/src/datasets/load.py", line 668, in load_dataset **config_kwargs, File "~/datasets/src/datasets/builder.py", line 896, in __init__ super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) File "~/datasets/src/datasets/builder.py", line 247, in __init__ info.update(self._info()) File "~/.cache/huggingface/modules/datasets_modules/datasets/cifar100/61d2489b2d4a4abc34201432541b7380984ec714e290817d9a1ee318e4b74e0f/cifar100.py", line 79, in _info citation=_CITATION, File "<string>", line 19, in __init__ File "~/datasets/src/datasets/info.py", line 136, in __post_init__ self.supervised_keys = SupervisedKeysData(*self.supervised_keys) TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given ``` Is there a way I can fix this? Also, what does adding `supervised_keys` do? Is it necessary? How would I specify `supervised_keys` for a multi-input, multi-label dataset? Thanks, Gunjan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1811/timeline
null
null
null
false
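Editor's note on the record above: a hypothetical sketch of the `_info()` discussed in the issue. Since `supervised_keys` accepts at most a single (input, output) pair and is not used by the library, the simplest option for a multi-label dataset is to leave it out. The feature types below are illustrative assumptions, not the actual CIFAR-100 script.

```python
import datasets

def _info(self):
    # supervised_keys is omitted (defaults to None): it only supports one
    # (input, output) pair and is unused anyway, so multi-label datasets
    # can simply skip it.
    return datasets.DatasetInfo(
        description="CIFAR-100 with fine and coarse labels (illustrative).",
        features=datasets.Features(
            {
                "img": datasets.Array3D(shape=(32, 32, 3), dtype="uint8"),
                "fine_label": datasets.ClassLabel(num_classes=100),
                "coarse_label": datasets.ClassLabel(num_classes=20),
            }
        ),
    )
```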
https://api.github.com/repos/huggingface/datasets/issues/1809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1809/comments
https://api.github.com/repos/huggingface/datasets/issues/1809/events
https://github.com/huggingface/datasets/pull/1809
799,059,141
MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz
1,809
Add FreebaseQA dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?", "Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I can't see any merge conflicts, however. Before commiting I always rebase (shouldn't have done that).\r\nCan you explain what is to be done? Should I create a clean PR?", "Hi @gchhablani \r\nI think you can simply create another branch and another PR.\r\n\r\nIf I understand correctly the github diff is messed up because you rebased instead of merge.\r\nRebasing is supposed to be used only before pushing the branch the first time, or github messes up the diff.\r\nIf you want to include changes from master on a branch that is already push you need to use git merge.", "Thanks @lhoestq.\r\n\r\nI understand the issue now. I missed the instructions on the template. Sorry for bothering you unnecessarily, I'm pretty new to contributing on GitHub. I'll make a fresh PR.\r\n", "No problem, I'm not a big fan of this weird behavior tbh.\r\nThanks for making a new PR", "@lhoestq Haha, well, it's not as weird as not reading the [instructions](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#open-a-pull-request-on-the-main-huggingface-repo-and-share-your-work).\r\nAlso, I'm enjoying adding new datasets so it's all cool :)" ]
1,612,254,953,000
1,612,372,505,000
1,612,370,586,000
CONTRIBUTOR
null
Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR. Requesting @lhoestq to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1809", "html_url": "https://github.com/huggingface/datasets/pull/1809", "diff_url": "https://github.com/huggingface/datasets/pull/1809.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1809.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1807/comments
https://api.github.com/repos/huggingface/datasets/issues/1807/events
https://github.com/huggingface/datasets/pull/1807
798,823,591
MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5
1,807
Adding an aggregated dataset for the GEM benchmark
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice !" ]
1,612,226,393,000
1,612,306,121,000
1,612,289,218,000
MEMBER
null
This dataset gathers modified versions of several other conditional text generation datasets, which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation). The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which are linked to in this dataset card. cc @sebastianGehrmann
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1807", "html_url": "https://github.com/huggingface/datasets/pull/1807", "diff_url": "https://github.com/huggingface/datasets/pull/1807.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1807.patch", "merged_at": 1612289218000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1806/comments
https://api.github.com/repos/huggingface/datasets/issues/1806/events
https://github.com/huggingface/datasets/pull/1806
798,607,869
MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz
1,806
Update details to MLSUM dataset
{ "login": "padipadou", "id": 15138872, "node_id": "MDQ6VXNlcjE1MTM4ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/padipadou", "html_url": "https://github.com/padipadou", "followers_url": "https://api.github.com/users/padipadou/followers", "following_url": "https://api.github.com/users/padipadou/following{/other_user}", "gists_url": "https://api.github.com/users/padipadou/gists{/gist_id}", "starred_url": "https://api.github.com/users/padipadou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padipadou/subscriptions", "organizations_url": "https://api.github.com/users/padipadou/orgs", "repos_url": "https://api.github.com/users/padipadou/repos", "events_url": "https://api.github.com/users/padipadou/events{/privacy}", "received_events_url": "https://api.github.com/users/padipadou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks!" ]
1,612,204,512,000
1,612,205,188,000
1,612,205,181,000
CONTRIBUTOR
null
Update details to MLSUM dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1806", "html_url": "https://github.com/huggingface/datasets/pull/1806", "diff_url": "https://github.com/huggingface/datasets/pull/1806.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1806.patch", "merged_at": 1612205181000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1805/comments
https://api.github.com/repos/huggingface/datasets/issues/1805/events
https://github.com/huggingface/datasets/issues/1805
798,498,053
MDU6SXNzdWU3OTg0OTgwNTM=
1,805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "repos_url": "https://api.github.com/users/abarbosa94/repos", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next release of `datasets`, or you can also install `datasets` from source.", "I totally forgot to answer this issue, I'm so sorry. \r\n\r\nI was able to get it working by installing `datasets` from source. Huge thanks!" ]
1,612,196,057,000
1,615,041,166,000
1,615,041,166,000
CONTRIBUTOR
null
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of increased amperage in the planetary world (..)', 'option_id': 'A', 'option_text': 'Planetary density will decrease.'}, (...)]} ``` The `options` value is always a list with 4 options, each one is a dict with `option_context`, `option_id` and `option_text`. I would like to overwrite the `option_context` of each instance of my dataset with a DPR result that I am developing. Then, I already trained a model and saved it in a FAISS index ``` dpr_dataset = load_dataset( "text", data_files=ARC_CORPUS_TEXT, cache_dir=CACHE_DIR, split="train[:100%]", ) dpr_dataset.load_faiss_index("embeddings", f"{ARC_CORPUS_FAISS}") torch.set_grad_enabled(False) ``` Then, as a processor of my dataset, I created a map function that calls the `dpr_dataset` for each _option_ ``` def generate_context(example): question_text = example['question'] for option in example['options']: question_with_option = question_text + " " + option['option_text'] tokenize_text = question_tokenizer(question_with_option, return_tensors="pt").to(device) question_embed = ( question_encoder(**tokenize_text) )[0][0].cpu().numpy() _, retrieved_examples = dpr_dataset.get_nearest_examples( "embeddings", question_embed, k=10 ) # option["option_context"] = retrieved_examples["text"] # option["option_context"] = " ".join(option["option_context"]).strip() #result_dict = { # 'example_id': example['example_id'], # 'answer': example['answer'], # 'question': question_text, #options': example['options'] # } return example ``` I intentionally commented on this portion of the code. 
But when I call the `map` method, `ds_with_context = dataset.map(generate_context,load_from_cache_file=False)` It calls the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-55-75a458ce205c> in <module> ----> 1 ds_with_context = dataset.map(generate_context,load_from_cache_file=False) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1257 fn_kwargs=fn_kwargs, 1258 new_fingerprint=new_fingerprint, -> 1259 update_data=update_data, 1260 ) 1261 else: ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 155 } 156 # apply actual function --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 159 # re-apply format to the output ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name 157 kwargs[fingerprint_name] = update_fingerprint( --> 158 self._fingerprint, transform, kwargs_for_fingerprint 159 ) 160 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args) 103 for key in sorted(transform_args): 104 hasher.update(key) --> 105 hasher.update(transform_args[key]) 106 return hasher.hexdigest() 107 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value) 55 def update(self, value): 56 self.m.update(f"=={type(value)}==".encode("utf8")) ---> 57 self.m.update(self.hash(value).encode("utf-8")) 58 59 def hexdigest(self): ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj) 387 file = StringIO() 388 with _no_cache_fields(obj): --> 389 dump(obj, file) 390 return file.getvalue() 391 
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file) 359 def dump(obj, file): 360 """pickle an object to a file""" --> 361 Pickler(file, recurse=True).dump(obj) 362 return 363 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj) 452 raise PicklingError(msg) 453 else: --> 454 StockPickler.dump(self, obj) 455 stack.clear() # clear record of 'recursion-sensitive' pickled objects 456 return /usr/lib/python3.7/pickle.py in dump(self, obj) 435 if self.proto >= 4: 436 self.framer.start_framing() --> 437 self.save(obj) 438 self.write(STOP) 439 self.framer.end_framing() /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in save_function(pickler, obj) 554 dill._dill._create_function, 555 (obj.__code__, globs, obj.__name__, obj.__defaults__, obj.__closure__, obj.__dict__, fkwdefaults), --> 556 obj=obj, 557 ) 558 else: /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 636 else: 637 save(func) --> 638 save(args) 639 write(REDUCE) 640 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /usr/lib/python3.7/pickle.py in save_tuple(self, obj) 784 write(MARK) 785 for element in obj: --> 786 save(element) 787 788 if id(obj) in memo: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in 
_batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 885 k, v = tmp[0] 886 save(k) --> 887 save(v) 888 write(SETITEM) 889 # else tmp is empty, and we're done /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 885 k, v = tmp[0] 886 save(k) --> 887 save(v) 888 write(SETITEM) 889 # else tmp is empty, and we're done /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 522 reduce = getattr(obj, "__reduce_ex__", None) 523 if reduce is not None: --> 524 rv = reduce(self.proto) 525 else: 526 reduce = getattr(obj, "__reduce__", None) TypeError: can't pickle SwigPyObject objects ``` Which I have 
no idea how to solve/deal with it
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1805/timeline
null
null
null
false
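Editor's note on the record above: a sketch of one possible workaround on older `datasets` versions, reusing the names from the issue (`question_tokenizer`, `question_encoder`, `device`, `dpr_dataset`, `dataset`; the split name "train" is an assumption). The idea is to run the retrieval in a plain Python loop instead of `dataset.map`, so nothing has to be fingerprint-hashed or pickled. This is an illustrative alternative, not the fix that actually resolved the issue (upgrading `datasets` so that caching is simply skipped for unpicklable transforms).

```python
import torch

torch.set_grad_enabled(False)

def add_contexts(example):
    # Same logic as generate_context from the issue, with the retrieved
    # passages written back into each option.
    for option in example["options"]:
        query = example["question"] + " " + option["option_text"]
        inputs = question_tokenizer(query, return_tensors="pt").to(device)
        question_embed = question_encoder(**inputs)[0][0].cpu().numpy()
        _, retrieved = dpr_dataset.get_nearest_examples("embeddings", question_embed, k=10)
        option["option_context"] = " ".join(retrieved["text"]).strip()
    return example

# Plain iteration: no fingerprinting, no pickling of the FAISS index.
ds_with_context = [add_contexts(dict(example)) for example in dataset["train"]]
```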
https://api.github.com/repos/huggingface/datasets/issues/1804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1804/comments
https://api.github.com/repos/huggingface/datasets/issues/1804/events
https://github.com/huggingface/datasets/pull/1804
798,483,881
MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3
1,804
Add SICK dataset
{ "login": "calpt", "id": 36051308, "node_id": "MDQ6VXNlcjM2MDUxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calpt", "html_url": "https://github.com/calpt", "followers_url": "https://api.github.com/users/calpt/followers", "following_url": "https://api.github.com/users/calpt/following{/other_user}", "gists_url": "https://api.github.com/users/calpt/gists{/gist_id}", "starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calpt/subscriptions", "organizations_url": "https://api.github.com/users/calpt/orgs", "repos_url": "https://api.github.com/users/calpt/repos", "events_url": "https://api.github.com/users/calpt/events{/privacy}", "received_events_url": "https://api.github.com/users/calpt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,195,064,000
1,612,547,188,000
1,612,540,165,000
CONTRIBUTOR
null
Adds the SICK dataset (http://marcobaroni.org/composes/sick.html). Closes #1772. Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1804", "html_url": "https://github.com/huggingface/datasets/pull/1804", "diff_url": "https://github.com/huggingface/datasets/pull/1804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1804.patch", "merged_at": 1612540165000 }
true