url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1B) | node_id (stringlengths 18-32) | number (int64 1-2.96k) | title (stringlengths 1-268) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,632B) | updated_at (int64 1,587B-1,632B) | closed_at (int64 1,587B-1,632B ⌀) | author_association (stringclasses 4 values) | active_lock_reason (null) | pull_request (dict) | body (stringlengths 0-228k ⌀) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2136/comments | https://api.github.com/repos/huggingface/datasets/issues/2136/events | https://github.com/huggingface/datasets/pull/2136 | 843,492,015 | MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5 | 2,136 | fix dialogue action slot name and value | {
"login": "adamlin120",
"id": 31605305,
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamlin120",
"html_url": "https://github.com/adamlin120",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,032,053,000 | 1,617,194,882,000 | 1,617,194,881,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2136",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch"
} | fix #2128 | https://api.github.com/repos/huggingface/datasets/issues/2136/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations",
"I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. "
] | 1,617,014,870,000 | 1,617,099,623,000 | 1,617,099,623,000 | CONTRIBUTOR | null | null | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq Thank you for your help in fixing this issue. | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2134/comments | https://api.github.com/repos/huggingface/datasets/issues/2134/events | https://github.com/huggingface/datasets/issues/2134 | 843,242,849 | MDU6SXNzdWU4NDMyNDI4NDk= | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | {
"login": "prokopCerny",
"id": 5815801,
"node_id": "MDQ6VXNlcjU4MTU4MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prokopCerny",
"html_url": "https://github.com/prokopCerny",
"followers_url": "https://api.github.com/users/prokopCerny/followers",
"following_url": "https://api.github.com/users/prokopCerny/following{/other_user}",
"gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions",
"organizations_url": "https://api.github.com/users/prokopCerny/orgs",
"repos_url": "https://api.github.com/users/prokopCerny/repos",
"events_url": "https://api.github.com/users/prokopCerny/events{/privacy}",
"received_events_url": "https://api.github.com/users/prokopCerny/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_arrays([a], names=[\"foo\"])\r\npickle.dumps(table) # fails with an OverflowError\r\npickle.dumps(table, 4) # works !\r\n```\r\nWe'll do the change to use `protocol=4`.\r\n\r\nMoreover I've also seen other users complain about this error\r\n```\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nIt looks like something related to the 4GB limit as well but I'm not able to reproduce on my side.\r\nDo you think you can provide a script that reproduces the issue ?\r\nHow big is your dataset ? (number of bytes, number of rows)\r\n\r\n",
"Hi!\r\nSo I've managed to created a minimum working (well technically crashing) example for the multiprocessing case, I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes, here's the code:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nif __name__ == '__main__':\r\n ton_of_zeroes = [0] * ((12 * 8 << 30) // 64)\r\n large_dataset = Dataset.from_dict({'col': ton_of_zeroes})\r\n print(\"Start\")\r\n large_dataset.map(function=None, num_proc=2)\r\n print(\"Done - should not print\")\r\n```\r\n\r\nThe amount of zeros could probably be reduced, I haven't tried to minimize it to find the breaking point, I just increased it from your code (which by quick glance I assumed tried to allocate over 4 GiB)\r\n\r\nRunning this results in the following traceback:\r\n\r\n```\r\nParameter 'indices'=[ 0 1 2 ... 805306365 805306366 805306367] of the transform datasets.arrow_dataset.Dataset.select couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\nTraceback (most recent call last):\r\n File \"./crash_multiproc_pickle.py\", line 7, in <module>\r\n large_dataset.map(function=None, num_proc=2)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 657, in get\r\n raise self._value\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File 
\"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 
504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\"<I\", n), obj)\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nMy datasets usually have hundreds of thousands to low millions of rows, with each row containing a list of 10 strings and list of vectors of different length (the strings tokenized), which in the worst case have 10\\*512\\*8 = 40960 bytes (but usually it is much smaller, as the vectors tend to be shorter. I need these groups of text lines to create training data for the Inverse Cloze Task.\r\n\r\nAnyway I don't think my particular dataset is relevant, as the tiny script I created also manages to crash.\r\nBut I think the issue is the same as the save_to_disk, from the traceback it seems that in multiprocessing, it tries to use dill to return the result of the map workers, which tries to pickle the data and can't do it, probably because it's again using the older pickle protocol. That's my guess anyway.",
"I just merged a fix #2150 that allows to pickle tables bigger than 4GiB\r\nFeel free to try it on the `master` branch !",
"awesome! I started getting this error as well when I tried to tokenize with a longer sequence length",
"@prokopCerny does this fix work for you? I found that with the latest master, my container with 500GB RAM starts crashing when I try to map a large dataset using `num_proc`.\r\n\r\n@lhoestq would it be possible to implement some logic to keep the individual cache files small (say below 100mb)? I find this helps with loading large datasets, but the \"hack\" I was using (increasing `num_proc` to a large number) doesn't work anymore with the latest master; my container crashes even with `num_proc=200` now",
"Closing since the original issue was fixed in #2150 \r\nFeel free to reopen if you are still experiencing it.\r\nFor the other problems, please open separate issues"
] | 1,617,014,595,000 | 1,620,064,761,000 | 1,620,064,761,000 | NONE | null | null | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes) and have found that several preprocessing steps are massively faster when done in memory. Since I have the ability to requisition a lot of RAM, I decided to do these steps completely outside the datasets library.
So my workflow is to run several .map() calls on the Dataset object, then, for the operation that is faster in memory, extract the necessary columns from the dataset and drop it entirely, do the transformation in memory, and then create a fresh Dataset object using .from_dict() or another method.
When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be due to the use of an old pickle protocol that doesn't support objects larger than 4 GiB.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 80, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 75, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify
contexts_dataset.save_to_disk(chunked_path)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk
self = pickle.loads(pickle.dumps(self))
OverflowError: cannot serialize a bytes object larger than 4 GiB
```
From what I've seen, this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` no longer appears to be present in the current state of the repository.
To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk.
An additional issue when working with these large in-memory datasets arises with multiprocessing and is again related to pickling. I've tried to speed up the mapping with function=None by setting num_proc to the available CPU count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295
``` | https://api.github.com/repos/huggingface/datasets/issues/2134/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n... \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n... ]\r\n>>> print(questions)\r\n['متى بدات المجلة المدرسية في نوتردام بالنشر?', 'كم مرة يتم نشرها في نوتردام?', 'ما هي الورقة اليومية للطلاب في نوتردام?', 'كم عدد الاوراق الاخبارية للطلاب التي وجدت في نوتردام?', 'في اي سنة بدات ورقة الطالب الحس السليم بالنشر في نوتردام?']\r\n```\r\nI don't think we can change this",
"Hi @dorost1234.\r\n\r\nIn Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) with its unique representation in terms of code points. That is what you see: Unicode code points (represented by a \\u escaped sequence of 16-bit hex values).\r\n\r\nCharacters are usually represented (on screen and papers) with a graphical element called _glyph_. That is what you would like to see: glyphs. But Python does not care about glyphs: that is the job of the GUI or the terminal; glyphs are what you get with the `print` function (if your terminal is properly configured to display those glyphs).\r\n\r\nYou have more detailed information about Unicode in the Python documentation: https://docs.python.org/3/howto/unicode.html",
"thank you so much for the insightful comments. "
] | 1,617,008,589,000 | 1,617,126,057,000 | 1,617,126,057,000 | NONE | null | null | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
The questions are in the wrong format and are not readable. Could you please have a look? Thanks @lhoestq
| https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2132/comments | https://api.github.com/repos/huggingface/datasets/issues/2132/events | https://github.com/huggingface/datasets/issues/2132 | 843,142,822 | MDU6SXNzdWU4NDMxNDI4MjI= | 2,132 | TydiQA dataset is mixed and is not split per language | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\r\n```",
"Hi\nthank you very much for the great response, this will be really wonderful\nto have one configuration per language, as one need the dataset in majority\nof case per language for cross-lingual evaluations.\nThis becomes also then more close to TFDS format, which is separated per\nlanguage https://www.tensorflow.org/datasets/catalog/tydi_qa which will be\nreally awesome to have.\nthanks\n\nOn Mon, Mar 29, 2021 at 6:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> You can filter the languages this way:\n>\n> tydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\n>\n> Otherwise maybe we can have one configuration per language ?\n> What do you think of this for example ?\n>\n> load_dataset(\"tydiqa\", \"primary_task.en\")\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2132#issuecomment-809516799>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXPW2PWSQ2RHG73O7TTGCY4LANCNFSM4Z7ER7IA>\n> .\n>\n",
"@lhoestq I greatly appreciate any updates on this. thanks a lot"
] | 1,617,008,181,000 | 1,617,530,235,000 | null | NONE | null | null | Hi @lhoestq
Currently TydiQA is mixed, and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes the dataset hard to use. It would be much more convenient for users to have it split per language, and I appreciate your help on this.
Meanwhile, until this is hopefully split per language, I would greatly appreciate it if you could tell me how I can preprocess the data to get it per language. Thanks a lot | https://api.github.com/repos/huggingface/datasets/issues/2132/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | {
"login": "andy-yangz",
"id": 23011317,
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andy-yangz",
"html_url": "https://github.com/andy-yangz",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions",
"organizations_url": "https://api.github.com/users/andy-yangz/orgs",
"repos_url": "https://api.github.com/users/andy-yangz/repos",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"received_events_url": "https://api.github.com/users/andy-yangz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue",
"The PR got merged :)\r\nFeel free to try it out on the `master` branch",
"Sorry for the late reply. \r\nNow everything just works well XD"
] | 1,617,007,558,000 | 1,618,052,935,000 | 1,618,052,935,000 | NONE | null | null | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
Sometimes I get this error: `TypeError: 'NoneType' object is not iterable`
This is the traceback:
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py", line 316, in <module>
73 | | main()
74 | | File "run_gpt.py", line 222, in main
75 | | delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
76 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
77 | | use_auth_token=use_auth_token,
78 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
79 | | self.download_post_processing_resources(dl_manager)
80 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
81 | | for split in self.info.splits:
82 | | TypeError: 'NoneType' object is not iterable
83 | | WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
84 | | Traceback (most recent call last):
85 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
86 | | "__main__", mod_spec)
87 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
88 | | exec(code, run_globals)
89 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
90 | | main()
91 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
92 | | sigkill_handler(signal.SIGTERM, None) # not coming back
93 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
94 | | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
On worker 1 the dataset loads fine; however, worker 2 gets this error.
I hit this error from time to time; sometimes it just works. | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ",
"Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined here:\r\n\r\nhttps://github.com/tensorflow/datasets/blob/c7096bd38e86ed240b8b2c11ecab9893715a7d55/tensorflow_datasets/text/wikiann/wikiann.py#L81-L126\r\n\r\nIt would be nice to include the `spans` field in this dataset as in TFDS. This could be a good first issue for new contributors !\r\n\r\nThe objective is to use `tags_to_spans` in the `_generate_examples` method [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) to create he `spans` for each example.",
"Hi @lhoestq \r\nthank you very much for the help, it would be very nice to have it included, here is the full code, one need to also convert tags to string first:\r\n\r\n```\r\nimport datasets \r\nfrom datasets import load_dataset\r\n\r\ndef tags_to_spans(tags):\r\n \"\"\"Convert tags to spans.\"\"\"\r\n spans = set()\r\n span_start = 0\r\n span_end = 0\r\n active_conll_tag = None\r\n for index, string_tag in enumerate(tags):\r\n # Actual BIO tag.\r\n bio_tag = string_tag[0]\r\n assert bio_tag in [\"B\", \"I\", \"O\"], \"Invalid Tag\"\r\n conll_tag = string_tag[2:]\r\n if bio_tag == \"O\":\r\n # The span has ended.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = None\r\n # We don't care about tags we are\r\n # told to ignore, so we do nothing.\r\n continue\r\n elif bio_tag == \"B\":\r\n # We are entering a new span; reset indices and active tag to new span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n elif bio_tag == \"I\" and conll_tag == active_conll_tag:\r\n # We're inside a span.\r\n span_end += 1\r\n else:\r\n # This is the case the bio label is an \"I\", but either:\r\n # 1) the span hasn't started - i.e. an ill formed span.\r\n # 2) We have IOB1 tagging scheme.\r\n # We'll process the previous span if it exists, but also include this\r\n # span. This is important, because otherwise, a model may get a perfect\r\n # F1 score whilst still including false positive ill-formed spans.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n # Last token might have been a part of a valid span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n # Return sorted list of spans\r\n return sorted(list(spans), key=lambda x: x[1][0])\r\n\r\ndataset = load_dataset('wikiann', 'en', split=\"train\")\r\nner_tags = {\r\n 0:\"O\",\r\n 1:\"B-PER\",\r\n 2:\"I-PER\",\r\n 3:\"B-ORG\",\r\n 4:\"I-ORG\",\r\n 5:\"B-LOC\",\r\n 6:\"I-LOC\"\r\n}\r\n\r\ndef get_spans(tokens, tags):\r\n \"\"\"Convert tags to textspans.\"\"\"\r\n spans = tags_to_spans(tags)\r\n text_spans = [\r\n x[0] + \": \" + \" \".join([tokens[i]\r\n for i in range(x[1][0], x[1][1] + 1)])\r\n for x in spans\r\n ]\r\n if not text_spans:\r\n text_spans = [\"None\"]\r\n return text_spans\r\n\r\n\r\nfor i, d in enumerate(dataset):\r\n tokens = d['tokens']\r\n tags = d['ner_tags']\r\n tags = [ner_tags[i] for i in tags]\r\n spans = get_spans(tokens, tags)\r\n print(\"spans \", spans)\r\n print(d)\r\n if i > 10:\r\n break; \r\n```\r\nI am not sure how to contribute to the repository and how things work, could you let me know how one can access the datasets to be able to contribute to the repository? Maybe I could do it then\r\nthanks \r\n",
"Cool ! Let me give you some context:\r\n\r\n#### Contribution guide\r\n\r\nYou can find the contribution guide here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md\r\n\r\nIt explains how to set up your dev environment in a few steps.\r\n\r\n#### Dataset loading\r\n\r\nEach Dataset is defined by a Table that have many rows (one row = one example) and columns (one column = one feature).\r\nTo change how a dataset is constructed, you have to modify its dataset script that you can find here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/wikiann/wikiann.py\r\n\r\nIt includes everything needed to load the WikiANN dataset.\r\nYou can load locally a modified version of `wikiann.py` with `load_dataset(\"path/to/wikiann.py\")`.\r\n\r\n#### Define a new column\r\n\r\nEach column has a name and a type. You can see how the features of WikiANN are defined here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L245-L263\r\n\r\nIdeally we would have one additional feature \"spans\":\r\n```python\r\n \"spans\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\n#### Compute the content of each row\r\n\r\nTo build the WikiANN rows, the _generate_examples method from [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) is used. This function `yield` one python dictionary for each example:\r\n```python\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs}\r\n```\r\n\r\nThe objective would be to return instead something like\r\n```python\r\nspans = spans = get_spans(tokens, tags)\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs, \"spans\": spans}\r\n```\r\n\r\nLet me know if you have questions !",
"The PR was merged. Issue should be closed.\r\n\r\nCC: @lhoestq "
] | 1,617,006,180,000 | 1,630,075,458,000 | 1,630,075,458,000 | NONE | null | null | Hi
The WikiANN dataset needs a "spans" column, which is necessary to be able to use this dataset, but this column is missing from the Hugging Face version. Could you please have a look? Thank you @lhoestq | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | false |
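As an interim option, a sketch of attaching the spans on the fly with `Dataset.map`, without touching the dataset script. It assumes the `ner_tags` mapping and the `get_spans` helper defined in the comment above are already in scope.

```python
# Sketch only: add a "spans" column on the fly, reusing get_spans/ner_tags from the comment.
from datasets import load_dataset

dataset = load_dataset("wikiann", "en", split="train")

def add_spans(example):
    tags = [ner_tags[i] for i in example["ner_tags"]]   # int labels -> "B-PER", "I-ORG", ...
    example["spans"] = get_spans(example["tokens"], tags)
    return example

dataset_with_spans = dataset.map(add_spans)
```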
https://api.github.com/repos/huggingface/datasets/issues/2129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2129/comments | https://api.github.com/repos/huggingface/datasets/issues/2129/events | https://github.com/huggingface/datasets/issues/2129 | 843,033,656 | MDU6SXNzdWU4NDMwMzM2NTY= | 2,129 | How to train BERT model with next sentence prediction? | {
"login": "jnishi",
"id": 836541,
"node_id": "MDQ6VXNlcjgzNjU0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jnishi",
"html_url": "https://github.com/jnishi",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnishi/subscriptions",
"organizations_url": "https://api.github.com/users/jnishi/orgs",
"repos_url": "https://api.github.com/users/jnishi/repos",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jnishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.",
"Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction.create_exapmles_from_document` can be applied to dataset object other than `TextDatasetForNextSentencePrediction` e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?",
"It would probably require a bit of tweaking, but you can apply it to a dataset, yes.\r\nThis should give you a new dataset with sentence pairs you can train a model on.\r\n\r\nYou can find the documentation about dataset processing here:\r\nhttps://huggingface.co/docs/datasets/processing.html#processing-data-with-map",
"Thank you for detail information.\r\n\r\nI'll try to apply `create_examples_from_document` to `Dataset` object.\r\n"
] | 1,617,000,483,000 | 1,617,253,120,000 | 1,617,253,120,000 | NONE | null | null | Hello.
I'm trying to pretrain a BERT model with next sentence prediction. Is there a function that supports next sentence prediction,
like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
| https://api.github.com/repos/huggingface/datasets/issues/2129/timeline | null | false |
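A rough sketch of the approach suggested in the comments: build the sentence pairs yourself and wrap them in a `Dataset`. This is a simplified assumption (it ignores document boundaries, which `create_examples_from_document` handles), and the column names are my own choices.

```python
# Minimal NSP pair construction sketch: 0 = "B really follows A", 1 = "B is random".
import random
from datasets import Dataset, load_dataset

wiki = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
sentences = [t.strip() for t in wiki["text"] if t.strip()]

pairs = {"sentence_a": [], "sentence_b": [], "next_sentence_label": []}
for i in range(len(sentences) - 1):
    if random.random() < 0.5:
        nxt, label = sentences[i + 1], 0          # actual next sentence
    else:
        nxt, label = random.choice(sentences), 1  # random sentence
    pairs["sentence_a"].append(sentences[i])
    pairs["sentence_b"].append(nxt)
    pairs["next_sentence_label"].append(label)

nsp_dataset = Dataset.from_dict(pairs)
# nsp_dataset can then be tokenized with nsp_dataset.map(...) and fed to a model
# such as BertForNextSentencePrediction / BertForPreTraining.
```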
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | {
"login": "adamlin120",
"id": 31605305,
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamlin120",
"html_url": "https://github.com/adamlin120",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "
] | 1,616,999,642,000 | 1,617,194,881,000 | 1,617,194,881,000 | CONTRIBUTOR | null | null | Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spotted an error: the order of the dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262 | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | false |
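For illustration only (the field names below are hypothetical and not the actual script code), the bug class being reported is a swapped name/value assignment:

```python
# Toy illustration of the reported reversal.
slot_names = ["restaurant-area", "restaurant-food"]
slot_values = ["centre", "italian"]

buggy = {"act_slot_name": slot_values, "act_slot_value": slot_names}   # reversed
fixed = {"act_slot_name": slot_names, "act_slot_value": slot_values}   # expected
```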
https://api.github.com/repos/huggingface/datasets/issues/2127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2127/comments | https://api.github.com/repos/huggingface/datasets/issues/2127/events | https://github.com/huggingface/datasets/pull/2127 | 843,017,199 | MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3 | 2,127 | make documentation more clear to use different cloud storage | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,999,046,000 | 1,617,020,184,000 | 1,617,020,184,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2127",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch"
} | This PR extends the cloud storage documentation to show that you can use a different `fsspec` implementation. | https://api.github.com/repos/huggingface/datasets/issues/2127/timeline | null | true |
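A sketch of the pattern the documentation describes, using `gcsfs` as an example `fsspec` implementation. It assumes the `fs=` argument of `save_to_disk`/`load_from_disk`; the bucket name and project are placeholders.

```python
import gcsfs
from datasets import load_dataset, load_from_disk

fs = gcsfs.GCSFileSystem(project="my-gcp-project")   # any fsspec-compatible filesystem

dataset = load_dataset("imdb", split="train")
dataset.save_to_disk("gcs://my-bucket/datasets/imdb-train", fs=fs)

# later, possibly from another machine
dataset = load_from_disk("gcs://my-bucket/datasets/imdb-train", fs=fs)
```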
https://api.github.com/repos/huggingface/datasets/issues/2126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2126/comments | https://api.github.com/repos/huggingface/datasets/issues/2126/events | https://github.com/huggingface/datasets/pull/2126 | 842,779,966 | MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,950,650,000 | 1,617,010,034,000 | 1,617,010,033,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2126",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch"
} | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | https://api.github.com/repos/huggingface/datasets/issues/2126/timeline | null | true |
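For readers unfamiliar with the pitfall behind this change, a short example of how the legacy constructor differs from `torch.tensor`:

```python
import torch

# torch.Tensor(...) is the legacy constructor: an int argument is a *size*,
# and the returned float32 tensor is uninitialized.
a = torch.Tensor(1)          # shape (1,), arbitrary float value
# torch.tensor(...) copies data and infers dtype from it.
b = torch.tensor(1)          # 0-dim int64 tensor holding the value 1
c = torch.tensor([1.0, 2])   # tensor([1., 2.]), float32
```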
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | {
"login": "kosuke-kitahara",
"id": 42398050,
"node_id": "MDQ6VXNlcjQyMzk4MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kosuke-kitahara",
"html_url": "https://github.com/kosuke-kitahara",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}",
"gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions",
"organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs",
"repos_url": "https://api.github.com/users/kosuke-kitahara/repos",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ",
"@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."
] | 1,616,920,218,000 | 1,616,934,565,000 | 1,616,934,565,000 | NONE | null | null | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))

show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
                                                    "sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
| https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | false |
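Based on the duplicate issue referenced in the comments, the duplicated-record behaviour came from the loading script and the usual remedy is to upgrade `datasets` and rebuild the corrupted cache. This is an assumption drawn from that thread, not a verified fix:

```python
# Assumed remedy: after upgrading `datasets`, force the cache to be rebuilt.
from datasets import load_dataset

timit = load_dataset("timit_asr", download_mode="force_redownload")
print(timit["train"][0]["text"])
print(timit["train"][1]["text"])   # should now differ from the first record
```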
https://api.github.com/repos/huggingface/datasets/issues/2124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2124/comments | https://api.github.com/repos/huggingface/datasets/issues/2124/events | https://github.com/huggingface/datasets/issues/2124 | 842,627,729 | MDU6SXNzdWU4NDI2Mjc3Mjk= | 2,124 | Adding ScaNN library to do MIPS? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I haven't played with it (yet) but it sounds really cool !\r\n"
] | 1,616,890,020,000 | 1,617,024,223,000 | null | NONE | null | null | @lhoestq Hi, I am thinking of adding this new Google library to do MIPS, similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann
![image](https://user-images.githubusercontent.com/16892570/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)
| https://api.github.com/repos/huggingface/datasets/issues/2124/timeline | null | false |
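For reference, the existing FAISS-based flow that a ScaNN integration would presumably mirror. The embedding column and the random "encoder" below are stand-ins, and the snippet assumes `faiss` is installed.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train[:100]")
# stand-in encoder: in practice this would be a real sentence/passage encoder
ds = ds.map(lambda ex: {"embeddings": np.random.rand(128).astype("float32")})

ds.add_faiss_index(column="embeddings")
query = np.random.rand(128).astype("float32")
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
```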
https://api.github.com/repos/huggingface/datasets/issues/2123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2123/comments | https://api.github.com/repos/huggingface/datasets/issues/2123/events | https://github.com/huggingface/datasets/issues/2123 | 842,577,285 | MDU6SXNzdWU4NDI1NzcyODU= | 2,123 | Problem downloading GEM wiki_auto_asset_turk dataset | {
"login": "mille-s",
"id": 29705940,
"node_id": "MDQ6VXNlcjI5NzA1OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mille-s",
"html_url": "https://github.com/mille-s",
"followers_url": "https://api.github.com/users/mille-s/followers",
"following_url": "https://api.github.com/users/mille-s/following{/other_user}",
"gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mille-s/subscriptions",
"organizations_url": "https://api.github.com/users/mille-s/orgs",
"repos_url": "https://api.github.com/users/mille-s/repos",
"events_url": "https://api.github.com/users/mille-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/mille-s/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ",
"Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.",
"Is there an error message ?\r\nWhat stacktrace do you get if you interrupt the execution of the program while downloading ?",
"Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!",
"Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again"
] | 1,616,870,488,000 | 1,620,836,118,000 | 1,620,836,117,000 | NONE | null | null | @yjernite
### Summary
I am currently working on the GEM datasets and cannot download the wiki_auto_asset_turk data, whereas all the other datasets download fine with the same code.
### Steps to reproduce
Code snippet:
```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```
**Expected behavior:**
I expect the dataset to start downloading (download bar appears and progresses toward 100%)
**Actual behavior:**
Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
### Is this a regression?
No, it was the first time I was trying to download this dataset (same for the other ones).
### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family | https://api.github.com/repos/huggingface/datasets/issues/2123/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,782,160,000 | 1,628,100,719,000 | 1,617,719,581,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch"
} | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is quite effective here since datasets usually satisfy the condition of evenly distributed chunks (the default chunk size is fixed).
## Benchmark
Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows):
for the current implementation
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.018ms
Avg access time key=74004227 : 0.215ms
Avg access time key=range(74003204, 74004228) : 1.416ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.187ms
Avg access time key=74004227 : 6.642ms
Avg access time key=range(74003204, 74004228) : 90.941ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms
```
for the new one using interpolation search:
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.076ms
Avg access time key=74004227 : 0.056ms
Avg access time key=range(74003204, 74004228) : 1.807ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.061ms
Avg access time key=74004227 : 0.058ms
Avg access time key=range(74003204, 74004228) : 22.166ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms
```
The RandIter class is just an iterable of 1024 random indices from 0 to 74004228.
Here is also a plot showing the speed improvement depending on the dataset size:
![image](https://user-images.githubusercontent.com/42851186/112673587-32335c80-8e65-11eb-9a0c-58ad774abaec.png)
## Implementation details:
- `datasets.table.Table` objects implement interpolation search for the `slice` method
- The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized.
- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search
- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.
- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`
## Checklist:
- [x] implement interpolation search
- [x] use `datasets.table.Table` in `Dataset` objects
- [x] update current tests
- [x] add tests for interpolation search
- [x] comments and docstring
- [x] add the benchmark to the CI
Fix #1803. | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | true |
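A simplified, self-contained sketch of the core idea behind this PR (not the actual `datasets.table` code): given cumulative chunk offsets, interpolation search locates the chunk containing a row in roughly O(log log n) probes when chunks are close to evenly sized.

```python
def interpolation_search(offsets, i):
    """offsets[j] is the first row index of chunk j; offsets[-1] is the total number of rows."""
    low, high = 0, len(offsets) - 2
    while low <= high:
        span = offsets[high + 1] - offsets[low]
        # guess the chunk index, assuming rows are evenly spread over [low, high]
        guess = low + int((i - offsets[low]) * (high - low) / max(span, 1))
        if offsets[guess] <= i < offsets[guess + 1]:
            return guess                      # position inside the chunk is i - offsets[guess]
        if i < offsets[guess]:
            high = guess - 1
        else:
            low = guess + 1
    raise IndexError(f"row {i} out of bounds")

offsets = [0, 1000, 2000, 3050, 4000]   # 4 chunks
assert interpolation_search(offsets, 2500) == 2
```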
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)\r\n- `attributes` should probably be `text`",
"@yjernite @lhoestq \r\n\r\nI have added basic validation checking in the class. It works based on a YAML string. The YAML string determines the expected structure and which text is to be checked. The `text` can be true or false showing whether the text has to be checked or not for emptiness. Similarly, each subsection is parsed recursively. I have used print statement currently so that all issues are shown.\r\n\r\nPlease let me know your thoughts.\r\n\r\nI haven't added a variable that keeps a track of whether the text is empty or not but it can be done easliy if required.",
"This looks like a good start !\r\nMaybe we can use a field named `allow_empty` instead of `text` ?\r\nAlso +1 for keeping track of empty texts\r\n\r\nDo you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?\r\n\r\nThen we can create a `tests/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid !",
"Hi @lhoestq \r\n\r\nI have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way.",
"Hi @lhoestq @yjernite \r\n\r\nPlease find the output for the existing READMEs here: http://p.ip.fi/2vYU\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq\r\n\r\nI have added some basic tests, also have restructured `ReadMe` class slightly.\r\n\r\nThere is one print statement currently, I'm not sure how to remove it. Basically, I want to warn but not stop further validation. I can't append to a list because the `error_list` and `warning_list` are both only present in `validate` method, and this print is present in the `parse` method. This is done when someone has repeated a section multiple times. For e.g.:\r\n\r\n```markdown\r\n---\r\n---\r\n\r\n# Dataset Card for FashionMNIST\r\n## Dataset Description\r\n## Dataset Description\r\n```\r\n\r\nIn this case, I check for validation only in the latest entry.\r\n\r\nI can also raise an error (ideal case scenario), but still, it is in the `parse`. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.\r\n\r\nIn tests, I'm using a dummy YAML string for structure, we can also make it into a file but I feel that is not a hard requirement. Let me know your thoughts.\r\n\r\nI will add tests for `from_readme` as well.\r\n\r\nHowever, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that.",
"Hi @lhoestq \r\n\r\nThanks for merging. :)\r\nThanks a lot to you and @yjernite for guiding me and helping me out.\r\n\r\nYes, I'll also use the next PR for combining the readme and tags validation. ^_^"
] | 1,616,778,137,000 | 1,620,652,638,000 | 1,620,639,701,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch"
} | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit from the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | true |
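A minimal sketch of the validation idea discussed in this thread: walk an expected section tree, check that each parsed section exists and is filled out, and collect every failure instead of stopping at the first one. The expected structure and the parsed README below are toy stand-ins, not the library's real template or YAML.

```python
EXPECTED = {
    "Dataset Description": {"Dataset Summary": {}, "Languages": {}},
}

parsed_readme = {
    "Dataset Description": {
        "text": "",
        "subsections": {"Dataset Summary": {"text": "[More Information Needed]", "subsections": {}}},
    },
}

def validate(name, expected_children, parsed, errors, path=""):
    here = f"{path}/{name}"
    node = parsed.get(name)
    if node is None:
        errors.append(f"missing section: {here}")
        return
    if not node["text"].strip() or "[More Information Needed]" in node["text"]:
        errors.append(f"empty section: {here}")
    for child, grandchildren in expected_children.items():
        validate(child, grandchildren, node["subsections"], errors, here)

errors = []
for name, children in EXPECTED.items():
    validate(name, children, parsed_readme, errors)
print(errors)
# ['empty section: /Dataset Description',
#  'empty section: /Dataset Description/Dataset Summary',
#  'missing section: /Dataset Description/Languages']
```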
https://api.github.com/repos/huggingface/datasets/issues/2120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2120/comments | https://api.github.com/repos/huggingface/datasets/issues/2120/events | https://github.com/huggingface/datasets/issues/2120 | 841,954,521 | MDU6SXNzdWU4NDE5NTQ1MjE= | 2,120 | dataset viewer does not work anymore | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thanks for reporting :) We're looking into it",
"Back up. "
] | 1,616,764,933,000 | 1,616,773,942,000 | 1,616,773,942,000 | NONE | null | null | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? It was very helpful @lhoestq.
Thanks for your help. | https://api.github.com/repos/huggingface/datasets/issues/2120/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2119/comments | https://api.github.com/repos/huggingface/datasets/issues/2119/events | https://github.com/huggingface/datasets/pull/2119 | 841,567,199 | MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy | 2,119 | copy.deepcopy os.environ instead of copy | {
"login": "NihalHarish",
"id": 5506053,
"node_id": "MDQ6VXNlcjU1MDYwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5506053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NihalHarish",
"html_url": "https://github.com/NihalHarish",
"followers_url": "https://api.github.com/users/NihalHarish/followers",
"following_url": "https://api.github.com/users/NihalHarish/following{/other_user}",
"gists_url": "https://api.github.com/users/NihalHarish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NihalHarish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NihalHarish/subscriptions",
"organizations_url": "https://api.github.com/users/NihalHarish/orgs",
"repos_url": "https://api.github.com/users/NihalHarish/repos",
"events_url": "https://api.github.com/users/NihalHarish/events{/privacy}",
"received_events_url": "https://api.github.com/users/NihalHarish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,731,118,000 | 1,616,771,632,000 | 1,616,771,632,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2119",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch"
} | Fixes: https://github.com/huggingface/datasets/issues/2115
- Bug fix: using environ.copy() returns a plain dict.
- Using deepcopy(environ) returns an `_Environ` object.
- Changing the datatype of the `_Environ` object can break code if subsequent libraries perform operations using APIs exclusive to the environ object, like `environ.get(key, default=None)` for example (a plain dict's `get` does not accept the `default` keyword).
Testing:
Tested the change on my terminal:
```
>>> import os
>>> x = deepcopy(os.environ)
>>> y = os.environ
>>> x is y
False
>>> isinstance(x, type(os.environ))
True
>>> z = os.environ.copy()
>>> isinstance(z, type(os.environ))
False
>>> isinstance(z, dict)
True
``` | https://api.github.com/repos/huggingface/datasets/issues/2119/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2118/comments | https://api.github.com/repos/huggingface/datasets/issues/2118/events | https://github.com/huggingface/datasets/pull/2118 | 841,563,329 | MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx | 2,118 | Remove os.environ.copy in Dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.\r\n\r\nClosing this one because #2119 has a much cleaner approach."
] | 1,616,730,497,000 | 1,616,760,203,000 | 1,616,760,005,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch"
} | Replace `os.environ.copy` with in-place modification
Fixes #2115 | https://api.github.com/repos/huggingface/datasets/issues/2118/timeline | null | true |
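A small sketch of the pattern this PR title describes: touch `os.environ` in place and restore it afterwards, instead of replacing it with a plain-dict copy. The variable name below is hypothetical.

```python
import os

_prev = os.environ.get("HF_DATASETS_SOME_FLAG")   # hypothetical variable name
os.environ["HF_DATASETS_SOME_FLAG"] = "1"
try:
    pass  # ... run the code that needs the modified environment ...
finally:
    if _prev is None:
        del os.environ["HF_DATASETS_SOME_FLAG"]
    else:
        os.environ["HF_DATASETS_SOME_FLAG"] = _prev
```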
https://api.github.com/repos/huggingface/datasets/issues/2117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2117/comments | https://api.github.com/repos/huggingface/datasets/issues/2117/events | https://github.com/huggingface/datasets/issues/2117 | 841,535,283 | MDU6SXNzdWU4NDE1MzUyODM= | 2,117 | load_metric from local "glue.py" meet error 'NoneType' object is not callable | {
"login": "Frankie123421",
"id": 54012361,
"node_id": "MDQ6VXNlcjU0MDEyMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Frankie123421",
"html_url": "https://github.com/Frankie123421",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions",
"organizations_url": "https://api.github.com/users/Frankie123421/orgs",
"repos_url": "https://api.github.com/users/Frankie123421/repos",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"received_events_url": "https://api.github.com/users/Frankie123421/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@Frankie123421 what was the resolution to this?",
"> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric",
"thank you!"
] | 1,616,726,122,000 | 1,629,927,845,000 | 1,616,726,426,000 | NONE | null | null | Running the following:
```python
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
```
fails with:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-7ab77a465d81> in <module>
      1 actual_task = "mnli" if task == "mnli-mm" else task
      2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task)
----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task)

~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
    508             keep_in_memory=keep_in_memory,
    509             experiment_id=experiment_id,
--> 510             **metric_init_kwargs,
    511         )
    512 

TypeError: 'NoneType' object is not callable
```
Please help. | https://api.github.com/repos/huggingface/datasets/issues/2117/timeline | null | false |
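Per the comment thread, the resolution was to point `load_metric` at the metric script rather than the dataset script. A sketch of the corrected calls, assuming `glue_metric.py` is a local copy of the GLUE metric script:

```python
from datasets import load_dataset, load_metric

task = "mnli"  # example value; in the original snippet `task` is defined elsewhere
actual_task = "mnli" if task == "mnli-mm" else task

dataset = load_dataset(path='/home/glue.py', name=actual_task)         # dataset script
metric = load_metric(path='/home/glue_metric.py', name=actual_task)    # metric script, not the dataset script
```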
https://api.github.com/repos/huggingface/datasets/issues/2116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2116/comments | https://api.github.com/repos/huggingface/datasets/issues/2116/events | https://github.com/huggingface/datasets/issues/2116 | 841,481,292 | MDU6SXNzdWU4NDE0ODEyOTI= | 2,116 | Creating custom dataset results in error while calling the map() function | {
"login": "GeetDsa",
"id": 13940397,
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeetDsa",
"html_url": "https://github.com/GeetDsa",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe."
] | 1,616,719,066,000 | 1,617,201,032,000 | 1,617,201,032,000 | NONE | null | null | Calling `map()` from the `datasets` library results in an error when defining a custom dataset.
Reproducible example:
```
import datasets

class MyDataset(datasets.Dataset):
    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences

    def __len__(self):
        "Denotes the total number of samples"
        return len(self.samples)

    def __getitem__(self, index):
        "Generates one sample of data"
        # Select sample
        # Load data and get label
        samples = self.samples[index]
        return samples

def preprocess_function_train(examples):
    inputs = examples
    labels = [example + tokenizer.eos_token for example in examples]
    inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs = inputs
    model_inputs["labels"] = labels["input_ids"]
    print("about to return")
    return model_inputs

## train["sentence"] is a dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
    preprocess_function_train,
    batched=True,
    batch_size=32
)
```
Stack trace of error:
```
Traceback (most recent call last):
File "dir/train_generate.py", line 362, in <module>
main()
File "dir/train_generate.py", line 245, in main
train_dataset = train_dataset.map(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
return self._map_single(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
unformatted_columns = set(self.column_names) - set(self._format_columns or [])
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
``` | https://api.github.com/repos/huggingface/datasets/issues/2116/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2115/comments | https://api.github.com/repos/huggingface/datasets/issues/2115/events | https://github.com/huggingface/datasets/issues/2115 | 841,283,974 | MDU6SXNzdWU4NDEyODM5NzQ= | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | {
"login": "leleamol",
"id": 19983848,
"node_id": "MDQ6VXNlcjE5OTgzODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leleamol",
"html_url": "https://github.com/leleamol",
"followers_url": "https://api.github.com/users/leleamol/followers",
"following_url": "https://api.github.com/users/leleamol/following{/other_user}",
"gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leleamol/subscriptions",
"organizations_url": "https://api.github.com/users/leleamol/orgs",
"repos_url": "https://api.github.com/users/leleamol/repos",
"events_url": "https://api.github.com/users/leleamol/events{/privacy}",
"received_events_url": "https://api.github.com/users/leleamol/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,704,159,000 | 1,616,771,632,000 | 1,616,771,632,000 | NONE | null | null | In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'.
This causes subsequent calls like the following to fail:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes no keyword arguments
`
It looks like the following line in the `datasets` `map()` implementation introduced this behavior.
https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421
Here is the test script to reproduce this error.
```
from datasets import load_dataset
from transformers import AutoTokenizer
import os

def test_train():
    model_checkpoint = "distilgpt2"
    datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    def tokenize_function(examples):
        y = tokenizer(examples['text'], truncation=True, max_length=64)
        return y

    x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}")
    print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}")

    datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])

    print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}")
    x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}")

if __name__ == "__main__":
    test_train()
```
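A possible workaround until this is fixed (just a sketch; it assumes that re-assigning the saved mapping is enough to undo the replacement):
```python
import os

# keep a reference to the real os._Environ mapping before calling map()
original_environ = os.environ
datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])
# restore it in case map() swapped it for a plain dict
os.environ = original_environ
```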
| https://api.github.com/repos/huggingface/datasets/issues/2115/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2114/comments | https://api.github.com/repos/huggingface/datasets/issues/2114/events | https://github.com/huggingface/datasets/pull/2114 | 841,207,878 | MDExOlB1bGxSZXF1ZXN0NjAwOTc1MTA3 | 2,114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Awesome thank you :)\r\n> This is really cool\r\n> \r\n> I left a few comments.\r\n> \r\n> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. Can you only keep 2 lines instead ?\r\n\r\nHi @lhoestq, I did my best to improve the README files, while I also decreased dummy data examples. I included one more legal dataset.",
"@lhoestq thanks for your review.\r\n\r\n I shortened the examples in README files and removed `DEFAULT_CONFIG_BUILDER` from `eu_regulatory_ir.py`."
] | 1,616,697,617,000 | 1,617,187,130,000 | 1,617,187,130,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2114",
"html_url": "https://github.com/huggingface/datasets/pull/2114",
"diff_url": "https://github.com/huggingface/datasets/pull/2114.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2114.patch"
} | Add support for three legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | https://api.github.com/repos/huggingface/datasets/issues/2114/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2113/comments | https://api.github.com/repos/huggingface/datasets/issues/2113/events | https://github.com/huggingface/datasets/pull/2113 | 841,191,303 | MDExOlB1bGxSZXF1ZXN0NjAwOTYxMDEz | 2,113 | Implement Dataset as context manager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,696,310,000 | 1,617,190,214,000 | 1,617,179,411,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2113",
"html_url": "https://github.com/huggingface/datasets/pull/2113",
"diff_url": "https://github.com/huggingface/datasets/pull/2113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2113.patch"
} | When used as a context manager, the dataset is safely cleaned up if an exception is raised inside the `with` block.
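A minimal usage sketch (the `Dataset.from_file` entry point is only illustrative; any way of obtaining a dataset works the same):
```python
from datasets import Dataset

# resources held by the dataset are released on exit, even if the block raises
with Dataset.from_file("path/to/data.arrow") as dataset:
    dataset = dataset.map(lambda example: example)  # any processing that might fail
```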
This avoids the chained error:
> During handling of the above exception, another exception occurred: | https://api.github.com/repos/huggingface/datasets/issues/2113/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2112/comments | https://api.github.com/repos/huggingface/datasets/issues/2112/events | https://github.com/huggingface/datasets/pull/2112 | 841,098,008 | MDExOlB1bGxSZXF1ZXN0NjAwODgyMjA0 | 2,112 | Support for legal NLP datasets (EURLEX and ECtHR cases) | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,689,457,000 | 1,616,697,571,000 | 1,616,697,271,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2112",
"html_url": "https://github.com/huggingface/datasets/pull/2112",
"diff_url": "https://github.com/huggingface/datasets/pull/2112.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2112.patch"
} | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084) | https://api.github.com/repos/huggingface/datasets/issues/2112/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2111/comments | https://api.github.com/repos/huggingface/datasets/issues/2111/events | https://github.com/huggingface/datasets/pull/2111 | 841,082,087 | MDExOlB1bGxSZXF1ZXN0NjAwODY4OTg5 | 2,111 | Compute WER metric iteratively | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.\r\n\r\nBy default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).\r\n\r\nSome users might still want to use the old implementation.",
"@lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`...",
"Not sure about the name, if you can improve it feel free to do so ^^'\r\nThe old implementation computes the WER on the concatenation of all the input texts, while the new one makes WER measures computation independent for each reference/prediction pair.\r\nThat's why I thought of `concatenate_texts`",
"@lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.\r\n\r\nFrom the end user perspective I think it might make more sense: how do you want to compute the metric?\r\n- all in once, more RAM memory needed?\r\n- iteratively, less RAM requirements?\r\n\r\nBecause of that I was thinking of something like `iter` or `iterative`...",
"Personally like `concatenate_texts` better since I feel like `iter` or `iterate` are quite vague",
"Therefore, you can merge... ;)",
"Ok ! merging :)"
] | 1,616,688,408,000 | 1,617,693,643,000 | 1,617,693,643,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2111",
"html_url": "https://github.com/huggingface/datasets/pull/2111",
"diff_url": "https://github.com/huggingface/datasets/pull/2111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2111.patch"
} | Compute WER metric iteratively to avoid MemoryError.
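A sketch of the per-pair computation (assuming the `jiwer` backend and its `compute_measures` helper; this is illustrative, not necessarily the exact code of the PR):
```python
from jiwer import compute_measures

def iterative_wer(predictions, references):
    # accumulate edit operations pair by pair instead of concatenating all texts
    incorrect, total = 0, 0
    for prediction, reference in zip(predictions, references):
        measures = compute_measures(reference, prediction)
        incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
        total += measures["substitutions"] + measures["deletions"] + measures["hits"]
    return incorrect / total
```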
Fix #2078. | https://api.github.com/repos/huggingface/datasets/issues/2111/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2110/comments | https://api.github.com/repos/huggingface/datasets/issues/2110/events | https://github.com/huggingface/datasets/pull/2110 | 840,794,995 | MDExOlB1bGxSZXF1ZXN0NjAwNjI1NDQ5 | 2,110 | Fix incorrect assertion in builder.py | {
"login": "dreamgonfly",
"id": 2340721,
"node_id": "MDQ6VXNlcjIzNDA3MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2340721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dreamgonfly",
"html_url": "https://github.com/dreamgonfly",
"followers_url": "https://api.github.com/users/dreamgonfly/followers",
"following_url": "https://api.github.com/users/dreamgonfly/following{/other_user}",
"gists_url": "https://api.github.com/users/dreamgonfly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dreamgonfly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreamgonfly/subscriptions",
"organizations_url": "https://api.github.com/users/dreamgonfly/orgs",
"repos_url": "https://api.github.com/users/dreamgonfly/repos",
"events_url": "https://api.github.com/users/dreamgonfly/events{/privacy}",
"received_events_url": "https://api.github.com/users/dreamgonfly/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\nSo unfortunately we can't use this assertion you suggested",
"> Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\n> So unfortunately we can't use this assertion you suggested\r\n\r\nThen it would be better to just remove the assertion, because the existing assertion does nothing."
] | 1,616,668,760,000 | 1,618,234,383,000 | 1,618,234,383,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2110",
"html_url": "https://github.com/huggingface/datasets/pull/2110",
"diff_url": "https://github.com/huggingface/datasets/pull/2110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2110.patch"
} | Fix incorrect num_examples comparison assertion in builder.py | https://api.github.com/repos/huggingface/datasets/issues/2110/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2109/comments | https://api.github.com/repos/huggingface/datasets/issues/2109/events | https://github.com/huggingface/datasets/pull/2109 | 840,746,598 | MDExOlB1bGxSZXF1ZXN0NjAwNTg1MzM5 | 2,109 | Add more issue templates and customize issue template chooser | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussion to make Questions (instead of Issues).\r\n\r\nI could also add some other templates: Bug, Feature Request,...",
"@theo-m we wrote our same comments at the same time... 😉 "
] | 1,616,665,313,000 | 1,618,813,211,000 | 1,618,813,211,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2109",
"html_url": "https://github.com/huggingface/datasets/pull/2109",
"diff_url": "https://github.com/huggingface/datasets/pull/2109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2109.patch"
} | When opening an issue, it is not evident to users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but it is not very visible. This is why many users end up choosing the `add-dataset` template (which is more visible) for issues that are not actually requesting the addition of a new dataset.
~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making easier for the users to choose it.~~
With this PR:
- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`
- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions | https://api.github.com/repos/huggingface/datasets/issues/2109/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2108/comments | https://api.github.com/repos/huggingface/datasets/issues/2108/events | https://github.com/huggingface/datasets/issues/2108 | 840,181,055 | MDU6SXNzdWU4NDAxODEwNTU= | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faisis_index? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [] | 1,616,621,536,000 | 1,616,653,903,000 | null | NONE | null | null | Motivation - Some FAISS indexes, like IVF, include a training step that clusters the dataset vectors into a given number of cells. It would be nice if we could use a GPU for the training step and then convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6). | https://api.github.com/repos/huggingface/datasets/issues/2108/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2107/comments | https://api.github.com/repos/huggingface/datasets/issues/2107/events | https://github.com/huggingface/datasets/pull/2107 | 839,495,825 | MDExOlB1bGxSZXF1ZXN0NTk5NTAxODE5 | 2,107 | Metadata validation | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.\r\n\r\nI'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well where it is, if anything we could move it out of utils and into `datasets` as it could be used by e.g. `DatasetDict` so that users can pull the metadata easily rather than have to reparse the readme.\r\n",
"Ok that makes sense if we want to have functions that parse the metadata for users",
"Hi @theo-m @lhoestq \r\n\r\nThis seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)\r\n\r\nSorry for the delay in responding.\r\n\r\nThanks,\r\nGunjan",
"> Hi @theo-m @lhoestq\r\n> \r\n> This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)\r\n> \r\n> Sorry for the delay in responding.\r\n> \r\n> Thanks,\r\n> Gunjan\r\n\r\nHi @gchhablani, yes I think at the moment the best solution is for you to write in `datasets-tagging`, as the PR will allow us to discuss and review, even though the work will be ported to this repo in the end. \r\nOr we wait for this to be merged and you reopen the PR here, your call :)",
"cc @abhi1thakur "
] | 1,616,575,961,000 | 1,619,425,634,000 | 1,619,425,633,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2107",
"html_url": "https://github.com/huggingface/datasets/pull/2107",
"diff_url": "https://github.com/huggingface/datasets/pull/2107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2107.patch"
} | - `pydantic` metadata schema with dedicated validators against our taxonomy (illustrative sketch below)
- CI script to validate new changes against this schema and start a virtuous loop
- soft validation on task ids since we expect the taxonomy to undergo some changes in the near future
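An illustrative sketch of what such a schema can look like (field names and the allowed-value set below are placeholders, not the actual schema of this PR):
```python
from typing import List
from pydantic import BaseModel, validator

KNOWN_LANGUAGES = {"en", "fr", "de"}  # stand-in for the real taxonomy

class DatasetMetadata(BaseModel):
    languages: List[str]
    task_ids: List[str]

    @validator("languages", each_item=True)
    def language_must_be_in_taxonomy(cls, value):
        if value not in KNOWN_LANGUAGES:
            raise ValueError(f"unknown language code: {value}")
        return value
```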
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | https://api.github.com/repos/huggingface/datasets/issues/2107/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2106/comments | https://api.github.com/repos/huggingface/datasets/issues/2106/events | https://github.com/huggingface/datasets/issues/2106 | 839,084,264 | MDU6SXNzdWU4MzkwODQyNjQ= | 2,106 | WMT19 Dataset for Kazakh-English is not formatted correctly | {
"login": "trina731",
"id": 22580542,
"node_id": "MDQ6VXNlcjIyNTgwNTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trina731",
"html_url": "https://github.com/trina731",
"followers_url": "https://api.github.com/users/trina731/followers",
"following_url": "https://api.github.com/users/trina731/following{/other_user}",
"gists_url": "https://api.github.com/users/trina731/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trina731/subscriptions",
"organizations_url": "https://api.github.com/users/trina731/orgs",
"repos_url": "https://api.github.com/users/trina731/repos",
"events_url": "https://api.github.com/users/trina731/events{/privacy}",
"received_events_url": "https://api.github.com/users/trina731/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is only `kk` text and must be appended at the end of the `kk` text of the **previous** line\r\n- L1247 and L1248 are only `kk` texts and must be inserted at the **beginning** of the `kk` text of the next line\r\n- (and there are many others)\r\n\r\nIt would be nice to have a corrected version of this file ! The file is available in the `wmt/news-commentary` repository on the Datasets Hub here:\r\nhttps://huggingface.co/datasets/wmt/news-commentary/tree/main/v14/training\r\n\r\nThen maybe we can notify the WMT authors and host the corrected version somewhere"
] | 1,616,530,487,000 | 1,616,708,180,000 | null | NONE | null | null | In addition to the bug of languages being switched from Issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have a one-off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді.
>
> Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды.
>
> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
As you can see, line 95 has only the Kazakh translation, which should be part of line 96. This shifts all of the following English-Kazakh translation pairs by one, rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Hugging Face. By running this code
```
import datasets
from datasets import load_dataset

dataset = load_dataset('wmt19', 'kk-en')
for key in dataset['train']['translation']:
    if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:
        print(key['en'])
        print(key['kk'])
        break
```
we get:
> 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.
which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one.
Please let me know if you have any ideas to fix this one-off error in the dataset, or if this can be fixed by Hugging Face. | https://api.github.com/repos/huggingface/datasets/issues/2106/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2105/comments | https://api.github.com/repos/huggingface/datasets/issues/2105/events | https://github.com/huggingface/datasets/issues/2105 | 839,059,226 | MDU6SXNzdWU4MzkwNTkyMjY= | 2,105 | Request to remove S2ORC dataset | {
"login": "kyleclo",
"id": 13603748,
"node_id": "MDQ6VXNlcjEzNjAzNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyleclo",
"html_url": "https://github.com/kyleclo",
"followers_url": "https://api.github.com/users/kyleclo/followers",
"following_url": "https://api.github.com/users/kyleclo/following{/other_user}",
"gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions",
"organizations_url": "https://api.github.com/users/kyleclo/orgs",
"repos_url": "https://api.github.com/users/kyleclo/repos",
"events_url": "https://api.github.com/users/kyleclo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyleclo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?",
"Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there.\r\n\r\nIs it OK? Are you planning to eventually delete it? Thank you.",
"Hi! Sorry I missed @yjernite 's previous message, thanks for responding! \r\n\r\nIs there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it? "
] | 1,616,528,586,000 | 1,628,104,682,000 | null | NONE | null | null | Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks! | https://api.github.com/repos/huggingface/datasets/issues/2105/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2104/comments | https://api.github.com/repos/huggingface/datasets/issues/2104/events | https://github.com/huggingface/datasets/issues/2104 | 839,027,834 | MDU6SXNzdWU4MzkwMjc4MzQ= | 2,104 | Trouble loading wiki_movies | {
"login": "adityaarunsinghal",
"id": 35391599,
"node_id": "MDQ6VXNlcjM1MzkxNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adityaarunsinghal",
"html_url": "https://github.com/adityaarunsinghal",
"followers_url": "https://api.github.com/users/adityaarunsinghal/followers",
"following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}",
"gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions",
"organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs",
"repos_url": "https://api.github.com/users/adityaarunsinghal/repos",
"events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}",
"received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks a lot! That solved it and I was able to upload a model trained on it as well :)"
] | 1,616,525,994,000 | 1,617,664,646,000 | null | NONE | null | null | Hello,
I am trying to load_dataset("wiki_movies") and it gives me this error -
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py`
Trying to do `python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wiki_movies \` also gives the same error.
Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago.
Thank you! | https://api.github.com/repos/huggingface/datasets/issues/2104/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2103/comments | https://api.github.com/repos/huggingface/datasets/issues/2103/events | https://github.com/huggingface/datasets/issues/2103 | 838,946,916 | MDU6SXNzdWU4Mzg5NDY5MTY= | 2,103 | citation, homepage, and license fields of `dataset_info.json` are duplicated many times | {
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease comment if you'd like to improve this and open a PR :)"
] | 1,616,519,889,000 | 1,617,719,999,000 | 1,617,719,999,000 | NONE | null | null | This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n
```
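A minimal sketch of the deduplication suggested in the comment above (keep only distinct values before joining; all names here are illustrative):
```python
def unique_join(values, sep="\n\n"):
    # collapse identical per-process values into a single entry
    distinct = []
    for value in values:
        if value and value not in distinct:
            distinct.append(value)
    return sep.join(distinct)

# e.g. citation = unique_join(info.citation for info in per_process_infos)
```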
@lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times. | https://api.github.com/repos/huggingface/datasets/issues/2103/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2102/comments | https://api.github.com/repos/huggingface/datasets/issues/2102/events | https://github.com/huggingface/datasets/pull/2102 | 838,794,090 | MDExOlB1bGxSZXF1ZXN0NTk4OTEyNzUw | 2,102 | Move Dataset.to_csv to csv module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | null | [] | null | [] | 1,616,510,146,000 | 1,616,594,855,000 | 1,616,594,854,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2102",
"html_url": "https://github.com/huggingface/datasets/pull/2102",
"diff_url": "https://github.com/huggingface/datasets/pull/2102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2102.patch"
} | Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`. | https://api.github.com/repos/huggingface/datasets/issues/2102/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2101/comments | https://api.github.com/repos/huggingface/datasets/issues/2101/events | https://github.com/huggingface/datasets/pull/2101 | 838,586,184 | MDExOlB1bGxSZXF1ZXN0NTk4NzQzMDM4 | 2,101 | MIAM dataset - new citation details | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nLooks like there's a unicode error in the new citation in the miam.py file.\r\nCould you try to fix it ? Not sure from which character it comes from though\r\n\r\nYou can test if it works on your side with\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam\r\n```",
"Unicode error resolved!"
] | 1,616,496,083,000 | 1,616,522,890,000 | 1,616,522,890,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2101",
"html_url": "https://github.com/huggingface/datasets/pull/2101",
"diff_url": "https://github.com/huggingface/datasets/pull/2101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2101.patch"
} | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | https://api.github.com/repos/huggingface/datasets/issues/2101/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2100/comments | https://api.github.com/repos/huggingface/datasets/issues/2100/events | https://github.com/huggingface/datasets/pull/2100 | 838,574,631 | MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0 | 2,100 | Fix deprecated warning message and docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.",
"`dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.\r\nThis has to be deprecated in `DatasetDict` as well.\r\nAnd `Dataset.dictionary_encode_column` doesn't exist indeed.",
"Thanks @lhoestq. I have fixed deprecated for `dictionary_encode_column_`."
] | 1,616,495,272,000 | 1,616,573,981,000 | 1,616,522,629,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2100",
"html_url": "https://github.com/huggingface/datasets/pull/2100",
"diff_url": "https://github.com/huggingface/datasets/pull/2100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2100.patch"
} | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | https://api.github.com/repos/huggingface/datasets/issues/2100/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2099/comments | https://api.github.com/repos/huggingface/datasets/issues/2099/events | https://github.com/huggingface/datasets/issues/2099 | 838,523,819 | MDU6SXNzdWU4Mzg1MjM4MTk= | 2,099 | load_from_disk takes a long time to load local dataset | {
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?",
"It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n",
"Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?",
"yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.",
"Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.",
"Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`",
"Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?",
"After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs."
] | 1,616,491,717,000 | 1,616,519,536,000 | 1,616,519,536,000 | NONE | null | null | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).
Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?
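For reference, a sketch of pinning the dtypes explicitly via the `features` argument of `map` (the column and function names are taken from the discussion above and are placeholders):
```python
from datasets import Features, Sequence, Value

new_features = Features(
    {**my_dataset.features, "seq": Sequence(Value("uint8")), "actual_len": Value("int32")}
)
my_dataset = my_dataset.map(add_len_and_seq, features=new_features)
```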
Tagging @lhoestq since you seem to be working on these issues and PRs :) | https://api.github.com/repos/huggingface/datasets/issues/2099/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2098/comments | https://api.github.com/repos/huggingface/datasets/issues/2098/events | https://github.com/huggingface/datasets/issues/2098 | 838,447,959 | MDU6SXNzdWU4Mzg0NDc5NTk= | 2,098 | SQuAD version | {
"login": "h-peng17",
"id": 39556019,
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h-peng17",
"html_url": "https://github.com/h-peng17",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55",
"Got it. Thank you~"
] | 1,616,485,674,000 | 1,616,752,134,000 | 1,616,752,134,000 | NONE | null | null | Hi~
I want to train on the SQuAD dataset. Which version of SQuAD is it: 1.1 or 1.0? I'm new to QA and couldn't find any description of the version. | https://api.github.com/repos/huggingface/datasets/issues/2098/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2097/comments | https://api.github.com/repos/huggingface/datasets/issues/2097/events | https://github.com/huggingface/datasets/pull/2097 | 838,105,289 | MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3 | 2,097 | fixes issue #1110 by descending further if `obj["_type"]` is a dict | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,446,855,000 | 1,616,446,871,000 | 1,616,446,871,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2097",
"html_url": "https://github.com/huggingface/datasets/pull/2097",
"diff_url": "https://github.com/huggingface/datasets/pull/2097.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2097.patch"
} | Check metrics | https://api.github.com/repos/huggingface/datasets/issues/2097/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2096/comments | https://api.github.com/repos/huggingface/datasets/issues/2096/events | https://github.com/huggingface/datasets/issues/2096 | 838,038,379 | MDU6SXNzdWU4MzgwMzgzNzk= | 2,096 | CoNLL 2003 dataset not including German | {
"login": "rxian",
"id": 8406802,
"node_id": "MDQ6VXNlcjg0MDY4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rxian",
"html_url": "https://github.com/rxian",
"followers_url": "https://api.github.com/users/rxian/followers",
"following_url": "https://api.github.com/users/rxian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxian/subscriptions",
"organizations_url": "https://api.github.com/users/rxian/orgs",
"repos_url": "https://api.github.com/users/rxian/repos",
"events_url": "https://api.github.com/users/rxian/events{/privacy}",
"received_events_url": "https://api.github.com/users/rxian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,616,441,036,000 | 1,617,097,535,000 | null | NONE | null | null | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it can be found in some places on the internet, such as GitHub. I could help add the German data to the hub, unless there are some copyright issues that I am unaware of...
This is considering that many works use the union of the CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`, e.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf).
## Adding a Dataset
- **Name:** CoNLL 2003 German
- **Paper:** https://www.aclweb.org/anthology/W03-0419/
- **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
| https://api.github.com/repos/huggingface/datasets/issues/2096/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2093/comments | https://api.github.com/repos/huggingface/datasets/issues/2093/events | https://github.com/huggingface/datasets/pull/2093 | 837,209,211 | MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx | 2,093 | Fix: Allows a feature to be named "_type" | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.\r\n# So we need the conversion of features to dict to work.\r\n# You can test that using `dataclasses._asdict_inner`.\r\n# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict\r\nfrom dataclasses import _asdict_inner \r\n\r\nf = Features({\"_type\": Value(\"string\")})\r\nreloaded_f = Features.from_dict(_asdict_inner(f, dict))\r\nassert reloaded_f == f\r\n```",
"Sure, i will add a test. \r\nOne question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue?",
"The benchmark has a bit of noise, the values are fine ;)\r\nespecially in the change you did since the overhead added is negligible.",
"Ok, i added the test you described above. \r\n\r\nI avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR!"
] | 1,616,368,917,000 | 1,616,682,954,000 | 1,616,682,954,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2093",
"html_url": "https://github.com/huggingface/datasets/pull/2093",
"diff_url": "https://github.com/huggingface/datasets/pull/2093.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2093.patch"
} | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | https://api.github.com/repos/huggingface/datasets/issues/2093/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2092/comments | https://api.github.com/repos/huggingface/datasets/issues/2092/events | https://github.com/huggingface/datasets/issues/2092 | 836,984,043 | MDU6SXNzdWU4MzY5ODQwNDM= | 2,092 | How to disable making arrow tables in load_dataset ? | {
"login": "Jeevesh8",
"id": 48825663,
"node_id": "MDQ6VXNlcjQ4ODI1NjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeevesh8",
"html_url": "https://github.com/Jeevesh8",
"followers_url": "https://api.github.com/users/Jeevesh8/followers",
"following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions",
"organizations_url": "https://api.github.com/users/Jeevesh8/orgs",
"repos_url": "https://api.github.com/users/Jeevesh8/repos",
"events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jeevesh8/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !",
"People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n",
"@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?",
"Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.",
"@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?",
"We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub"
] | 1,616,302,207,000 | 1,616,783,860,000 | null | NONE | null | null | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | https://api.github.com/repos/huggingface/datasets/issues/2092/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2091/comments | https://api.github.com/repos/huggingface/datasets/issues/2091/events | https://github.com/huggingface/datasets/pull/2091 | 836,831,403 | MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3 | 2,091 | Fix copy snippet in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | 1,616,252,902,000 | 1,616,574,050,000 | 1,616,519,911,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2091",
"html_url": "https://github.com/huggingface/datasets/pull/2091",
"diff_url": "https://github.com/huggingface/datasets/pull/2091.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2091.patch"
} | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | https://api.github.com/repos/huggingface/datasets/issues/2091/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2090/comments | https://api.github.com/repos/huggingface/datasets/issues/2090/events | https://github.com/huggingface/datasets/pull/2090 | 836,807,498 | MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy | 2,090 | Add machine translated multilingual STS benchmark dataset | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello dear maintainer, are there any comments or questions about this PR?",
"@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...",
"Should be clean for merge IMO.",
"@lhoestq CI is green. ;-)",
"Thanks again ! this is awesome :)",
"Thanks for merging. :-)"
] | 1,616,246,887,000 | 1,617,024,282,000 | 1,617,022,815,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2090",
"html_url": "https://github.com/huggingface/datasets/pull/2090",
"diff_url": "https://github.com/huggingface/datasets/pull/2090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2090.patch"
} | also see here https://github.com/PhilipMay/stsb-multi-mt | https://api.github.com/repos/huggingface/datasets/issues/2090/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2089/comments | https://api.github.com/repos/huggingface/datasets/issues/2089/events | https://github.com/huggingface/datasets/issues/2089 | 836,788,019 | MDU6SXNzdWU4MzY3ODgwMTk= | 2,089 | Add documentaton for dataset README.md files | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)",
"@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.",
"We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them",
"@lhoestq what is the status on this? Did you add documentation?",
"Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources",
"@lhoestq is there something like this form Models?",
"I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this"
] | 1,616,240,678,000 | 1,626,111,700,000 | null | CONTRIBUTOR | null | null | Hi,
the dataset README files have special headers.
Somehow, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?
Thanks
Philip | https://api.github.com/repos/huggingface/datasets/issues/2089/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2088/comments | https://api.github.com/repos/huggingface/datasets/issues/2088/events | https://github.com/huggingface/datasets/pull/2088 | 836,763,733 | MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1 | 2,088 | change bibtex template to author instead of authors | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Trailing whitespace was removed. So more changes in diff than just this fix."
] | 1,616,232,224,000 | 1,616,514,012,000 | 1,616,514,012,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2088",
"html_url": "https://github.com/huggingface/datasets/pull/2088",
"diff_url": "https://github.com/huggingface/datasets/pull/2088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2088.patch"
} | Hi,
IMO, when using BibTeX, Author should be used instead of Authors.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | https://api.github.com/repos/huggingface/datasets/issues/2088/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2087/comments | https://api.github.com/repos/huggingface/datasets/issues/2087/events | https://github.com/huggingface/datasets/pull/2087 | 836,587,392 | MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2 | 2,087 | Update metadata if dataset features are modified | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.",
"Awesome thank you !\r\nYes this approach with a wrapper is good :)",
"@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip install git+https://github.com/mariosasko/datasets-1.git@update-metadata\r\n```\r\nin the first cell of the notebook that is attached to the linked issue.\r\n\r\nThe CI failure is unrelated I think (building the docs locally doesn't throw an error).",
"The CI fail for the docs has been fixed on master.\r\nMerging :)"
] | 1,616,205,923,000 | 1,617,960,333,000 | 1,617,960,333,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2087",
"html_url": "https://github.com/huggingface/datasets/pull/2087",
"diff_url": "https://github.com/huggingface/datasets/pull/2087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2087.patch"
} | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| https://api.github.com/repos/huggingface/datasets/issues/2087/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2086/comments | https://api.github.com/repos/huggingface/datasets/issues/2086/events | https://github.com/huggingface/datasets/pull/2086 | 836,249,587 | MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz | 2,086 | change user permissions to -rw-r--r-- | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. "
] | 1,616,177,696,000 | 1,616,594,344,000 | 1,616,594,344,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2086",
"html_url": "https://github.com/huggingface/datasets/pull/2086",
"diff_url": "https://github.com/huggingface/datasets/pull/2086.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2086.patch"
} | Fix for #2065 | https://api.github.com/repos/huggingface/datasets/issues/2086/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2085/comments | https://api.github.com/repos/huggingface/datasets/issues/2085/events | https://github.com/huggingface/datasets/pull/2085 | 835,870,994 | MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2 | 2,085 | Fix max_wait_time in requests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,152,946,000 | 1,616,513,798,000 | 1,616,513,797,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2085",
"html_url": "https://github.com/huggingface/datasets/pull/2085",
"diff_url": "https://github.com/huggingface/datasets/pull/2085.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2085.patch"
} | it was handled as a min time, not max cc @SBrandeis | https://api.github.com/repos/huggingface/datasets/issues/2085/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2084/comments | https://api.github.com/repos/huggingface/datasets/issues/2084/events | https://github.com/huggingface/datasets/issues/2084 | 835,750,671 | MDU6SXNzdWU4MzU3NTA2NzE= | 2,084 | CUAD - Contract Understanding Atticus Dataset | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"+1 on this request"
] | 1,616,146,063,000 | 1,618,563,044,000 | 1,618,563,044,000 | CONTRIBUTOR | null | null | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| https://api.github.com/repos/huggingface/datasets/issues/2084/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2083/comments | https://api.github.com/repos/huggingface/datasets/issues/2083/events | https://github.com/huggingface/datasets/issues/2083 | 835,695,425 | MDU6SXNzdWU4MzU2OTU0MjU= | 2,083 | `concatenate_datasets` throws error when changing the order of datasets to concatenate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)). I'll try to fix this ASAP."
] | 1,616,142,588,000 | 1,617,960,333,000 | 1,617,960,333,000 | MEMBER | null | null | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not be, IMO.
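For context, a minimal sketch of the pattern involved (toy data only; the actual reproduction below uses Common Voice loaded via `load_dataset` plus `remove_columns`, so this toy version may not trigger the error by itself):

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a", "b"], "extra": [0, 1]})
ds_b = Dataset.from_dict({"text": ["c", "d"]})

# drop the extra column so both datasets share the same features
ds_a = ds_a.remove_columns(["extra"])

concatenate_datasets([ds_b, ds_a])  # one order may work...
concatenate_datasets([ds_a, ds_b])  # ...while the other may raise, although the features are identical
```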
Here is a google colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing | https://api.github.com/repos/huggingface/datasets/issues/2083/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2082/comments | https://api.github.com/repos/huggingface/datasets/issues/2082/events | https://github.com/huggingface/datasets/pull/2082 | 835,401,555 | MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0 | 2,082 | Updated card using information from data statement and datasheet | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,114,378,000 | 1,616,164,149,000 | 1,616,164,149,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2082",
"html_url": "https://github.com/huggingface/datasets/pull/2082",
"diff_url": "https://github.com/huggingface/datasets/pull/2082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2082.patch"
} | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics.
I'll email Eleftheria to see if she has any comments on the card. | https://api.github.com/repos/huggingface/datasets/issues/2082/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2081/comments | https://api.github.com/repos/huggingface/datasets/issues/2081/events | https://github.com/huggingface/datasets/pull/2081 | 835,112,968 | MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4 | 2,081 | Fix docstrings issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | 1,616,091,061,000 | 1,617,806,263,000 | 1,617,806,263,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2081",
"html_url": "https://github.com/huggingface/datasets/pull/2081",
"diff_url": "https://github.com/huggingface/datasets/pull/2081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2081.patch"
} | Fix docstring issues. | https://api.github.com/repos/huggingface/datasets/issues/2081/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2080/comments | https://api.github.com/repos/huggingface/datasets/issues/2080/events | https://github.com/huggingface/datasets/issues/2080 | 835,023,000 | MDU6SXNzdWU4MzUwMjMwMDA= | 2,080 | Multidimensional arrays in a Dataset | {
"login": "vermouthmjl",
"id": 3142085,
"node_id": "MDQ6VXNlcjMxNDIwODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vermouthmjl",
"html_url": "https://github.com/vermouthmjl",
"followers_url": "https://api.github.com/users/vermouthmjl/followers",
"following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}",
"gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions",
"organizations_url": "https://api.github.com/users/vermouthmjl/orgs",
"repos_url": "https://api.github.com/users/vermouthmjl/repos",
"events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}",
"received_events_url": "https://api.github.com/users/vermouthmjl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset)\r\n```\r\n\r\nThis will work but to use it with the torch formatter you must specify the `Array2D` feature type in order to tell the shape:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset, features=Features({\r\n \"bbox\": Array2D(shape=(3, 4), dtype=\"int64\"),\r\n \"input_ids\": Value(\"int64\")\r\n}))\r\ndataset.set_format(\"torch\")\r\nprint(dataset[0]['bbox'])\r\n# tensor([[1, 2, 3, 4],\r\n# [1, 2, 3, 4],\r\n# [1, 2, 3, 4]])\r\n```\r\nIf you don't specify the `Array2D` feature type, then the inferred type will be Sequence(Sequence(Value(\"int64\"))) and therefore the torch formatter will return list of tensors",
"Thanks for the explanation. \r\nWith my original DataFrame, I did\r\n```\r\ndataset = dataset.to_dict(\"list\")\r\n```\r\nand then the rest of the transformation from dictionary works just fine."
] | 1,616,084,954,000 | 1,616,676,413,000 | 1,616,676,413,000 | NONE | null | null | Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires the bounding boxes of each token of a sequence as input. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
The following code results in a conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`):
```
from datasets import Dataset
import pandas as pd
import numpy as np
dataset = pd.DataFrame({
'bbox': [
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
```
Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put a column of 2-D pytorch tensors in a formatted dataset, but I can only get a list of 1-D tensors, a list of arrays, or a list of lists.
```
import torch
from datasets import Dataset
import pandas as pd
dataset = pd.DataFrame({
'bbox': [
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]]
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
def test(examples):
return {'bbbox': torch.Tensor(examples['bbox'])}
dataset = dataset.map(test)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
def test2(examples):
return {'bbbox': torch.stack(examples['bbox'])}
dataset = dataset.map(test2)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
```
Is it possible to support n-D arrays/tensors in datasets?
It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263). | https://api.github.com/repos/huggingface/datasets/issues/2080/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2079/comments | https://api.github.com/repos/huggingface/datasets/issues/2079/events | https://github.com/huggingface/datasets/pull/2079 | 834,920,493 | MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5 | 2,079 | Refactorize Metric.compute signature to force keyword arguments only | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,079,950,000 | 1,616,513,504,000 | 1,616,513,504,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2079",
"html_url": "https://github.com/huggingface/datasets/pull/2079",
"diff_url": "https://github.com/huggingface/datasets/pull/2079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2079.patch"
} | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | https://api.github.com/repos/huggingface/datasets/issues/2079/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2078/comments | https://api.github.com/repos/huggingface/datasets/issues/2078/events | https://github.com/huggingface/datasets/issues/2078 | 834,694,819 | MDU6SXNzdWU4MzQ2OTQ4MTk= | 2,078 | MemoryError when computing WER metric | {
"login": "diego-fustes",
"id": 5707233,
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diego-fustes",
"html_url": "https://github.com/diego-fustes",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compute the WER is defined here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/metrics/wer/wer.py#L93-L94",
"Hi,\r\n\r\nI've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at the end. ",
"I see, this was solved by other thread. Ok, let me know if you want to switch the implementation for any reason :)",
"Thanks for diving into this anyway ^^'\r\nAs you said this actually got solved a few days ago",
"Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs of out memory because it's trying to compute the WER over (too many) test samples?",
"Hi !\r\n\r\nIt's computed iteratively so not sure what could go wrong\r\n\r\nhttps://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106\r\n\r\n@NiklasHoltmeyer what version of `datasets` are you running ?\r\n",
"One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`?\r\n\r\nAs current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. \r\n\r\nThis could be the case, for example, with a single string with all sentences:\r\n```python\r\nresult[\"predicted\"] = \"One sentence. Other sentence.\"\r\n```\r\nor with a __double__ nested list of sentence lists\r\n```python\r\nresult[\"predicted\"] = [[ [\"One sentence.\"], [\"Other sentence\"] ]]\r\n```\r\n\r\nThe user should check the dimensions of the data structure passed to `predictions` and `references`.",
"Hi all,\r\n\r\nin my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with the latest implementation of datasets, or by using the alternative WER function that I've contributed on this [pull request](https://github.com/huggingface/datasets/pull/2169) there shouldn't be memory errors.",
"@lhoestq i was using Datasets==1.5.0 with 1.6.1 it worked (atleast the first run) but 1.5.0 is not compatible with my preprocessing. i cant save my dataset to a parquet file while using the latest datasets version\r\n\r\n-> \r\n```\r\n File \"../preprocess_dataset.py\", line 132, in <module>\r\n pq.write_table(train_dataset.data, f'{resampled_data_dir}/{data_args.dataset_config_name}.train.parquet')\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 1674, in write_table\r\n writer.write_table(table, row_group_size=row_group_size)\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 588, in write_table\r\n self.writer.write_table(table, row_group_size=row_group_size)\r\nTypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)\r\n``` \r\n\r\nif i do \r\n```\r\nimport pyarrow.parquet as pq\r\n...\r\n...\r\npq.write_table(train_dataset.data, 'train.parquet')\r\npq.write_table(eval_dataset.data, 'eval.parquet')\r\n```\r\n\r\nwhile using 1.6.1. and its working with 1.5.0\r\n",
"Hi ! You can pass dataset.data.table instead of dataset.data to pq.write_table",
"This seems to be working so far! Thanks!"
] | 1,616,067,005,000 | 1,619,857,909,000 | 1,617,693,643,000 | NONE | null | null | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
from datasets import load_metric

wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
```
Traceback (most recent call last):
File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
print(wer.compute(predictions=result["predicted"], references=result["target"]))
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
return wer(references, predictions)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
H, S, D, I = _get_operation_counts(truth, hypothesis)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
editops = Levenshtein.editops(source_string, destination_string)
MemoryError
```
My system has more than 10GB of available RAM. Looking at the code, I think that it could be related to the way jiwer does the calculation, as it pastes all the sentences into a single string before calling the Levenshtein editops function.
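As a possible user-side workaround (only a sketch, and assuming a `jiwer` version that exposes `compute_measures`), the edit counts could be accumulated pair by pair instead of over one giant concatenated string, reusing the `result` columns from the snippet above:

```python
from jiwer import compute_measures

# accumulate the counts per sentence pair so no single huge alignment is built
incorrect, total = 0, 0
for prediction, reference in zip(result["predicted"], result["target"]):
    measures = compute_measures(reference, prediction)
    incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
    total += measures["substitutions"] + measures["deletions"] + measures["hits"]

print(incorrect / total)
```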
| https://api.github.com/repos/huggingface/datasets/issues/2078/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2077/comments | https://api.github.com/repos/huggingface/datasets/issues/2077/events | https://github.com/huggingface/datasets/pull/2077 | 834,649,536 | MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw | 2,077 | Bump huggingface_hub version | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"🔥 "
] | 1,616,064,874,000 | 1,616,067,206,000 | 1,616,067,206,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2077",
"html_url": "https://github.com/huggingface/datasets/pull/2077",
"diff_url": "https://github.com/huggingface/datasets/pull/2077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2077.patch"
} | `0.0.2 => 0.0.6` | https://api.github.com/repos/huggingface/datasets/issues/2077/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2076/comments | https://api.github.com/repos/huggingface/datasets/issues/2076/events | https://github.com/huggingface/datasets/issues/2076 | 834,445,296 | MDU6SXNzdWU4MzQ0NDUyOTY= | 2,076 | Issue: Dataset download error | {
"login": "XuhuiZhou",
"id": 20436061,
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuhuiZhou",
"html_url": "https://github.com/XuhuiZhou",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.",
"It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with\r\n```\r\ndatasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications\r\n```",
"Is this a command to update my local files or fix the file Github repo in general? (I am not so familiar with the datasets-cli command here)\r\n\r\nI also took a brief look at the **Sharing your dataset** section, looks like I could fix that locally and push it to the repo? I guess we are \"canonical\" category?",
"This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :)\r\nAnd yes you are right, it is a \"canonical\" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)",
"Hi, thanks for the answer. \r\n\r\nI gave a try to the problem today. But I encountered an upload error: \r\n\r\n```\r\ngit push -u origin fix_link_iwslt\r\nEnter passphrase for key '/home2/xuhuizh/.ssh/id_rsa': \r\nERROR: Permission to huggingface/datasets.git denied to XuhuiZhou.\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\n```\r\n\r\nAny insight here? \r\n\r\nBy the way, when I run the datasets-cli command, it shows the following error, but does not seem to be the error coming from `iwslt.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home2/xuhuizh/anaconda3/envs/UMT/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/datasets_cli.py\", line 35, in main\r\n service.run()\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/test.py\", line 141, in run\r\n try_from_hf_gcs=False,\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 639, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/utils/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz'}\r\n```",
"Hi ! To create a PR on this repo your must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment).\r\nAnd to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need to use the `--ignore_verifications` flag.",
"Hi @XuhuiZhou,\r\n\r\nAs @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. This is so because at HuggingFace Datasets we follow a development model called \"Fork and Pull Model\". You can find more information here:\r\n- [Understanding the GitHub flow](https://guides.github.com/introduction/flow/)\r\n- [Forking Projects](https://guides.github.com/activities/forking/)\r\n\r\nAlternatively, if you find all these steps too complicated, you can use the GitHub official command line tool: [GitHub CLI](https://cli.github.com/). Once installed, in order to create a Pull Request, you only need to use this command:\r\n```shell\r\ngh pr create --web\r\n```\r\nThis utility will automatically create the fork, push your changes and open a Pull Request, under the hood."
] | 1,616,049,366,000 | 1,616,413,951,000 | null | NONE | null | null | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
It would be nice if we could update the script to use the new download links. | https://api.github.com/repos/huggingface/datasets/issues/2076/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2075/comments | https://api.github.com/repos/huggingface/datasets/issues/2075/events | https://github.com/huggingface/datasets/issues/2075 | 834,301,246 | MDU6SXNzdWU4MzQzMDEyNDY= | 2,075 | ConnectionError: Couldn't reach common_voice.py | {
"login": "LifaSun",
"id": 6188893,
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LifaSun",
"html_url": "https://github.com/LifaSun",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?",
"@albertvillanova Thanks! It works well now. "
] | 1,616,030,346,000 | 1,616,236,181,000 | 1,616,236,181,000 | NONE | null | null | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py
Version:
1.4.1
Thanks! @lhoestq @LysandreJik @thomwolf | https://api.github.com/repos/huggingface/datasets/issues/2075/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2074/comments | https://api.github.com/repos/huggingface/datasets/issues/2074/events | https://github.com/huggingface/datasets/pull/2074 | 834,268,463 | MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw | 2,074 | Fix size categories in YAML Tags | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https://github.com/huggingface/datasets-tagging/blob/main/task_set.json",
"Hi @lhoestq,\r\n\r\nThanks for approving.\r\nHow do I add the new categories to the tagging app? What I have added is till `1T` and not `1M`.\r\n\r\nI'll also check the task list :)\r\n\r\nThanks,\r\nGunjan",
"I think you can change it here: https://github.com/huggingface/datasets-tagging/blob/main/tagging_app.py#L412-L423",
"Hi @lhoestq,\r\n\r\nI have made a PR for size categories on `datasets-tagging`\r\n\r\nFor tags, I have thought of adding more tags and categories, based on what I know about the existing datasets, any list will not be exhaustive because the contributors can be very specific or very general. Hence, there could be a continuous process of evaluating existing tags and adding more and more.\r\n\r\n```json\r\n{\r\n \"image-classification\": {\r\n \"description\": \"image classification tasks\",\r\n \"options\": [\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-text-generation\": {\r\n \"description\": \"data-to-text and text transduction tasks such as translation or summarization\",\r\n \"options\": [\r\n \"machine-translation\",\r\n \"sentence-splitting-fusion\",\r\n \"extractive-and-abstractive-summarization\",\r\n \"abstractive-summarization\",\r\n \"extractive-summarization\",\r\n \"multi-document-summarization\",\r\n \"table-to-text\",\r\n \"text-simplification\",\r\n \"explanation-generation\",\r\n \"stuctured-to-text\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-speech-generation\": {\r\n \"description\": \"speech generation tasks\",\r\n \"options\": [\r\n \"text-to-speech\",\r\n \"speech-translation\",\r\n \"other\"\r\n ]\r\n },\r\n\r\n \"conditional-structure-generation\":{\r\n \"description\": \"text or speech to structured data\",\r\n \"options\":[\r\n \"knowlege-graph-mining\",\r\n \"code-generation\",\r\n ]\r\n },\r\n \"question-answering\": {\r\n \"description\": \"question answering tasks\",\r\n \"options\": [\r\n \"open-domain-qa\",\r\n \"closed-domain-qa\",\r\n \"multiple-choice-qa\",\r\n \"extractive-qa\",\r\n \"abstractive-qa\",\r\n \"conversational-qa\",\r\n \"multi-document-qa\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-classification\": {\r\n \"description\": \"speech to label tasks\",\r\n \"options\": [\r\n \"other\"\r\n ]\r\n },\r\n \"sequence-modeling\": {\r\n \"description\": \"such as language, speech or dialogue modeling\",\r\n \"options\": [\r\n \"dialogue-modeling\",\r\n \"language-modeling\",\r\n \"speech-modeling\",\r\n \"multi-turn\",\r\n \"slot-filling\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-recognition\": {\r\n \"description\": \"speech to text tasks\",\r\n \"options\": [\r\n \"automatic-speech-recognition\",\r\n \"other\"\r\n ]\r\n },\r\n \"structure-prediction\": {\r\n \"description\": \"predicting structural properties of the text, such as syntax\",\r\n \"options\": [\r\n \"coreference-resolution\",\r\n \"named-entity-recognition\",\r\n \"part-of-speech-tagging\",\r\n \"parsing\",\r\n \"sentence-segmentation\",\r\n \"single-span-prediction\",\r\n \"multi-span-prediction\",\r\n \"clause-or-phrase-segmentation\",\r\n \"dependency-parsing\",\r\n \"constituency-parsing\",\r\n \"other\"\r\n ]\r\n },\r\n\r\n \"text-classification\": {\r\n \"description\": \"predicting a class index or boolean value\",\r\n \"options\": [\r\n \"acceptability-classification\",\r\n \"entity-linking-classification\",\r\n \"relation-extraction\",\r\n \"common-sense-reasoning\",\r\n \"fact-checking\",\r\n \"intent-classification\",\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"natural-language-inference\",\r\n \"semantic-similarity-classification\",\r\n \"sentiment-classification\",\r\n \"topic-classification\",\r\n \"emotion-classification\",\r\n \"token-classification\",\r\n \"word-sense-disambiguation\",\r\n \"offense-classification\",\r\n \"hate-speech-classification\",\r\n 
\"language-classification\",\r\n \"bias-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-retrieval\": {\r\n \"description\": \"information or text retrieval tasks\",\r\n \"options\": [\r\n \"document-retrieval\",\r\n \"utterance-retrieval\",\r\n \"entity-linking-retrieval\",\r\n \"fact-checking-retrieval\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-scoring\": {\r\n \"description\": \"text scoring tasks, predicting a real valued score for some text\",\r\n \"options\": [\r\n \"semantic-similarity-scoring\",\r\n \"sentiment-scoring\",\r\n \"other\"\r\n ]\r\n },\r\n \"other\": {\r\n \"description\": \"raw data or other task families\",\r\n \"options\": [\r\n \"data-mining\",\r\n \"raw-text\",\r\n \"raw-speech\",\r\n \"raw-image\",\r\n \"other\"\r\n ]\r\n }\r\n}\r\n```\r\nI'll sort this when adding it to the .json. Also, I'll change categories according to this if this seems okay to you and commit it to this PR.\r\n\r\nI'll also fix spelling others, and some categories which are partially correct, for e.g. `other-machine-translation` to the correct tag.\r\nLastly, with the options also we can add a description to make it easier for the users to understand what we mean by each option. Example, for \"emotion-classification\", we can explain what kinds of data we are talking about, or what we mean by \"single-span-prediction\", etc.",
"Good idea thank you ! Can you open a PR on datasets-tagging for the tasks as well ?\r\nAlso you can update the dataset card with the new tasks categories in another PR if you don't mind",
"Hi @lhoestq,\r\n\r\nThanks, what all do I need to add to merge this PR?",
"We can merge this one once the PR on dataset sizes is merged on `datasets-tagging` ;)",
"Hi @lhoestq,\r\n\r\nOne problem with this approach is that for datasets like `ccaligned_multilingual`, the infos won't be complete because we don't have all configs. In that case, people might face trouble finding the datatset using the tag. Although, they probably won't be checking the size tag for a dataset like that.\r\n\r\nWhat do you think?\r\n\r\nCC @theo-m ",
"For datasets like `ccaligned_multilingual` it's important to have all the tags for users to search and find it. Currently is has the full list of tags (without the config names). So you can actually find the dataset, but you don't know what tag correspond to what configuration. "
] | 1,616,025,756,000 | 1,616,519,470,000 | 1,616,519,470,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2074",
"html_url": "https://github.com/huggingface/datasets/pull/2074",
"diff_url": "https://github.com/huggingface/datasets/pull/2074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2074.patch"
} | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
    if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
        infos = {}
        stats = {}
        st = ''
        with open(f'datasets/{dataset}/README.md') as f:
            d = f.read()
        start_dash = d.find('---') + 3
        end_dash = d[start_dash:].find('---') + 3
        rest_text = d[end_dash + 3:]
        try:
            full_yaml = OmegaConf.create(d[start_dash:end_dash])
            readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
        except Exception as e:
            print(e)
            continue
        try:
            with open(f'datasets/{dataset}/dataset_infos.json') as f:
                data = json.load(f)
        except Exception as e:
            print(e)
            continue  # Skip those without infos.
        done_set = set([])
        num_keys = len(data.keys())
        for keys in data:
            # dataset = load_dataset('opus100', f'{dirs}')
            total = 0
            for split in data[keys]['splits']:
                total = total + data[keys]['splits'][split]['num_examples']
            if total < 1000:
                st += "- n<1K" + '\n'
                infos[keys] = ["n<1K"]
            elif total >= 1000 and total < 10000:
                infos[keys] = ["1K<n<10K"]
            elif total >= 10000 and total < 100000:
                infos[keys] = ["10K<n<100K"]
            elif total >= 100000 and total < 1000000:
                infos[keys] = ["100K<n<1M"]
            elif total >= 1000000 and total < 10000000:
                infos[keys] = ["1M<n<10M"]
            elif total >= 10000000 and total < 100000000:
                infos[keys] = ["10M<n<100M"]
            elif total >= 100000000 and total < 1000000000:
                infos[keys] = ["100M<n<1B"]
            elif total >= 1000000000 and total < 10000000000:
                infos[keys] = ["1B<n<10B"]
            elif total >= 10000000000 and total < 100000000000:
                infos[keys] = ["10B<n<100B"]
            elif total >= 100000000000 and total < 1000000000000:
                infos[keys] = ["100B<n<1T"]
            else:
                infos[keys] = ["n>1T"]
            done_set = done_set.union(infos[keys])
        if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
            print('-' * 30)
            print(done_set)
            print(f"Changing Full YAML for {dataset}")
            print(OmegaConf.to_yaml(full_yaml))
            if len(done_set) == 1:
                full_yaml['size_categories'] = list(done_set)
            else:
                full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
            full_yaml_string = OmegaConf.to_yaml(full_yaml)
            print('-' * 30)
            print(full_yaml_string)
            inp = input('Do you wish to continue?(Y/N)')
            if inp == 'Y':
                with open(f'./datasets/{dataset}/README.md', 'w') as f:
                    f.write('---\n')
                    f.write(full_yaml_string)
                    f.write('---')
                    f.write(rest_text)
            else:
                break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
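For illustration, on a multi-config dataset the script ends up with a per-config mapping along these lines before writing it back into the README front matter (the config names below are invented):
```python
# Hypothetical end state for a two-config dataset; OmegaConf.to_yaml would
# serialize this into the size_categories block of the YAML front matter.
from omegaconf import OmegaConf

full_yaml = OmegaConf.create({"size_categories": {"de-en": ["1M<n<10M"], "fr-en": ["100K<n<1M"]}})
print(OmegaConf.to_yaml(full_yaml))
```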
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multilingual` has only 5 out of several configs present, and the infos only contain information about those. Hence, I have skipped a few datasets in the code; if there are more such datasets, I'll ignore them too. | https://api.github.com/repos/huggingface/datasets/issues/2074/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2073/comments | https://api.github.com/repos/huggingface/datasets/issues/2073/events | https://github.com/huggingface/datasets/pull/2073 | 834,192,501 | MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2 | 2,073 | Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,616,016,533,000 | 1,616,058,565,000 | 1,616,058,564,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2073",
"html_url": "https://github.com/huggingface/datasets/pull/2073",
"diff_url": "https://github.com/huggingface/datasets/pull/2073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2073.patch"
} | # What is this PR doing
This PR implements the checks for whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does it. I also added checks for the different `Tensorflow` and `torch` versions. #2068 | https://api.github.com/repos/huggingface/datasets/issues/2073/timeline | null | true |
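A rough sketch of the transformers-style availability check described above (variable names are illustrative; the exact code in `datasets.config` may differ):
```python
# Sketch: respect the USE_TORCH environment variable and only mark torch as
# available if the package can actually be found and its version read.
import importlib.util
import os

try:
    import importlib.metadata as importlib_metadata  # Python 3.8+
except ImportError:
    import importlib_metadata  # backport package

USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
TORCH_AVAILABLE = False
TORCH_VERSION = "N/A"
if USE_TORCH in ("1", "ON", "YES", "AUTO") and importlib.util.find_spec("torch") is not None:
    TORCH_VERSION = importlib_metadata.version("torch")
    TORCH_AVAILABLE = True
```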
https://api.github.com/repos/huggingface/datasets/issues/2072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2072/comments | https://api.github.com/repos/huggingface/datasets/issues/2072/events | https://github.com/huggingface/datasets/pull/2072 | 834,054,837 | MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4 | 2,072 | Fix docstring issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?",
"Sounds good thanks !"
] | 1,616,004,824,000 | 1,616,574,057,000 | 1,616,071,281,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch"
} | Fix docstring issues. | https://api.github.com/repos/huggingface/datasets/issues/2072/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2071/comments | https://api.github.com/repos/huggingface/datasets/issues/2071/events | https://github.com/huggingface/datasets/issues/2071 | 833,950,824 | MDU6SXNzdWU4MzM5NTA4MjQ= | 2,071 | Multiprocessing is slower than single process | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"dupe of #1992"
] | 1,615,997,338,000 | 1,616,058,623,000 | 1,616,058,623,000 | CONTRIBUTOR | null | null | ```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
    set_caching_enabled(False)
    logging.basicConfig(level=logging.DEBUG)
    bc = load_dataset("bookcorpus")
    now = time.time()
    try:
        bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
    except Exception as e:
        print(f"cancelled: {e}")
    elapsed = time.time() - now
    print(elapsed)
```
Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+) | https://api.github.com/repos/huggingface/datasets/issues/2071/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2070/comments | https://api.github.com/repos/huggingface/datasets/issues/2070/events | https://github.com/huggingface/datasets/issues/2070 | 833,799,035 | MDU6SXNzdWU4MzM3OTkwMzU= | 2,070 | ArrowInvalid issue for squad v2 dataset | {
"login": "MichaelYxWang",
"id": 29818977,
"node_id": "MDQ6VXNlcjI5ODE4OTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelYxWang",
"html_url": "https://github.com/MichaelYxWang",
"followers_url": "https://api.github.com/users/MichaelYxWang/followers",
"following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelYxWang/orgs",
"repos_url": "https://api.github.com/users/MichaelYxWang/repos",
"events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelYxWang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`."
] | 1,615,989,109,000 | 1,628,099,836,000 | 1,628,099,836,000 | NONE | null | null | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I get the following error:
`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`
My code is as follows:
```
def generate_candidate_questions(examples):
    val_questions = examples["question"]
    candididate_questions = random.sample(datasets["train"]["question"], len(val_questions))
    candididate_questions = [x[:max_length] for x in candididate_questions]
    return candididate_questions


def prepare_validation_features(examples, use_mixing=False):
    pad_on_right = tokenizer.padding_side == "right"
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    if use_mixing:
        candidate_questions = generate_candidate_questions(examples)
        tokenized_candidates = tokenizer(
            candidate_questions if pad_on_right else examples["context"],
            examples["context"] if pad_on_right else candidate_questions,
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_length,
            stride=doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length",
        )
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    tokenized_examples["example_id"] = []
    if use_mixing:
        tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"]
        tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"]
        tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"]
    for i in range(len(tokenized_examples["input_ids"])):
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]
    return tokenized_examples


validation_features = datasets["validation"].map(
    lambda xs: prepare_validation_features(xs, True),
    batched=True,
    remove_columns=datasets["validation"].column_names
)
```
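For reference, the constraint mentioned in the comment above can be reproduced with a toy example (hypothetical data, unrelated to the notebook): every column returned by a batched `map` function must have the same number of rows.
```python
# Toy illustration of the batched-map contract; mismatched column lengths
# trigger the same kind of pyarrow.lib.ArrowInvalid error.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

def mismatched(batch):
    return {"kept": batch["text"], "dropped_one": batch["text"][:-1]}  # 4 rows vs 3 rows

# ds.map(mismatched, batched=True)  # raises ArrowInvalid, like the error above
```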
I guess this might happen because of `batched=True`. I see similar issues in this repo related to Arrow table length mismatch errors, but in those cases the numbers vary a lot. In my case, this error always happens when the expected and actual lengths are very close. Thanks for the help! | https://api.github.com/repos/huggingface/datasets/issues/2070/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2069/comments | https://api.github.com/repos/huggingface/datasets/issues/2069/events | https://github.com/huggingface/datasets/pull/2069 | 833,768,926 | MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw | 2,069 | Add and fix docstring for NamedSplit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Maybe we should add some other split classes?"
] | 1,615,987,168,000 | 1,616,063,260,000 | 1,616,063,260,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch"
} | Add and fix docstring for `NamedSplit`, which was missing. | https://api.github.com/repos/huggingface/datasets/issues/2069/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2068/comments | https://api.github.com/repos/huggingface/datasets/issues/2068/events | https://github.com/huggingface/datasets/issues/2068 | 833,602,832 | MDU6SXNzdWU4MzM2MDI4MzI= | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | {
"login": "sivakhno",
"id": 1651457,
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sivakhno",
"html_url": "https://github.com/sivakhno",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ",
"Could paste the code you use the start your training job and the fine-tuning script you run? ",
"@sivakhno this should be now fixed in `datasets>=1.5.0`. ",
"@philschmid Recently released tensorflow-macos seems to be missing. ",
"I've created a PR to add this. "
] | 1,615,975,467,000 | 1,623,646,050,000 | 1,623,646,050,000 | NONE | null | null | I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3.
By running container interactively I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also as a first line in the data loading module I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated, as I am stuck.
Many Thanks!
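A quick sanity check that can be run inside the container, based on the resolution suggested in the comments above (upgrade `datasets`); the attribute name follows the `config.py` file linked earlier:
```python
# Sketch: confirm the installed datasets version and that it detects torch.
import datasets
import torch

print(datasets.__version__)              # the thread suggests >= 1.5.0
print(datasets.config.TORCH_AVAILABLE)   # must be True for set_format(type="torch")
```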
| https://api.github.com/repos/huggingface/datasets/issues/2068/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2067/comments | https://api.github.com/repos/huggingface/datasets/issues/2067/events | https://github.com/huggingface/datasets/issues/2067 | 833,559,940 | MDU6SXNzdWU4MzM1NTk5NDA= | 2,067 | Multiprocessing windows error | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..",
"```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```",
"\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: 
{'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```",
"Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you",
"if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless",
"Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https://stackoverflow.com/a/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?",
"It's already on the newest version",
"```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in <module>\r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0/27771 [00:00<?, ?ex/s] \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\nOSError: [Errno 22] Invalid argument: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n```\r\n\r\nI was adding freeze support before calling the mapping function like this\r\nif __name__ == '__main__':\r\n freeze_support()\r\n dataset.map(....)",
"Usually OSError of an arrow file on windows means that the file is currently opened as a dataset object, so you can't overwrite it until the dataset object falls out of scope.\r\nCan you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?",
"Now I understand\r\nThe error occures because the script got restarted in another thread, so the object is already loaded.\r\nStill don't have an idea why a new thread starts the whole script again"
] | 1,615,972,348,000 | 1,628,099,948,000 | 1,628,099,948,000 | NONE | null | null | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example, this happens at the map_to_array step.
An error occurs because the cache file already exists and Windows throws an error. After this, the log crashes into a loop. | https://api.github.com/repos/huggingface/datasets/issues/2067/timeline | null | false |
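A minimal sketch of the `if __name__ == '__main__'` guard discussed in the comments above (the dataset, column and `num_proc` value are just examples):
```python
# On Windows, multiprocessing spawns workers that re-import the script, so any
# map call with num_proc must live under the __main__ guard.
from datasets import load_dataset

def remove_special_characters(batch):
    # placeholder preprocessing; the real function comes from the wav2vec2 blog post
    batch["sentence"] = batch["sentence"].lower()
    return batch

if __name__ == "__main__":
    common_voice_train = load_dataset("common_voice", "de", split="train")
    common_voice_train = common_voice_train.map(remove_special_characters, num_proc=4)
```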
https://api.github.com/repos/huggingface/datasets/issues/2066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2066/comments | https://api.github.com/repos/huggingface/datasets/issues/2066/events | https://github.com/huggingface/datasets/pull/2066 | 833,480,551 | MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz | 2,066 | Fix docstring rendering of Dataset/DatasetDict.from_csv args | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,965,790,000 | 1,615,972,881,000 | 1,615,972,881,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2066",
"html_url": "https://github.com/huggingface/datasets/pull/2066",
"diff_url": "https://github.com/huggingface/datasets/pull/2066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2066.patch"
} | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | https://api.github.com/repos/huggingface/datasets/issues/2066/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2065/comments | https://api.github.com/repos/huggingface/datasets/issues/2065/events | https://github.com/huggingface/datasets/issues/2065 | 833,291,432 | MDU6SXNzdWU4MzMyOTE0MzI= | 2,065 | Only user permission of saved cache files, not group | {
"login": "lorr1",
"id": 57237365,
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorr1",
"html_url": "https://github.com/lorr1",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"repos_url": "https://api.github.com/users/lorr1/repos",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)",
"Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?",
"Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved",
"Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. ",
"Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?",
"This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?",
"You can just check the permission of `dataset.cache_files[0]` imo",
"> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions",
"Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?",
"Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?",
"Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?",
"Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)",
"I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!",
"Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?",
"Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n",
"You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.",
"FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.",
"Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?",
"That sounds very right to me, @bhavitvyamalik "
] | 1,615,940,422,000 | 1,620,629,129,000 | 1,620,629,129,000 | NONE | null | null | Hello,
It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets only the user's permissions and none of the group permissions. As we share data files across members of our team, this is causing a bit of an issue, since we have to continually reset the permissions of the files (roughly the manual pass sketched below).
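A rough sketch of that manual reset (the dataset name and the preprocessing function are placeholders; `cache_files` is just how we locate the freshly written arrow files):
```python
import os
import stat

from datasets import load_dataset


def preprocess(example):
    example["text"] = example["text"].lower()  # stand-in for our real preprocessing
    return example


ds = load_dataset("some_shared_dataset", split="train")  # placeholder dataset name
ds = ds.map(preprocess)  # the new cache-<fingerprint>.arrow file ends up with mode 0600

# make the fresh cache files group-readable again
for cache_file in ds.cache_files:
    os.chmod(cache_file["filename"], stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)  # 0o644
```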
Do you know of any way around this, or a way to set the permissions correctly? | https://api.github.com/repos/huggingface/datasets/issues/2065/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2064/comments | https://api.github.com/repos/huggingface/datasets/issues/2064/events | https://github.com/huggingface/datasets/pull/2064 | 833,002,360 | MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1 | 2,064 | Fix ted_talks_iwslt version error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,913,025,000 | 1,615,917,608,000 | 1,615,917,608,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch"
} | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
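For illustration only (not the exact diff in this PR): the clash happens because the config class passes `version` explicitly to `super().__init__` while the builder also injects `version` through `**kwargs`, so the duplicate has to be dropped first. A minimal sketch, with a hypothetical config class name:
```python
import datasets


class TedTalksIWSLTConfig(datasets.BuilderConfig):  # name is illustrative, not necessarily the class in the script
    def __init__(self, language_pair=(None, None), year=None, **kwargs):
        # the builder may already have put `version` into kwargs,
        # so drop it before passing our own explicit version
        kwargs.pop("version", None)
        super().__init__(version=datasets.Version("1.1.0", ""), **kwargs)
        self.language_pair = language_pair
        self.year = year
```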
Fixes #2059 | https://api.github.com/repos/huggingface/datasets/issues/2064/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2063/comments | https://api.github.com/repos/huggingface/datasets/issues/2063/events | https://github.com/huggingface/datasets/pull/2063 | 832,993,705 | MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5 | 2,063 | [Common Voice] Adapt dataset script so that no manual data download is actually needed | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,912,424,000 | 1,615,974,172,000 | 1,615,974,157,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2063",
"html_url": "https://github.com/huggingface/datasets/pull/2063",
"diff_url": "https://github.com/huggingface/datasets/pull/2063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2063.patch"
} | This PR changes the dataset script so that no manual data dir is needed anymore. | https://api.github.com/repos/huggingface/datasets/issues/2063/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2062/comments | https://api.github.com/repos/huggingface/datasets/issues/2062/events | https://github.com/huggingface/datasets/pull/2062 | 832,625,483 | MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz | 2,062 | docs: fix missing quotation | {
"login": "neal2018",
"id": 46561493,
"node_id": "MDQ6VXNlcjQ2NTYxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neal2018",
"html_url": "https://github.com/neal2018",
"followers_url": "https://api.github.com/users/neal2018/followers",
"following_url": "https://api.github.com/users/neal2018/following{/other_user}",
"gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neal2018/subscriptions",
"organizations_url": "https://api.github.com/users/neal2018/orgs",
"repos_url": "https://api.github.com/users/neal2018/repos",
"events_url": "https://api.github.com/users/neal2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/neal2018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,889,274,000 | 1,615,972,917,000 | 1,615,972,917,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2062",
"html_url": "https://github.com/huggingface/datasets/pull/2062",
"diff_url": "https://github.com/huggingface/datasets/pull/2062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2062.patch"
} | The JSON code example is missing a quote. | https://api.github.com/repos/huggingface/datasets/issues/2062/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2061/comments | https://api.github.com/repos/huggingface/datasets/issues/2061/events | https://github.com/huggingface/datasets/issues/2061 | 832,596,228 | MDU6SXNzdWU4MzI1OTYyMjg= | 2,061 | Cannot load udpos subsets from xtreme dataset using load_dataset() | {
"login": "adzcodez",
"id": 55791365,
"node_id": "MDQ6VXNlcjU1NzkxMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adzcodez",
"html_url": "https://github.com/adzcodez",
"followers_url": "https://api.github.com/users/adzcodez/followers",
"following_url": "https://api.github.com/users/adzcodez/following{/other_user}",
"gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions",
"organizations_url": "https://api.github.com/users/adzcodez/orgs",
"repos_url": "https://api.github.com/users/adzcodez/repos",
"events_url": "https://api.github.com/users/adzcodez/events{/privacy}",
"received_events_url": "https://api.github.com/users/adzcodez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.",
"Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n",
"@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme",
"I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ",
"Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ",
"Closed by #2466."
] | 1,615,887,133,000 | 1,624,017,251,000 | 1,624,017,250,000 | NONE | null | null | Hello,
I am trying to load the udpos English subset of the xtreme dataset, but it fails with an error during loading. I am using datasets v1.4.1, installed via pip. I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) works fine. I have also tried on Colab and got the same error.
Reprex is:
`from datasets import load_dataset `
`dataset = load_dataset('xtreme', 'udpos.English')`
The error is:
`KeyError: '_'`
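For context, the failure is in `ClassLabel.str2int`, and it can be reproduced in isolation (the tag list here is truncated and only illustrative):
```python
from datasets import ClassLabel

pos_tags = ClassLabel(names=["ADJ", "ADP", "ADV", "NOUN", "VERB"])  # truncated, illustrative tag set
pos_tags.str2int("NOUN")  # works
pos_tags.str2int("_")     # KeyError: '_' because the underscore tag is missing from the label list
```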
The full traceback is:
KeyError Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
738
739 # Download and prepare data
--> 740 builder_instance.download_and_prepare(
741 download_config=download_config,
742 download_mode=download_mode,
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
576 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
577 if not downloaded_from_gcs:
--> 578 self._download_and_prepare(
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
654 try:
655 # Prepare split will record examples associated to the split
--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)
657 except OSError as e:
658 raise OSError(
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
978 ):
--> 979 example = self.info.features.encode_example(record)
980 writer.write(example)
981 finally:
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
946 def encode_example(self, example):
947 example = cast_to_python_objects(example)
--> 948 return encode_nested_example(self, example)
949
950 def encode_batch(self, batch):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
840 # Nested structures: we allow dict, list/tuples, sequences
841 if isinstance(schema, dict):
--> 842 return {
843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
841 if isinstance(schema, dict):
842 return {
--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
845 elif isinstance(schema, (list, tuple)):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870 return schema.encode_example(obj)
871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
872 return obj
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
647 # If a string is given, convert to associated integer
648 if isinstance(example_data, str):
--> 649 example_data = self.str2int(example_data)
650
651 # Allowing -1 to mean no label.
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
605 if value not in self._str2int:
606 value = value.strip()
--> 607 output.append(self._str2int[str(value)])
608 else:
609 # No names provided, try to integerize
KeyError: '_'
| https://api.github.com/repos/huggingface/datasets/issues/2061/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2060/comments | https://api.github.com/repos/huggingface/datasets/issues/2060/events | https://github.com/huggingface/datasets/pull/2060 | 832,588,591 | MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx | 2,060 | Filtering refactor | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe for `.map`, I'll look it up.",
"turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem.",
"tracemalloc outputs from this script:\r\n\r\n```python\r\nimport logging\r\nimport sys\r\nimport time\r\nimport tracemalloc\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n snapshot1 = tracemalloc.take_snapshot()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n exit(1)\r\n snapshot2 = tracemalloc.take_snapshot()\r\n tracemalloc.stop()\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n\r\n print(\"[ Top 10 differences ]\")\r\n for stat in top_stats[:10]:\r\n print(stat)\r\n\r\n```\r\n\r\n\r\nThis branch:\r\n\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]\r\n 815.6356580257416\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n 
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nOn master:\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|███████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]\r\n 3738.870892047882\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 
KiB (+489 KiB), count=9569 (+4535), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nI'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? ",
"Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).\r\nWhat's the length of the resulting dataset ?\r\nYou can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow",
"```diff\r\ndiff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py\r\nindex 4b9efd4e..a862c204 100644\r\n--- a/benchmarks/benchmark_filter.py\r\n+++ b/benchmarks/benchmark_filter.py\r\n@@ -1,6 +1,9 @@\r\n import logging\r\n import sys\r\n import time\r\n+import tracemalloc\r\n+\r\n+import pyarrow as pa\r\n \r\n from datasets import load_dataset, set_caching_enabled\r\n \r\n@@ -9,13 +12,28 @@ if __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n \r\n+ tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n \r\n now = time.time()\r\n try:\r\n+ snapshot1 = tracemalloc.take_snapshot()\r\n+ pamem1 = pa.total_allocated_bytes()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n+ pamem2 = pa.total_allocated_bytes()\r\n+ snapshot2 = tracemalloc.take_snapshot()\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n+ exit(1)\r\n+ tracemalloc.stop()\r\n elapsed = time.time() - now\r\n \r\n print(elapsed)\r\n+ top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n+\r\n+ print(\"[ Top 10 differences ]\")\r\n+ for stat in top_stats[:10]:\r\n+ print(stat)\r\n+\r\n+ print(\"[ pyarrow reporting ]\")\r\n+ print(f\"before: ({pamem1}) after: ({pamem2})\")\r\n```\r\n\r\nthis yields 0-0, does not seem like a good tool 😛 and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html)",
"Personally if I use your script to benchmark on this branch\r\n```python\r\nbc = load_dataset(\"bookcorpus\", split=\"train[:1%]\")\r\nbc = bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n```\r\n\r\nthen I get\r\n```\r\n[ pyarrow reporting ]\r\nbefore: (0) after: (15300672)\r\n```\r\n\r\nMaybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n```python\r\nbc[\"train\"] = bc[\"train\"].filter(...)\r\n```\r\nCan you try again on your side just to make sure ?\r\n\r\nEven if the documentation doesn't say much, `pa.total_allocated_bytes` if pretty useful, and also very consistent.\r\nIt tracks the number of bytes used for arrow data.",
"> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n> \r\n> ```python\r\n> bc[\"train\"] = bc[\"train\"].filter(...)\r\n> ```\r\nNice catch! I get 1.74GB for this branch",
"Looks like we may need to write the filtered table on the disk then.\r\n\r\nThe other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon",
"From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E)"
] | 1,615,886,610,000 | 1,617,183,528,000 | null | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch"
} | fix https://github.com/huggingface/datasets/issues/2032
Benchmarking is somewhat inconclusive; currently running on `bookcorpus` with:
```python
import time

from datasets import load_dataset

bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
This branch does it in 233 seconds; master takes 1409 seconds. | https://api.github.com/repos/huggingface/datasets/issues/2060/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2059/comments | https://api.github.com/repos/huggingface/datasets/issues/2059/events | https://github.com/huggingface/datasets/issues/2059 | 832,579,156 | MDU6SXNzdWU4MzI1NzkxNTY= | 2,059 | Error while following docs to load the `ted_talks_iwslt` dataset | {
"login": "ekdnam",
"id": 40426312,
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekdnam",
"html_url": "https://github.com/ekdnam",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"@skyprince999 as you authored the PR for this dataset, any comments?",
"This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)"
] | 1,615,885,939,000 | 1,615,917,631,000 | 1,615,917,607,000 | NONE | null | null | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
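For clarity, the same `TypeError` appears whenever a keyword argument is passed both explicitly and via `**kwargs`, independently of the library:
```python
def make_config(version=None, **kwargs):
    return version, kwargs


kwargs = {"version": "1.0.0", "language_pair": ("it", "pl")}
make_config(version="1.1.0", **kwargs)
# TypeError: make_config() got multiple values for keyword argument 'version'
```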
How to resolve this?
PS: Thanks a lot @huggingface team for creating this great library! | https://api.github.com/repos/huggingface/datasets/issues/2059/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2058/comments | https://api.github.com/repos/huggingface/datasets/issues/2058/events | https://github.com/huggingface/datasets/issues/2058 | 832,159,844 | MDU6SXNzdWU4MzIxNTk4NDQ= | 2,058 | Is it possible to convert a `tfds` to HuggingFace `dataset`? | {
"login": "abarbosa94",
"id": 6608232,
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abarbosa94",
"html_url": "https://github.com/abarbosa94",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,615,839,527,000 | 1,615,839,527,000 | null | CONTRIBUTOR | null | null | I was having some weird bugs with the HuggingFace version of the `C4` dataset, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :)
I can also open a new issue in the future reporting the bug I'm getting with `datasets.load_dataset('c4','en')`, if you think that would be useful. Roughly, the kind of conversion I have in mind is sketched below (untested).
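(An untested sketch; `c4/en` in `tfds` needs extra preparation, so a small tfds text dataset is easier to try this with, and the tensor decoding is only a rough guess.)
```python
import tensorflow as tf
import tensorflow_datasets as tfds
from datasets import Dataset

tf_ds = tfds.load("c4/en", split="train")  # C4 itself needs extra preparation; swap in a small tfds text dataset to test

# materialize a slice and turn the tensors into plain Python values
examples = []
for ex in tf_ds.take(1000):
    examples.append(
        {k: v.numpy().decode("utf-8") if v.dtype == tf.string else v.numpy() for k, v in ex.items()}
    )

hf_ds = Dataset.from_dict({k: [ex[k] for ex in examples] for k in examples[0]})
```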
Thanks!
| https://api.github.com/repos/huggingface/datasets/issues/2058/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2057/comments | https://api.github.com/repos/huggingface/datasets/issues/2057/events | https://github.com/huggingface/datasets/pull/2057 | 832,120,522 | MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0 | 2,057 | update link to ZEST dataset | {
"login": "matt-peters",
"id": 619844,
"node_id": "MDQ6VXNlcjYxOTg0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt-peters",
"html_url": "https://github.com/matt-peters",
"followers_url": "https://api.github.com/users/matt-peters/followers",
"following_url": "https://api.github.com/users/matt-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions",
"organizations_url": "https://api.github.com/users/matt-peters/orgs",
"repos_url": "https://api.github.com/users/matt-peters/repos",
"events_url": "https://api.github.com/users/matt-peters/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt-peters/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,836,177,000 | 1,615,914,388,000 | 1,615,914,388,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch"
} | Updating the link as the original one is no longer working. | https://api.github.com/repos/huggingface/datasets/issues/2057/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2056/comments | https://api.github.com/repos/huggingface/datasets/issues/2056/events | https://github.com/huggingface/datasets/issues/2056 | 831,718,397 | MDU6SXNzdWU4MzE3MTgzOTc= | 2,056 | issue with opus100/en-fr dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ",
"Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```",
"as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one."
] | 1,615,807,962,000 | 1,615,909,740,000 | 1,615,909,739,000 | NONE | null | null | Hi
I am running the run_mlm.py script from the huggingface repo with the opus100/fr-en pair, and I am getting this error; note that it occurs only for this pair and not for the other pairs. Any idea why this is happening and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
``` | https://api.github.com/repos/huggingface/datasets/issues/2056/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2055/comments | https://api.github.com/repos/huggingface/datasets/issues/2055/events | https://github.com/huggingface/datasets/issues/2055 | 831,684,312 | MDU6SXNzdWU4MzE2ODQzMTI= | 2,055 | is there a way to override a dataset object saved with save_to_disk? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ",
"I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.",
"Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?"
] | 1,615,805,453,000 | 1,616,385,977,000 | 1,616,385,977,000 | NONE | null | null | At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object? | https://api.github.com/repos/huggingface/datasets/issues/2055/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2054/comments | https://api.github.com/repos/huggingface/datasets/issues/2054/events | https://github.com/huggingface/datasets/issues/2054 | 831,597,665 | MDU6SXNzdWU4MzE1OTc2NjU= | 2,054 | Could not find file for ZEST dataset | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its fixed!"
] | 1,615,799,518,000 | 1,620,034,224,000 | 1,620,034,224,000 | CONTRIBUTOR | null | null | I am trying to use zest dataset from Allen AI using below code in colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
``` | https://api.github.com/repos/huggingface/datasets/issues/2054/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2053/comments | https://api.github.com/repos/huggingface/datasets/issues/2053/events | https://github.com/huggingface/datasets/pull/2053 | 831,151,728 | MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2 | 2,053 | Add bAbI QA tasks | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.",
"Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is a lot.\r\nMaybe this dataset can work with parameters `type` and `task_no` ?\r\nYou can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.\r\nAlso feel free to add an example in the dataset card of how to load the other configurations\r\n```\r\nload_dataset(\"babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nfor example, and with a list of the possible combinations.\r\n\r\n> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.\r\n\r\nIt looks appropriate, thanks :)",
"Hi @lhoestq \r\n\r\nI'm unable to test it locally using:\r\n```python\r\nload_dataset(\"datasets/babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nIt raises an error:\r\n```python\r\nTypeError: __init__() got an unexpected keyword argument 'type'\r\n```\r\nWill this be possible only after merging? Or am I missing something here?",
"Can you try adding this class attribute to `BabiQa` ?\r\n```python\r\nBUILDER_CONFIG_CLASS = BabiQaConfig\r\n```\r\nThis should fix the TypeError issue you got",
"My bad. Thanks a lot!",
"Hi @lhoestq \r\n\r\nI have added the changes. Only the \"qa1\" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nDoes this look good now?"
] | 1,615,727,079,000 | 1,617,021,708,000 | 1,617,021,708,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2053",
"html_url": "https://github.com/huggingface/datasets/pull/2053",
"diff_url": "https://github.com/huggingface/datasets/pull/2053.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2053.patch"
} | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| https://api.github.com/repos/huggingface/datasets/issues/2053/timeline | null | true |
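For readers following the `TypeError: __init__() got an unexpected keyword argument 'type'` exchange in the comments above, a hedged sketch of the `BUILDER_CONFIG_CLASS` pattern being described, which is what allows `load_dataset("babi_qa", type="hn", task_no="qa1")`-style keyword arguments; class and parameter names are illustrative and the builder methods are omitted:

```python
import datasets

class BabiQaConfig(datasets.BuilderConfig):
    """Config carrying the `type` and `task_no` parameters."""

    def __init__(self, type=None, task_no=None, **kwargs):
        super().__init__(**kwargs)
        self.type = type
        self.task_no = task_no

class BabiQa(datasets.GeneratorBasedBuilder):
    # Without this attribute, extra keyword arguments such as `type` raise a TypeError.
    BUILDER_CONFIG_CLASS = BabiQaConfig
    BUILDER_CONFIGS = [
        BabiQaConfig(name="en-qa1", version=datasets.Version("1.2.0"), type="en", task_no="qa1"),
    ]
    # _info(), _split_generators() and _generate_examples() omitted for brevity.
```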
https://api.github.com/repos/huggingface/datasets/issues/2052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2052/comments | https://api.github.com/repos/huggingface/datasets/issues/2052/events | https://github.com/huggingface/datasets/issues/2052 | 831,135,704 | MDU6SXNzdWU4MzExMzU3MDQ= | 2,052 | Timit_asr dataset repeats examples | {
"login": "fermaat",
"id": 7583522,
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fermaat",
"html_url": "https://github.com/fermaat",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"repos_url": "https://api.github.com/users/fermaat/repos",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```",
"Ty!"
] | 1,615,722,223,000 | 1,615,804,636,000 | 1,615,804,636,000 | NONE | null | null | Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same
Steps to reproduce
As an example, the following code snippet prints the `text` column of the training split:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
The same behavior happens for other columns
Expected behavior:
Different info on the actual timit_asr dataset
Actual behavior:
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different
Debug info
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64
Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr | https://api.github.com/repos/huggingface/datasets/issues/2052/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2051/comments | https://api.github.com/repos/huggingface/datasets/issues/2051/events | https://github.com/huggingface/datasets/pull/2051 | 831,027,021 | MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1 | 2,051 | Add MDD Dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\n\r\nI have added changes from review.",
"Thanks for approving :)"
] | 1,615,680,065,000 | 1,616,152,544,000 | 1,616,149,919,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2051",
"html_url": "https://github.com/huggingface/datasets/pull/2051",
"diff_url": "https://github.com/huggingface/datasets/pull/2051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2051.patch"
} | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
- **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf)
- **Data:** https://research.fb.com/downloads/babi/
- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project".
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
**Note**: I haven't included the following from the data files: `entities` (the file containing the list of all entities in the first three subtasks), `dictionary` (the dictionary of words they use in their models), `movie_kb` (contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them? | https://api.github.com/repos/huggingface/datasets/issues/2051/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2050/comments | https://api.github.com/repos/huggingface/datasets/issues/2050/events | https://github.com/huggingface/datasets/issues/2050 | 831,006,551 | MDU6SXNzdWU4MzEwMDY1NTE= | 2,050 | Build custom dataset to fine-tune Wav2Vec2 | {
"login": "Omarnabk",
"id": 72882909,
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omarnabk",
"html_url": "https://github.com/Omarnabk",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"@lhoestq - We could simply use the \"general\" json dataset for this no? ",
"Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.",
"Many thanks! that was what I was looking for. "
] | 1,615,672,870,000 | 1,615,800,448,000 | 1,615,800,448,000 | NONE | null | null | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
| https://api.github.com/repos/huggingface/datasets/issues/2050/timeline | null | false |
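For illustration, a sketch (not from the original thread) of the suggestion above — loading a JSON manifest with the generic json loader and adding absolute audio paths with `.map()`. The manifest file name, field names and directory are assumptions:

```python
import os
from datasets import load_dataset

AUDIO_DIR = "/data/my_corpus/clips"  # hypothetical directory containing the audio files

# Assumes train.json is a JSON-lines file with one {"file": ..., "text": ...} object per line.
train_dataset = load_dataset("json", data_files={"train": "train.json"}, split="train")

def add_audio_path(example):
    # Turn the relative file name from the manifest into an absolute path.
    example["file"] = os.path.join(AUDIO_DIR, example["file"])
    return example

train_dataset = train_dataset.map(add_audio_path)
print(train_dataset[0])
```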
https://api.github.com/repos/huggingface/datasets/issues/2049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2049/comments | https://api.github.com/repos/huggingface/datasets/issues/2049/events | https://github.com/huggingface/datasets/pull/2049 | 830,978,687 | MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0 | 2,049 | Fix text-classification tags | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM, thanks for fixing."
] | 1,615,665,102,000 | 1,615,909,666,000 | 1,615,909,666,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2049",
"html_url": "https://github.com/huggingface/datasets/pull/2049",
"diff_url": "https://github.com/huggingface/datasets/pull/2049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2049.patch"
} | There are different tags for text classification right now: `text-classification` and `text_classification`:
![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png).
This PR fixes it.
| https://api.github.com/repos/huggingface/datasets/issues/2049/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2048/comments | https://api.github.com/repos/huggingface/datasets/issues/2048/events | https://github.com/huggingface/datasets/issues/2048 | 830,953,431 | MDU6SXNzdWU4MzA5NTM0MzE= | 2,048 | github is not always available - probably need a back up | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,615,658,612,000 | 1,615,658,612,000 | null | CONTRIBUTOR | null | null | Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system that replicates the data on another host, and fall back to it if gh isn't reachable? Perhaps gh can be the master and the replica a slave - so there is only one true source. | https://api.github.com/repos/huggingface/datasets/issues/2048/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2047/comments | https://api.github.com/repos/huggingface/datasets/issues/2047/events | https://github.com/huggingface/datasets/pull/2047 | 830,626,430 | MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3 | 2,047 | Multilingual dIalogAct benchMark (miam) | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)",
"I will run isort again. Hopefully it resolves the current check_code_quality test failure.",
"Once the review period is over, feel free to open a PR to add all the missing information ;)",
"Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include."
] | 1,615,590,175,000 | 1,616,495,794,000 | 1,616,150,833,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2047",
"html_url": "https://github.com/huggingface/datasets/pull/2047",
"diff_url": "https://github.com/huggingface/datasets/pull/2047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2047.patch"
} | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is assocated with a publication currently under review. We will update the dataset with full citations once the review period is over. | https://api.github.com/repos/huggingface/datasets/issues/2047/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2046/comments | https://api.github.com/repos/huggingface/datasets/issues/2046/events | https://github.com/huggingface/datasets/issues/2046 | 830,423,033 | MDU6SXNzdWU4MzA0MjMwMzM= | 2,046 | add_faisis_index gets very slow when doing it interatively | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?",
"Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ",
"Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls",
"Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n![image](https://user-images.githubusercontent.com/16892570/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n",
"@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips",
"@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?",
"It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).",
"@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?",
"When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)",
"Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ",
"@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. "
] | 1,615,580,838,000 | 1,616,624,951,000 | 1,616,624,951,000 | NONE | null | null | As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. It usually takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now it usually takes 5 hrs. Is this normal? Any way to make this process faster?
@lhoestq
```
def training_step(self, batch, batch_idx) -> Dict:
if (not batch_idx==0) and (batch_idx%5==0):
print("******************************************************")
ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder
model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU
model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff
list_of_gpus = ['cuda:2','cuda:3']
c_dir='/custom/cache/dir'
kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir)
print(kb_dataset)
n=len(list_of_gpus) #nunber of dedicated GPUs
kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
#kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
print(self.trainer.global_rank)
dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank])
output = [None for _ in list_of_gpus]
#self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
dist.all_gather_object(output, dataset_shards)
#This creation and re-initlaization of the new index
if (self.trainer.global_rank==0): #saving will be done in the main process
combined_dataset = concatenate_datasets(output)
passages_path =self.config.passages_path
logger.info("saving the dataset with ")
#combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
combined_dataset.save_to_disk(passages_path)
logger.info("Add faiss index to the dataset that consist of embeddings")
embedding_dataset=combined_dataset
index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
| https://api.github.com/repos/huggingface/datasets/issues/2046/timeline | null | false |
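For illustration, a hedged sketch of the IVF alternative recommended in the comments above (nlist between 4*sqrt(n) and 16*sqrt(n), nprobe around nlist/4); the path, dimensionality and training sample size are placeholders rather than the reporter's actual configuration:

```python
import math

import faiss
from datasets import load_from_disk

passages = load_from_disk("passage_path_directory")  # dataset with a 768-d "embeddings" column
n = len(passages)
nlist = 4 * int(math.sqrt(n))  # faiss guideline: between 4*sqrt(n) and 16*sqrt(n)

# IVF index with flat (exact) storage: much faster to build than HNSW on tens of millions of vectors.
index = faiss.index_factory(768, f"IVF{nlist},Flat", faiss.METRIC_INNER_PRODUCT)
passages.add_faiss_index(
    column="embeddings",
    custom_index=index,
    train_size=min(n, 262_144),  # train the coarse quantizer on a sample of the vectors
)

# Accuracy / speed trade-off: visit roughly a quarter of the nlist cells per query.
passages.get_index("embeddings").faiss_index.nprobe = max(1, nlist // 4)
passages.get_index("embeddings").save("my_ivf_index.faiss")
```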
https://api.github.com/repos/huggingface/datasets/issues/2045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2045/comments | https://api.github.com/repos/huggingface/datasets/issues/2045/events | https://github.com/huggingface/datasets/pull/2045 | 830,351,527 | MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz | 2,045 | Preserve column ordering in Dataset.rename_column | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ",
"I don't know how to trigger it manually, but an empty commit should do the job"
] | 1,615,573,607,000 | 1,615,906,085,000 | 1,615,905,305,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch"
} | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this. | https://api.github.com/repos/huggingface/datasets/issues/2045/timeline | null | true |
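For illustration, a small sketch of the behavior this PR is meant to guarantee, using the same toy dataset as above:

```python
from datasets import Dataset

d = Dataset.from_dict({"sentences": ["s1", "s2"], "label": [0, 1]})
d = d.rename_column("sentences", "text")

# With the ordering preserved, the renamed column stays in its original position.
print(d.column_names)  # expected: ['text', 'label']
```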
https://api.github.com/repos/huggingface/datasets/issues/2044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2044/comments | https://api.github.com/repos/huggingface/datasets/issues/2044/events | https://github.com/huggingface/datasets/pull/2044 | 830,339,905 | MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1 | 2,044 | Add CBT dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\n\r\nI have added changes from the review.",
"Thanks for approving @lhoestq "
] | 1,615,572,259,000 | 1,616,152,213,000 | 1,616,149,755,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2044",
"html_url": "https://github.com/huggingface/datasets/pull/2044",
"diff_url": "https://github.com/huggingface/datasets/pull/2044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2044.patch"
} | This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags.
The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space.
Let me know in case of any issues. | https://api.github.com/repos/huggingface/datasets/issues/2044/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2043/comments | https://api.github.com/repos/huggingface/datasets/issues/2043/events | https://github.com/huggingface/datasets/pull/2043 | 830,279,098 | MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz | 2,043 | Support pickle protocol for dataset splits defined as ReadInstruction | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.",
"Yes right ! I read it wrong.\r\nPerfect then"
] | 1,615,566,911,000 | 1,615,904,738,000 | 1,615,903,505,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch"
} | Fixes #2022 (+ some style fixes) | https://api.github.com/repos/huggingface/datasets/issues/2043/timeline | null | true |
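For illustration, a sketch of the situation this PR addresses — a dataset whose split was given as a `ReadInstruction` being sent through pickle; the dataset name and percentage are arbitrary:

```python
import pickle

from datasets import ReadInstruction, load_dataset

# Split defined via ReadInstruction instead of a plain string such as "train[:5%]".
ds = load_dataset("imdb", split=ReadInstruction("train", to=5, unit="%"))

# With this fix, the ReadInstruction-based split survives a pickle round-trip.
restored = pickle.loads(pickle.dumps(ds))
print(restored.split, restored.num_rows)
```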
https://api.github.com/repos/huggingface/datasets/issues/2042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2042/comments | https://api.github.com/repos/huggingface/datasets/issues/2042/events | https://github.com/huggingface/datasets/pull/2042 | 830,190,276 | MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3 | 2,042 | Fix arrow memory checks issue in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,560,592,000 | 1,615,561,463,000 | 1,615,561,462,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch"
} | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that some arrow objects from other tests were not freeing their memory soon enough, causing the memory verifications in later tests to fail.
Running the garbage collector before checking the arrow memory usage seems to fix this issue.
I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc. | https://api.github.com/repos/huggingface/datasets/issues/2042/timeline | null | true |
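For illustration, a hedged sketch in the spirit of the `assert_arrow_memory_increases` helper described above — collecting garbage before reading `pyarrow.total_allocated_bytes()` so that leftover tables from earlier tests do not skew the check; this is not necessarily the exact implementation that was merged:

```python
import gc
from contextlib import contextmanager

import pyarrow as pa

@contextmanager
def assert_arrow_memory_increases():
    # Free Arrow tables still referenced only by unreachable objects left over from earlier tests.
    gc.collect()
    before = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > before, "expected the Arrow memory usage to increase"

# Usage: wrap code that is expected to allocate an in-memory Arrow table.
with assert_arrow_memory_increases():
    table = pa.table({"col": list(range(100_000))})
print(table.num_rows)
```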
https://api.github.com/repos/huggingface/datasets/issues/2041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2041/comments | https://api.github.com/repos/huggingface/datasets/issues/2041/events | https://github.com/huggingface/datasets/pull/2041 | 830,180,803 | MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw | 2,041 | Doc2dial update data_infos and data_loaders | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,559,969,000 | 1,615,892,960,000 | 1,615,892,960,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2041",
"html_url": "https://github.com/huggingface/datasets/pull/2041",
"diff_url": "https://github.com/huggingface/datasets/pull/2041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2041.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/2041/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2040/comments | https://api.github.com/repos/huggingface/datasets/issues/2040/events | https://github.com/huggingface/datasets/issues/2040 | 830,169,387 | MDU6SXNzdWU4MzAxNjkzODc= | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | {
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.",
"Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'",
"In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```",
"Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! "
] | 1,615,559,220,000 | 1,628,100,043,000 | 1,628,100,043,000 | NONE | null | null | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho...
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
``` | https://api.github.com/repos/huggingface/datasets/issues/2040/timeline | null | false |
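For illustration, a short sketch of the workaround suggested in the comments above, applied to the reporter's setup; the paths are the placeholders used in the thread and must exist on disk for `load_from_disk` to succeed:

```python
from datasets import concatenate_datasets, load_from_disk

PATH_DATA_CLS_A = "drive/MyDrive/data_target_task/dataset_a"  # placeholder paths from the thread
PATH_DATA_CLS_B = "drive/MyDrive/data_target_task/dataset_b"

# flatten_indices() materializes the indices mapping left behind by filter/select,
# after which datasets whose indices lived in memory and on disk can be concatenated.
train_a = load_from_disk(PATH_DATA_CLS_A)["train"].flatten_indices()
train_b = load_from_disk(PATH_DATA_CLS_B)["train"].flatten_indices()

combined = concatenate_datasets([train_a, train_b])
print(combined)
```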
https://api.github.com/repos/huggingface/datasets/issues/2039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2039/comments | https://api.github.com/repos/huggingface/datasets/issues/2039/events | https://github.com/huggingface/datasets/pull/2039 | 830,047,652 | MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3 | 2,039 | Doc2dial rc | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,615,550,188,000 | 1,615,563,156,000 | 1,615,563,156,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch"
} | Added fix to handle the last turn that is a user turn. | https://api.github.com/repos/huggingface/datasets/issues/2039/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | 1,615,549,314,000 | 1,615,912,060,000 | 1,615,912,060,000 | CONTRIBUTOR | null | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It causes the data loader to fail when verifying the download checksums.
Could you please update this file, or point me to how to update it?
Thank you. | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2037/comments | https://api.github.com/repos/huggingface/datasets/issues/2037/events | https://github.com/huggingface/datasets/pull/2037 | 829,919,685 | MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz | 2,037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | {
"login": "miyamonz",
"id": 6331508,
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyamonz",
"html_url": "https://github.com/miyamonz",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it"
] | 1,615,540,920,000 | 1,616,479,696,000 | 1,615,892,482,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch"
} | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required some pip packages, I installed them.
The test results on origin/master and my branch are the same, so I think the failure is not related to my modification, is it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do? | https://api.github.com/repos/huggingface/datasets/issues/2037/timeline | null | true |
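For illustration, a generic sketch (not the wikipedia script itself) of the streaming-parse pattern this PR moves to — clearing each element once it has been consumed so that `iterparse` does not keep the whole dump in memory; the file name is a placeholder:

```python
import xml.etree.ElementTree as ET

def count_pages(xml_path):
    """Stream through a MediaWiki XML dump and count <page> elements with bounded memory."""
    n_pages = 0
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            n_pages += 1
            elem.clear()  # free this subtree right away instead of clearing the whole root
    return n_pages

# print(count_pages("enwiki-latest-pages-articles.xml"))
```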
https://api.github.com/repos/huggingface/datasets/issues/2036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2036/comments | https://api.github.com/repos/huggingface/datasets/issues/2036/events | https://github.com/huggingface/datasets/issues/2036 | 829,909,258 | MDU6SXNzdWU4Mjk5MDkyNTg= | 2,036 | Cannot load wikitext | {
"login": "Gpwner",
"id": 19349207,
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gpwner",
"html_url": "https://github.com/Gpwner",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Solved!"
] | 1,615,540,179,000 | 1,615,797,902,000 | 1,615,797,884,000 | NONE | null | null | When I execute this code
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error. Any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
``` | https://api.github.com/repos/huggingface/datasets/issues/2036/timeline | null | false |
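The issue above was closed with only "Solved!", so the actual fix is not recorded; the traceback points to a network failure reaching raw.githubusercontent.com rather than a problem with the call itself. For reference, a typical invocation once connectivity is available, with an explicit configuration name (the config string below is one of the standard WikiText configs, chosen for illustration):

```python
from datasets import load_dataset

# First use still needs network access to fetch the loading script and the data files.
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1")
print(wikitext)
```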
https://api.github.com/repos/huggingface/datasets/issues/2035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2035/comments | https://api.github.com/repos/huggingface/datasets/issues/2035/events | https://github.com/huggingface/datasets/issues/2035 | 829,475,544 | MDU6SXNzdWU4Mjk0NzU1NDQ= | 2,035 | wiki40b/wikipedia for almost all languages cannot be downloaded | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n",
"Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n![image](https://user-images.githubusercontent.com/19718818/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n",
"For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n",
"Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.",
"Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 <https://github.com/dorost1234>,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-797310899>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXACFQZAGMK4VGXRETTDHDI3ANCNFSM4ZA5R2UA>\n> .\n>\n",
"Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n",
"HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.",
"Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 <https://github.com/dorost1234>,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux/mac\r\n> to monitor the same). In my case it took around 3-4 mins. Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800044303>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMQIHNNLM2LGG6QKZ73TD4GDJANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n",
"I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```",
"Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB/s]\r\n> Downloading: 1.40kB [00:00, 327kB/s]\r\n> Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\n> Dataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800081772>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMX6A2ZTRZUIIZVFRCDTD4NC3ANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n"
] | 1,615,492,494,000 | 1,615,906,417,000 | null | NONE | null | null | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error for almost all languages except English. @lhoestq, I would be grateful if you could assist me with it.
I really need the majority of the languages in this dataset to train my models before a deadline, and your great, scalable, well-written library is my only hope for training the models at scale while being low on resources.
thank you very much.
```
(fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py
Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...
Traceback (most recent call last):
File "test_data.py", line 3, in <module>
dataset = load_dataset("wiki40b", "cs")
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare
import apache_beam as beam
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module>
from apache_beam import io
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module>
from apache_beam.io.avroio import *
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module>
import avro
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module>
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource
NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt'
``` | https://api.github.com/repos/huggingface/datasets/issues/2035/timeline | null | false |
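Pulling together the workaround described in the comments above, here is a minimal sketch of loading a non-English wiki40b config with a local Beam runner. It reflects the `datasets` 1.x API discussed in the thread; the install command comes from the comments, and runtimes and memory requirements on a given machine are not guaranteed.

```python
# Prerequisite (from the thread above): pip install "apache-beam[gcp]"
from datasets import load_dataset

# Per the comments, only some default (English Wikipedia) configurations are
# pre-processed by Hugging Face; other languages are built locally, which is
# why a Beam runner must be supplied here.
wiki_cs = load_dataset("wiki40b", "cs", beam_runner="DirectRunner")
print(wiki_cs)
```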