| Column | Type | Stats |
| --- | --- | --- |
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.83B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–6.09k |
| title | stringlengths | 1–290 |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0–54 |
| created_at | stringlengths | 20–20 |
| updated_at | stringlengths | 20–20 |
| closed_at | stringlengths | 20–20 |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | sequence | |
https://api.github.com/repos/huggingface/datasets/issues/1709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1709/comments
https://api.github.com/repos/huggingface/datasets/issues/1709/events
https://github.com/huggingface/datasets/issues/1709
781,875,640
MDU6SXNzdWU3ODE4NzU2NDA=
1,709
Databases
[]
closed
false
null
0
2021-01-08T06:14:03Z
2021-01-08T09:00:08Z
2021-01-08T09:00:08Z
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1709/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/2864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2864/comments
https://api.github.com/repos/huggingface/datasets/issues/2864/events
https://github.com/huggingface/datasets/pull/2864
986,159,438
MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw
2,864
Fix data URL in ToTTo dataset
[]
closed
false
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
0
2021-09-02T05:25:08Z
2021-09-02T06:47:40Z
2021-09-02T06:47:40Z
null
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2864/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "html_url": "https://github.com/huggingface/datasets/pull/2864", "merged_at": "2021-09-02T06:47:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5276/comments
https://api.github.com/repos/huggingface/datasets/issues/5276/events
https://github.com/huggingface/datasets/issues/5276
1,459,363,442
I_kwDODunzps5W_B5y
5,276
Bug in downloading common_voice data and snall chunk of it to one's own hub
[]
closed
false
null
17
2022-11-22T08:17:53Z
2023-07-21T14:33:10Z
2023-07-21T14:33:10Z
null
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just par tof the data, and I need just one part of it, without downloading the entire dataset Help please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4eaf-be26-8aa13794def2.png) ### Steps to reproduce the bug So here is what I have done: 1. Download common_voice data 2. Trim part of it and publish it to my own repo. 3. Download data from my own repo, but am getting this error. ### Expected behavior There shouldn't be an error in downloading part of the data and publishing it to one's own repo ### Environment info common_voice 11
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5276/timeline
null
completed
null
null
false
[ "Sounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?", "Well I just sharded the original commonVoice dataset and pushed a small chunk of it in a private rep\n\nWhat did go wrong?\n\nHolen Sie sich Outlook für iOS<https://aka.ms/o0ukef>\n________________________________\nVon: Quentin Lhoest ***@***.***>\nGesendet: Tuesday, November 22, 2022 3:03:40 PM\nAn: huggingface/datasets ***@***.***>\nCc: capsabogdan ***@***.***>; Author ***@***.***>\nBetreff: Re: [huggingface/datasets] Bug in downloading common_voice data and snall chunk of it to one's own hub (Issue #5276)\n\n\nSounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5276#issuecomment-1323727434>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ALSIFOAPAL2V4TBJTSPMAULWJTHDZANCNFSM6AAAAAASHQJ63U>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n", "It should be all good then !\r\nCould you share a link to your repository for me to investigate what went wrong ?", "https://huggingface.co/datasets/DTU54DL/common-voice-test16k\n\nAm Di., 22. Nov. 2022 um 16:43 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> It should be all good then !\n> Could you share a link to your repository for me to investigate what went\n> wrong ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1323876682>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOEUJRZWXAM7DYA5VJDWJTS3NANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "I see ! This is a bug with MP3 files.\r\n\r\nWhen we store audio data in parquet, we store the bytes and the file name. From the file name extension we know if it's a WAV, an MP3 or else. But here it looks like the paths are all None.\r\n\r\nIt looks like it comes from here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/audio.py#L212\r\n\r\nCc @polinaeterna maybe we should simply put the file name instead of None values ?", "@lhoestq I remember we wanted to avoid storing redundant data but maybe it's not that crucial indeed to store one more string value. \r\nOr we can store paths only for mp3s, considering that for other formats we don't have such a problem with reading from bytes without format specified. ", "It doesn't cost much to always store the file name IMO", "thanks for the help!\n\ncan I do anything on my side? we are doing a DL project and we need the\ndata really quick.\n\nthanks\nbogdan\n\n> Message ID: ***@***.***>\n>\n", "I opened a pull requests here: https://github.com/huggingface/datasets/pull/5285, we'll do a new release soon with this fix.\r\n\r\nOtherwise if you're really in a hurry you can install `datasets` from this PR", "[image: image.png]\n\n> Message ID: ***@***.***>\n>\n", "any idea on what's going wrong here?\n\nthanks\n\nAm So., 27. Nov. 2022 um 13:53 Uhr schrieb Bogdan Capsa <\n***@***.***>:\n\n> [image: image.png]\n>\n>> Message ID: ***@***.***>\n>>\n>\n", "hi @capsabogdan! \r\ncould you please share more specifically what problem do you have now?", "I have attached this screenshot above . can u pls help? 
So can not pip from pull request\r\n\r\n![image](https://user-images.githubusercontent.com/48530104/204354027-6173e6d1-e3d4-4085-a363-e924cfe1a7f4.png)\r\n", "The pull request has been merged on `main`.\r\nYou can install `datasets` from `main` using\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "I've tried to load this dataset DTU54DL/common-voice-test16k, but am\ngetting the same error.\n\nSo the bug fix will fix only if I upload a new dataset, or also loading\npreviously uploaded datasets?\n\nthanks\n\nAm Mo., 28. Nov. 2022 um 19:51 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> The pull request has been merged on main.\n> You can install datasets from main using\n>\n> pip install git+https://github.com/huggingface/datasets.git\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1329587334>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOCNYYIGHM2EX3ZIO6DWKT5MXANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "> So the bug fix will fix only if I upload a new dataset, or also loading\r\npreviously uploaded datasets?\r\n\r\nYou have to reupload the dataset, sorry for the inconvenience", "thank you so much for the help! works like a charm!\n\nAm Di., 29. Nov. 2022 um 12:15 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> So the bug fix will fix only if I upload a new dataset, or also loading\n> previously uploaded datasets?\n>\n> You have to reupload the dataset, sorry for the inconvenience\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1330468393>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOBKEFZO57BAKY4IGW3WKXQUZANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n" ]
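The issue above boils down to sharding Common Voice, publishing a small chunk to one's own Hub repository, and reloading it. A minimal sketch of that workflow is given below; the dataset name, shard count and target repo id are illustrative assumptions rather than values taken from the issue, and the Common Voice repo is gated, so it requires accepting its terms and logging in with a Hub token first.

```python
from datasets import load_dataset

# load one language split of Common Voice (gated dataset: accept the terms on the Hub
# and run `huggingface-cli login` first)
cv = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train")

# keep roughly 1% of the examples and push that chunk to a private repo (placeholder repo id)
small = cv.shard(num_shards=100, index=0)
small.push_to_hub("my-username/common-voice-small", private=True)

# later, reload the published chunk like any other Hub dataset
reloaded = load_dataset("my-username/common-voice-small", split="train")
```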
https://api.github.com/repos/huggingface/datasets/issues/111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/111/comments
https://api.github.com/repos/huggingface/datasets/issues/111/events
https://github.com/huggingface/datasets/pull/111
618,528,060
MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy
111
[Clean-up] remove under construction datastes
[]
closed
false
null
0
2020-05-14T20:52:13Z
2020-05-14T20:52:23Z
2020-05-14T20:52:22Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/111/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/111/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/111.diff", "html_url": "https://github.com/huggingface/datasets/pull/111", "merged_at": "2020-05-14T20:52:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/111.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/111" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2723/comments
https://api.github.com/repos/huggingface/datasets/issues/2723/events
https://github.com/huggingface/datasets/pull/2723
954,864,104
MDExOlB1bGxSZXF1ZXN0Njk4Njk0NDMw
2,723
Fix en subset by modifying dataset_info with correct validation infos
[]
closed
false
null
0
2021-07-28T13:36:19Z
2021-07-28T15:22:23Z
2021-07-28T15:22:23Z
null
- Related to: #2682 We correct the values of `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`. Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` We replace with correct values: `{"name": "validation", "num_bytes": 825767266, "num_examples": 364608, "dataset_name": "c4"}` There are still issues with validation with other subsets, but I can't download all the files, unzip to check for the correct number of bytes. (If you have a fast way to obtain those values for other subsets, I can do this in this PR ... otherwise I can't spend those resources)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2723/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2723/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2723.diff", "html_url": "https://github.com/huggingface/datasets/pull/2723", "merged_at": "2021-07-28T15:22:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2723.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2723" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1237/comments
https://api.github.com/repos/huggingface/datasets/issues/1237/events
https://github.com/huggingface/datasets/pull/1237
758,318,353
MDExOlB1bGxSZXF1ZXN0NTMzNTExMDky
1,237
Add AmbigQA dataset
[]
closed
false
null
0
2020-12-07T09:07:19Z
2020-12-08T13:38:52Z
2020-12-08T13:38:52Z
null
# AmbigQA: Answering Ambiguous Open-domain Questions Dataset Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from Open dataset list for Dataset sprint) Added both the light and full versions (as seen on the dataset homepage) The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields ```py train_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="train") val_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="validation") train_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="train") val_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="validation") for example in train_light_dataset: for i,t in enumerate(example['annotations']['type']): if t =='singleAnswer': # use the example['annotations']['answer'][i] # example['annotations']['qaPairs'][i] - > is [] print(example['annotations']['answer'][i]) else: # use the example['annotations']['qaPairs'][i] # example['annotations']['answer'][i] - > is [] print(example['annotations']['qaPairs'][i]) ``` - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1237/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1237.diff", "html_url": "https://github.com/huggingface/datasets/pull/1237", "merged_at": "2020-12-08T13:38:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/1237.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1237" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3579/comments
https://api.github.com/repos/huggingface/datasets/issues/3579/events
https://github.com/huggingface/datasets/pull/3579
1,103,451,118
PR_kwDODunzps4xBmY4
3,579
Add Text2log Dataset
[]
closed
false
null
1
2022-01-14T10:45:01Z
2022-01-20T17:09:44Z
2022-01-20T17:09:44Z
null
Adding the text2log dataset used for training FOL sentence translating models
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3579/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3579/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3579.diff", "html_url": "https://github.com/huggingface/datasets/pull/3579", "merged_at": "2022-01-20T17:09:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3579" }
true
[ "The CI fails are unrelated to your PR and fixed on master, I think we can merge now !" ]
https://api.github.com/repos/huggingface/datasets/issues/3157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3157/comments
https://api.github.com/repos/huggingface/datasets/issues/3157/events
https://github.com/huggingface/datasets/pull/3157
1,034,775,165
PR_kwDODunzps4tm3_I
3,157
Fixed: duplicate parameter and missing parameter in docstring
[]
closed
false
null
0
2021-10-25T07:26:00Z
2021-10-25T14:02:19Z
2021-10-25T14:02:19Z
null
changing duplicate parameter `data_files` in `DatasetBuilder.__init__` to the missing parameter `data_dir`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3157/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3157/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3157.diff", "html_url": "https://github.com/huggingface/datasets/pull/3157", "merged_at": "2021-10-25T14:02:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/3157.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3157" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1568/comments
https://api.github.com/repos/huggingface/datasets/issues/1568/events
https://github.com/huggingface/datasets/pull/1568
766,722,994
MDExOlB1bGxSZXF1ZXN0NTM5NjY2ODg1
1,568
Added the dataset clickbait_news_bg
[]
closed
false
null
2
2020-12-14T17:03:00Z
2020-12-15T18:28:56Z
2020-12-15T18:28:56Z
null
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1568/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1568/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1568.diff", "html_url": "https://github.com/huggingface/datasets/pull/1568", "merged_at": "2020-12-15T18:28:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1568.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1568" }
true
[ "Hi @tsvm Great work! \r\nSince you have raised a clean PR could you close the earlier one - #1445 ? \r\n", "> Hi @tsvm Great work!\r\n> Since you have raised a clean PR could you close the earlier one - #1445 ?\r\n\r\nDone." ]
https://api.github.com/repos/huggingface/datasets/issues/3643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3643/comments
https://api.github.com/repos/huggingface/datasets/issues/3643/events
https://github.com/huggingface/datasets/pull/3643
1,116,417,428
PR_kwDODunzps4xr8mX
3,643
Fix sem_eval_2018_task_1 download location
[]
closed
false
null
1
2022-01-27T15:45:00Z
2022-02-04T15:15:26Z
2022-02-04T15:15:26Z
null
As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931_ this is the new pull request to fix the download location.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3643/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3643.diff", "html_url": "https://github.com/huggingface/datasets/pull/3643", "merged_at": "2022-02-04T15:15:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3643.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3643" }
true
[ "I fixed those two things, the two remaining failing checks seem to be due to some dependency missing in the tests." ]
https://api.github.com/repos/huggingface/datasets/issues/2903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2903/comments
https://api.github.com/repos/huggingface/datasets/issues/2903/events
https://github.com/huggingface/datasets/pull/2903
995,715,191
PR_kwDODunzps4rtxxV
2,903
Fix xpathopen to accept positional arguments
[]
closed
false
null
1
2021-09-14T08:02:50Z
2021-09-14T08:51:21Z
2021-09-14T08:40:47Z
null
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2903/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "html_url": "https://github.com/huggingface/datasets/pull/2903", "merged_at": "2021-09-14T08:40:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903" }
true
[ "thanks!" ]
https://api.github.com/repos/huggingface/datasets/issues/2395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2395/comments
https://api.github.com/repos/huggingface/datasets/issues/2395/events
https://github.com/huggingface/datasets/pull/2395
898,762,730
MDExOlB1bGxSZXF1ZXN0NjUwNTk3NjI0
2,395
`pretty_name` for dataset in YAML tags
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
19
2021-05-22T09:24:45Z
2022-09-23T13:29:14Z
2022-09-23T13:29:13Z
null
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good. If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2395/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2395/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2395.diff", "html_url": "https://github.com/huggingface/datasets/pull/2395", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2395.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2395" }
true
[ "Initially I removed the ` - ` since there was only one `pretty_name` per config but turns out it was breaking here in `from_yaml_string`https://github.com/huggingface/datasets/blob/74751e3f98c74d22c48c6beb1fab0c13b5dfd075/src/datasets/utils/metadata.py#L197 in `/utils/metadata.py`", "@lhoestq I guess this will also need some validation?", "Looks like the parser doesn't allow things like\r\n```\r\npretty_name:\r\n config_name1: My awesome config number 1\r\n config_name2: My amazing config number 2\r\n```\r\ntherefore you had to use `-` and consider them as a list.\r\n\r\nI would be nice to add support for this case in the validator.\r\n\r\nThere's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.\r\n\r\nTherefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.\r\n\r\nWhat do you think @gchhablani ?", "I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).\r\n\r\nOne just needs config_name as key for the dictionary inside `pretty_name` dict and for single config, there would be only one value to print. We can do this for other fields as well like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani? \r\n\r\nUpdate: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.", "Hi @lhoestq @bhavitvyamalik \r\n\r\n@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.\r\n\r\nFew things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.\r\n\r\n1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases that are like `Dict[str,List[Dict[str,str]]`, but I guess that won't be needed anyway in the foreseeable future?\r\n\r\n Ideally, it would be something like - Check the metadata tag type (root), do a DFS, and find the non-dict objects (leaves), and validate them. Is this an overkill to handle the problem?\r\n2. For single-config datasets, there can be slightly different validation for `pretty_names`, than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. 
This will check whether the config name is same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For future, however, it might be beneficial to have something like this.\r\n3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq?", "Btw, `pretty_names` can also be used to handle this during validation :P \r\n\r\n```\r\n-# Dataset Card for [Dataset Name]\r\n+# Dataset Card for Allegro Reviews\r\n```\r\n\r\nThis is where `DatasetMetadata` and `ReadMe` should be combined. But there are very few overlaps, I guess.\r\n\r\n\n@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?\n\nLike\n\n```yaml\npretty_names:\n all_configs: X (dataset_name)\n config_1: X1 (config_1_name)\n config_2: X2 (config_2_name)\n```\nThen, using the `metadata_dict`, the ReadMe header can be validated against `X`.\n\nSorry if I'm throwing too many ideas at once.", "@bhavitvyamalik\r\n\r\nNow, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version?", "Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets.", "@bhavitvyamalik\r\nActually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation. \r\n\r\nMaybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq? \r\n\r\nI'm sensing too many things to look into. It'd be great to discuss these sometime. \r\n\r\nBut if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation.", "> Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?\r\n\r\nWe can definitely have a `is_valid()` method instead of doing it in the post init.\r\n\r\n> What about adding a pretty name across all configs, and then config-specific names?\r\n\r\nLet's keep things simple to starts with. If we can allow both single-config and multi-config cases it would already be great :)\r\n\r\nfor single-config:\r\n```yaml\r\npretty_name: Allegro Reviews\r\n```\r\n\r\nfor multi-config:\r\n```yaml\r\npretty_name:\r\n mrpc: Microsoft Research Paraphrase Corpus (MRPC)\r\n sst2: Stanford Sentiment Treebank\r\n ...\r\n```\r\n\r\nTo support the multi-config case I see two options:\r\n1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config\r\n2. allow DatasetMetadata to have dictionaries. It implies to remove the flattening step. 
Then we could get metadata for a specific config this way for example:\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\nglue_dataset_card = load_dataset_card(\"glue\")\r\nprint(glue_dataset_card.metadata)\r\n# DatasetMetatada object with dictionaries since there are many configs\r\nprint(glue_dataset_card.metadata.get_metadata_for_config(\"mrpc\"))\r\n# DatasetMetatada object with no dictionaries since there are only the mrpc tags\r\n```\r\n\r\nLet me know what you think or if you have other ideas.", "I think Option 2 is better.\n\nJust to clarify, will `get_metadata_for_config` also return common details, like language, say?", "> Just to clarify, will get_metadata_for_config also return common details, like language, say?\r\n\r\nYes that would be more convenient IMO. For example a dataset card like this\r\n```yaml\r\nlanguages:\r\n- en\r\npretty_name:\r\n config1: Pretty Name for Config 1\r\n config3: Pretty Name for Config 2\r\n```\r\n\r\nthen `metadat.get_metadata_for_config(\"config1\")` would return something like\r\n```python\r\nDatasetMetadata(languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\")\r\n```", "@lhoestq, should we do this post-processing in `load_dataset_card` by returning unflattened dictionary from `DatasetMetadata` or send this from `DatasetMetadata`? Since there isn't much to do I feel once we have the unflattened dictionary", "Not sure I understand the difference @bhavitvyamalik , could you elaborate please ?", "I was talking about this unflattened dictionary:\r\n\r\n> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.\r\n\r\nPost-processing meant extracting config-specific fields from this dictionary and then return this `languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\"`", "I still don't understand what you mean by \"returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata\", sorry. Can you give an example or rephrase this ?\r\n\r\nIMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this", "@lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.\r\n\r\n@bhavitvyamalik, I think it'd be better to have this \"post-processing\" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.\r\n\r\n---\r\nThree things that are to be changed in `DatasetMetadata`:\r\n1. Allow Non-flat elements and their validation.\r\n2. Create a method to get metadata by config name.\r\n3. Create a `validate()` method.\r\n\r\nOnce that is done, this PR can be updated and reviewed, wdys?", "Thanks @gchhablani for the help ! Now that https://github.com/huggingface/datasets/pull/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :)", "Thanks @bhavitvyamalik.\r\n\r\nI think this PR was superseded by these others also made by you:\r\n- #3498\r\n- #3536\r\n\r\nI'm closing this." ]
https://api.github.com/repos/huggingface/datasets/issues/973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/973/comments
https://api.github.com/repos/huggingface/datasets/issues/973/events
https://github.com/huggingface/datasets/pull/973
754,807,963
MDExOlB1bGxSZXF1ZXN0NTMwNjQxMTky
973
Adding The Microsoft Terminology Collection dataset.
[]
closed
false
null
9
2020-12-01T23:36:23Z
2020-12-04T15:25:44Z
2020-12-04T15:12:46Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/973/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/973/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/973.diff", "html_url": "https://github.com/huggingface/datasets/pull/973", "merged_at": "2020-12-04T15:12:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/973.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/973" }
true
[ "I have to manually copy a dataset_infos.json file from other dataset and modify it since the `datasets-cli` isn't able to handle manually downloaded datasets yet (as far as I know).", "you can generate the dataset_infos.json file even for dataset with manual data\r\nTo do so just specify `--data_dir <path/to/the/folder/containing/the/manual/data>`", "Also, dummy_data seems having difficulty to handle manually downloaded datasets. `python datasets-cli dummy_data datasets/ms_terms --data_dir ...` reported `error: unrecognized arguments: --data_dir` error. Without `--data_dir`, it reported this error:\r\n```\r\nDataset ms_terms with config BuilderConfig(name='ms_terms-full', version=1.0.0, data_dir=None, data_files=None, description='...\\n') seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file None.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 326, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 406, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```", "Oh yes `--data_dir` seems to only be supported for the `datasets_cli test` command. Sorry about that.\r\n\r\nCan you try to build the dummy_data.zip file manually ?\r\n\r\nIt has to be inside `./datasets/ms_terms/dummy/ms_terms-full/1.0.0`.\r\nInside this folder, please create a folder `dummy_data` that contains a dummy file `MicrosoftTermCollection.tbx` (with just a few examples in it). Then you can zip the `dummy_data` folder to `dummy_data.zip`\r\n\r\nThen you can check if it worked using the command\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms\r\n```\r\n\r\nFeel free to use some debugging print statements in your script if it doesn't work first try to see what `dl_manager.manual_dir` ends up being and also `path_to_manual_file`.\r\n\r\nFeel free to ping me if you have other questions", "`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms` gave `1 passed, 4 warnings in 8.13s`. Existing datasets, like `wikihow`, and `newsroom`, also report 4 warnings. So, I guess that is not related to this dataset.", "Could you run `make style` before we merge @leoxzhao ?", "the other errors are fixed on master so it's fine", "> Could you run `make style` before we merge @leoxzhao ?\r\n\r\nSure thing. Done. Thanks Quentin. I have other datasets in mind. All of which requires manual download. This process is very helpful", "Thank you :) " ]
https://api.github.com/repos/huggingface/datasets/issues/4411
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4411/comments
https://api.github.com/repos/huggingface/datasets/issues/4411/events
https://github.com/huggingface/datasets/pull/4411
1,249,462,390
PR_kwDODunzps44g_yL
4,411
Update `_format_columns` in `remove_columns`
[]
closed
false
null
20
2022-05-26T11:40:06Z
2022-06-14T19:05:37Z
2022-06-14T16:01:56Z
null
As explained at #4398, when calling `dataset.add_faiss_index` under certain conditions when calling a sequence of operations `cast_column`, `map`, and `remove_columns`, it fails as it's trying to look for already removed columns. So on, after testing some possible fixes, it seems that setting the dataset format right after removing the columns seems to be working fine, so I had to add a call to `.set_format` in the `remove_columns` function. Hope this helps!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4411/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4411.diff", "html_url": "https://github.com/huggingface/datasets/pull/4411", "merged_at": "2022-06-14T16:01:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/4411.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4411" }
true
[ "🤗 This PR closes https://github.com/huggingface/datasets/issues/4398", "_The documentation is not available anymore as the PR was closed or merged._", "Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.", "Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ", "Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:", "Hi again @albertvillanova! Let me know if those tests are fine 🤗 ", "Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests", "Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ", "@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.", "@lhoestq any idea why the CI is not triggered?", "@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n", "You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...", "> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.", "Hi again @albertvillanova sorry I was on leave! I'll do that ASAP :hugs:", "@albertvillanova, does it make sense to add regression tests for `DatasetDict`? 
As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split which is an `ArrowDataset`.\r\n\r\nSo on, we can either:\r\n- Update first the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above), let me know if there's a better way to do that 👍🏻 ", "I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?", "> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?\r\n\r\nHi @lhoestq I can check now!", "It worked indeed @lhoestq, thanks for the proposal and the review! 🤗 ", "Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).", "Sure @mariosasko I can prepare another PR to add those too, thanks! " ]
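The PR above fixes a sequence in which `remove_columns` left the dataset's stored format columns pointing at a column that no longer exists, which then broke a later `add_faiss_index` call. A simplified sketch of that failing sequence is shown below; it uses toy data and `set_format` directly instead of the `cast_column`/`map` chain from the linked issue, and the last step needs FAISS installed (`pip install faiss-cpu`).

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["a", "b", "c"],
    "embeddings": [np.random.rand(8).tolist() for _ in range(3)],
})

# restrict the output format to specific columns ...
ds.set_format("numpy", columns=["text", "embeddings"])

# ... then drop one of them; with this fix the dropped column is also removed
# from the dataset's format columns
ds = ds.remove_columns("text")

# before the fix, this could fail while looking up the already-removed "text" column
ds.add_faiss_index(column="embeddings")
```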
https://api.github.com/repos/huggingface/datasets/issues/4999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4999/comments
https://api.github.com/repos/huggingface/datasets/issues/4999/events
https://github.com/huggingface/datasets/pull/4999
1,379,610,030
PR_kwDODunzps4_SQxL
4,999
Add EmptyDatasetError
[]
closed
false
null
1
2022-09-20T15:28:05Z
2022-09-21T12:23:43Z
2022-09-21T12:21:24Z
null
examples: from the hub: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("lhoestq/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1171, in dataset_module_factory raise e1 from None File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1162, in dataset_module_factory download_mode=download_mode, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 760, in get_module else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 678, in get_data_patterns_in_dataset_repository ) from None datasets.data_files.EmptyDatasetError: The dataset repository at 'lhoestq/empty' doesn't contain any data file. ``` from local directory: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("playground/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1107, in dataset_module_factory path, data_dir=data_dir, data_files=data_files, download_mode=download_mode File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 625, in get_module else get_data_patterns_locally(base_path) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 460, in get_data_patterns_locally raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data file") from None datasets.data_files.EmptyDatasetError: The directory at playground/empty doesn't contain any data file ``` Close https://github.com/huggingface/datasets/issues/4995
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4999/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4999.diff", "html_url": "https://github.com/huggingface/datasets/pull/4999", "merged_at": "2022-09-21T12:21:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4999" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1455/comments
https://api.github.com/repos/huggingface/datasets/issues/1455/events
https://github.com/huggingface/datasets/pull/1455
761,205,073
MDExOlB1bGxSZXF1ZXN0NTM1OTA1OTQy
1,455
Add HEAD-QA: A Healthcare Dataset for Complex Reasoning
[]
closed
false
null
1
2020-12-10T12:36:56Z
2020-12-17T17:03:32Z
2020-12-17T16:58:11Z
null
HEAD-QA is a multi-choice HEAlthcare Dataset, the questions come from exams to access a specialized position in the Spanish healthcare system.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1455/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1455/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1455.diff", "html_url": "https://github.com/huggingface/datasets/pull/1455", "merged_at": "2020-12-17T16:58:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1455.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1455" }
true
[ "Thank you for your review @lhoestq, I've changed the types of `qid` and `ra` and now they are integers as `aid`.\r\n\r\nReady for another review!" ]
https://api.github.com/repos/huggingface/datasets/issues/3172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3172/comments
https://api.github.com/repos/huggingface/datasets/issues/3172/events
https://github.com/huggingface/datasets/issues/3172
1,038,351,587
I_kwDODunzps494_zj
3,172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
9
2021-10-28T10:29:00Z
2023-01-26T07:07:54Z
2021-11-03T11:26:10Z
null
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent investigating this issue, I have failed to isolate the bug, so let me describe my setup. In my project, `Dataset` is wrapped into a `LightningDataModule` and the data is preprocessed when calling `LightningDataModule.setup()`. Calling `.setup()` in an isolated script works fine (even when wrapped with `hydra.main()`). However, when calling `.setup()` within the experiment script (depends on `pytorch_lightning`), the script crashes and `SystemError 15`. I could avoid throwing this error by modifying ` Dataset.__del__()` (see bellow), but I believe this only moves the problem somewhere else. I am completely stuck with this issue, any hint would be welcome. ```python class Dataset() ... def __del__(self): if hasattr(self, "_data"): _ = self._data # <- ugly trick that allows avoiding the issue. del self._data if hasattr(self, "_indices"): del self._indices ``` ## Steps to reproduce the bug ```python # Unfortunately I couldn't isolate the bug. ``` ## Expected results Calling `Dataset.map()` without throwing an exception. Or at least raising a more detailed exception/traceback. ## Actual results ``` Exception ignored in: <function Dataset.__del__ at 0x7f7cec179160>███████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.17ba/s] Traceback (most recent call last): File ".../python3.8/site-packages/datasets/arrow_dataset.py", line 906, in __del__ del self._data File ".../python3.8/site-packages/ray/worker.py", line 1033, in sigterm_handler sys.exit(signum) SystemExit: 15 ``` ## Environment info Tested on 2 environments: **Environment 1.** - `datasets` version: 1.14.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.0 **Environment 2.** - `datasets` version: 1.14.0 - Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.7 - PyArrow version: 6.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3172/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3172/timeline
null
completed
null
null
false
[ "NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.", "Hi,\r\n\r\nIt's not easy to debug the problem without the script. I may be wrong since I'm not very familiar with PyTorch Lightning, but shouldn't you preprocess the data in the `prepare_data` function of `LightningDataModule` and not in the `setup` function.\r\nAs you can't modify the module state in `prepare_data` (according to the docs), use the `cache_file_name` argument in `Dataset.map` there, and reload the processed data in `setup` with `Dataset.from_file(cache_file_name)`. If `num_proc>1`, check the docs on the `suffix_template` argument of `Dataset.map` to get an idea what the final `cache_file_names` are going to be.\r\n\r\nLet me know if this helps.", "Hi @mariosasko, thank you for the hint, that helped me to move forward with that issue. \r\n\r\nI did a major refactoring of my project to disentangle my `LightningDataModule` and `Dataset`. Just FYI, it looks like:\r\n\r\n```python\r\nclass Builder():\r\n def __call__() -> DatasetDict:\r\n # load and preprocess the data\r\n return dataset\r\n\r\nclass DataModule(LightningDataModule):\r\n def prepare_data():\r\n self.builder()\r\n def setup():\r\n self.dataset = self.builder()\r\n```\r\n\r\nUnfortunately, the entanglement between `LightningDataModule` and `Dataset` was not the issue.\r\n\r\nThe culprit was `hydra` and a slight adjustment of the structure of my project solved this issue. The problematic project structure was:\r\n\r\n```\r\nsrc/\r\n | - cli.py\r\n | - training/\r\n | -experiment.py\r\n\r\n# code in experiment.py\r\ndef run_experiment(config):\r\n # preprocess data and run\r\n \r\n# code in cli.py\r\n@hydra.main(...)\r\ndef run(config):\r\n return run_experiment(config)\r\n```\r\n\r\nMoving `run()` from `clip.py` to `training.experiment.py` solved the issue with `SystemError 15`. No idea why. \r\n\r\nEven if the traceback was referring to `Dataset.__del__`, the problem does not seem to be primarily related to `datasets`, so I will close this issue. Thank you for your help!", "Please allow me to revive this discussion, as I have an extremely similar issue. Instead of an error, my datasets functions simply aren't caching properly. My setup is almost the same as yours, with hydra to configure my experiment parameters.\r\n\r\n@vlievin Could you confirm if your code correctly loads the cache? If so, do you have any public code that I can reference for comparison?\r\n\r\nI will post a full example with hydra that illustrates this problem in a little bit, probably on another thread.", "Hello @mariomeissner, very sorry for the late reply, I hope you have found a solution to your problem!\r\n\r\nI don't have public code at the moment. I have not experienced any other issue with hydra, even if I don't understand why changing the location of the definition of `run()` fixed the problem. \r\n\r\nOverall, I don't have issue with caching anymore, even when \r\n1. using custom fingerprints using the argument `new_fingerprint \r\n2. when using `num_proc>1`", "I solved my issue by turning the map callable into a class static method, like they do in `lightning-transformers`. Very strange...", "I have this issue with datasets v2.5.2 with Python 3.8.10 on Ubuntu 20.04.4 LTS. It does not occur when num_proc=1. When num_proc>1, it intermittently occurs and will cause process to hang. 
As previously mentioned, it occurs even when datasets have been previously cached. I have tried wrapping logic in a static class as suggested with @mariomeissner with no improvement.", "@philipchung hello ,i have the same issue like yours,did you solve it?", "No. I was not able to get num_proc>1 to work." ]
https://api.github.com/repos/huggingface/datasets/issues/306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/306/comments
https://api.github.com/repos/huggingface/datasets/issues/306/events
https://github.com/huggingface/datasets/pull/306
644,176,078
MDExOlB1bGxSZXF1ZXN0NDM4ODQ2MTI3
306
add pg19 dataset
[]
closed
false
null
12
2020-06-23T22:03:52Z
2020-07-06T07:55:59Z
2020-07-06T07:55:59Z
null
https://github.com/huggingface/nlp/issues/274 Add functioning PG19 dataset with dummy data `cos_e.py` was just auto-linted by `make style`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/306/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/306/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/306.diff", "html_url": "https://github.com/huggingface/datasets/pull/306", "merged_at": "2020-07-06T07:55:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/306.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/306" }
true
[ "@lucidrains - Thanks a lot for making the PR - PG19 is a super important dataset! Thanks for making it. Many people are asking for PG-19, so it would be great to have that in the library as soon as possible @thomwolf .", "@mariamabarham yup! around 11GB!", "I'm looking forward to our first deep learning written novel already lol. It's definitely happening", "Good to merge IMO.", "Oh I just noticed but as we changed the urls to download the files, we have to update `dataset_infos.json`.\r\nCould you re-rurn `nlp-cli test ./datasets/pg19 --save_infos` ?", "@lhoestq on it!", "should be good!", "@lhoestq - I think it's good to merge no?", "`dataset_infos.json` is still not up to date with the new urls (we can see that there are urls like `gs://deepmind-gutenberg/train/*` instead of `https://storage.googleapis.com/deepmind-gutenberg/train/*` in the json file)\r\n\r\nCan you check that you re-ran the command to update the json file, and that you pushed the changes @lucidrains ?", "@lhoestq ohhh, I made the change in this commit https://github.com/lucidrains/nlp/commit/f3e23d823ad9942031be80b7c4e4212c592cd90c , that's interesting that the pull request didn't pick it up. maybe it's because I did it on another machine, let me check and get back to you!", "@lhoestq wrong branch 😅 thanks for catching! ", "Awesome thanks 🎉" ]
https://api.github.com/repos/huggingface/datasets/issues/789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/789/comments
https://api.github.com/repos/huggingface/datasets/issues/789/events
https://github.com/huggingface/datasets/pull/789
734,237,839
MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0
789
dataset(ncslgr): add initial loading script
[]
closed
false
null
4
2020-11-02T06:50:10Z
2020-12-01T13:41:37Z
2020-12-01T13:41:36Z
null
Its a small dataset, but its heavily annotated https://www.bu.edu/asllrp/ncslgr.html ![image](https://user-images.githubusercontent.com/5757359/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/789/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/789.diff", "html_url": "https://github.com/huggingface/datasets/pull/789", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/789.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/789" }
true
[ "Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for adding the tags and description in the README.md so we can merge this cool dataset?", "@lhoestq should be ready for another review :) ", "Awesome thank you !\r\n\r\nIt looks like the PR now includes changes from other PR that were previously merged. \r\nFeel free to create another branch and another PR so that we can have a clean diff.\r\n", "Closing for #958 " ]
https://api.github.com/repos/huggingface/datasets/issues/6018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6018/comments
https://api.github.com/repos/huggingface/datasets/issues/6018/events
https://github.com/huggingface/datasets/pull/6018
1,799,411,999
PR_kwDODunzps5VOmKY
6,018
test1
[]
closed
false
null
1
2023-07-11T17:25:49Z
2023-07-20T10:11:41Z
2023-07-20T10:11:41Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6018/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6018/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6018.diff", "html_url": "https://github.com/huggingface/datasets/pull/6018", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6018.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6018" }
true
[ "We no longer host datasets in this repo. You should use the HF Hub instead." ]
https://api.github.com/repos/huggingface/datasets/issues/3570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3570/comments
https://api.github.com/repos/huggingface/datasets/issues/3570/events
https://github.com/huggingface/datasets/pull/3570
1,100,480,791
PR_kwDODunzps4w3Xez
3,570
Add the KMWP dataset (extension of #3564)
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
3
2022-01-12T15:33:08Z
2022-10-01T06:43:16Z
2022-10-01T06:43:16Z
null
New pull request of #3564 (Add the KMWP dataset)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3570/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3570/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3570.diff", "html_url": "https://github.com/huggingface/datasets/pull/3570", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3570.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3570" }
true
[ "Sorry, I'm late to check! I'll send it to you soon!", "Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.", "Close this PR. Thanks!" ]
https://api.github.com/repos/huggingface/datasets/issues/5081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5081/comments
https://api.github.com/repos/huggingface/datasets/issues/5081/events
https://github.com/huggingface/datasets/issues/5081
1,399,340,050
I_kwDODunzps5TaDwS
5,081
Bug loading `sentence-transformers/parallel-sentences`
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
8
2022-10-06T10:47:51Z
2022-10-11T10:00:48Z
null
null
## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sentence-transformers/parallel-sentences") ``` raises this: ``` /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [4], line 1 ----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train") File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1692 # Download and prepare data -> 1693 builder_instance.download_and_prepare( 1694 download_config=download_config, 1695 download_mode=download_mode, 1696 ignore_verifications=ignore_verifications, 1697 try_from_hf_gcs=try_from_hf_gcs, 1698 use_auth_token=use_auth_token, 1699 ) 1701 # Build dataset for splits 1702 keep_in_memory = ( 1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1704 ) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 801 if not downloaded_from_gcs: 802 prepare_split_kwargs = { 803 "file_format": file_format, 804 "max_shard_size": max_shard_size, 805 **download_and_prepare_kwargs, 806 } --> 807 self._download_and_prepare( 808 dl_manager=dl_manager, 809 verify_infos=verify_infos, 810 **prepare_split_kwargs, 811 **download_and_prepare_kwargs, 812 ) 813 # Sync info 814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 894 split_dict.add(split_generator.split_info) 896 try: 897 # Prepare split will record examples associated to the split --> 898 self._prepare_split(split_generator, **prepare_split_kwargs) 899 except OSError as e: 900 raise OSError( 901 "Cannot find data file. " 902 + (self.manual_download_instructions or "") 903 + "\nOriginal error:\n" 904 + str(e) 905 ) from None File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size) 1506 shard_id += 1 1507 writer = writer_class( 1508 features=writer._features, 1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"), 1510 storage_options=self._fs.storage_options, 1511 embed_local_files=embed_local_files, 1512 ) -> 1513 writer.write_table(table) 1514 finally: 1515 num_shards = shard_id + 1 File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 538 if self.pa_writer is None: 539 self._build_writer(inferred_schema=pa_table.schema) --> 540 pa_table = table_cast(pa_table, self._schema) 541 if self.embed_local_files: 542 pa_table = embed_table_storage(pa_table) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema) 2032 """Improved version of pa.Table.cast. 2033 2034 It supports casting to feature types stored in the schema metadata. (...) 2041 table (:obj:`pyarrow.Table`): the casted table 2042 """ 2043 if table.schema != schema: -> 2044 return cast_table_to_schema(table, schema) 2045 elif table.schema.metadata != schema.metadata: 2046 return table.replace_schema_metadata(schema.metadata) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema) 2003 features = Features.from_arrow_schema(schema) 2004 if sorted(table.column_names) != sorted(features): -> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] 2007 return pa.Table.from_arrays(arrays, schema=schema) ValueError: Couldn't cast Action taken on Parliament's resolutions: see Minutes: string Následný postup na základě usnesení Parlamentu: viz zápis: string -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742 to {'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)} because column names don't match ``` ## Expected results no error ## Actual results error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.13 - PyArrow version: pyarrow 9.0.0 - transformers 4.22.2 - datasets 2.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5081/timeline
null
null
null
null
false
[ "tagging @nreimers ", "The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.", "Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?", "There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```", "What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful", "> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n", "> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.", "@nreimers WDYT about the two options mentioned above ?" ]
https://api.github.com/repos/huggingface/datasets/issues/1367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1367/comments
https://api.github.com/repos/huggingface/datasets/issues/1367/events
https://github.com/huggingface/datasets/pull/1367
760,208,191
MDExOlB1bGxSZXF1ZXN0NTM1MDc4MTAx
1,367
adding covid-tweets-japanese
[]
closed
false
null
2
2020-12-09T10:34:01Z
2020-12-09T17:25:14Z
2020-12-09T17:25:14Z
null
Adding COVID-19 Japanese Tweets Dataset as part of the sprint. Testing with dummy data is not working (the file is said to not exist). Sorry for the incomplete PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1367/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1367/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1367.diff", "html_url": "https://github.com/huggingface/datasets/pull/1367", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1367.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1367" }
true
[ "I think it's because the file you download uncompresses into a file and not a folder so `--autogenerate` couldn't create dummy data for you. See in your dummy_data.zip if there is a file there. If not, manually create your dummy data and compress them to dummy_data.zip.", "@cstorm125 Thank you for the comment! \r\nAs you point out, it seems my code has something wrong about downloading and uncompressing the file.\r\nHowever, my manually created dummy data seems to contain a file of the required format.\r\n\r\nOn Colaboratory,\r\n`!unzip /content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data.zip`\r\nreturns:\r\n\r\n```\r\nArchive: /content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data.zip\r\n creating: content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data/\r\n extracting: content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data/data.csv.bz2 \r\n```\r\n\r\nThe original data is `data.csv.bz2`, and I had a very hard time dealing with uncompressing bzip2.\r\nI think I could handle it, but there may be problems remain." ]
https://api.github.com/repos/huggingface/datasets/issues/3110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3110/comments
https://api.github.com/repos/huggingface/datasets/issues/3110/events
https://github.com/huggingface/datasets/pull/3110
1,030,558,484
PR_kwDODunzps4tZakS
3,110
Stream TAR-based dataset using iter_archive
[]
closed
false
null
2
2021-10-19T17:16:24Z
2021-11-05T17:48:49Z
2021-11-05T17:48:48Z
null
I converted all the dataset based on TAR archive to use iter_archive instead, so that they can be streamable. It means that around 80 datasets become streamable :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3110/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3110.diff", "html_url": "https://github.com/huggingface/datasets/pull/3110", "merged_at": "2021-11-05T17:48:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/3110.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3110" }
true
[ "I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first", "The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR" ]
https://api.github.com/repos/huggingface/datasets/issues/827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/827/comments
https://api.github.com/repos/huggingface/datasets/issues/827/events
https://github.com/huggingface/datasets/issues/827
739,983,024
MDU6SXNzdWU3Mzk5ODMwMjQ=
827
[GEM] MultiWOZ dialogue dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
2
2020-11-10T14:57:50Z
2022-10-05T12:31:13Z
2022-10-05T12:31:13Z
null
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side. - **Paper:** https://arxiv.org/pdf/2007.12720.pdf - **Data:** https://github.com/budzianowski/multiwoz - **Motivation:** Will likely be part of the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/827/timeline
null
completed
null
null
false
[ "Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface.", "Resolved via https://github.com/huggingface/datasets/pull/979" ]
https://api.github.com/repos/huggingface/datasets/issues/2552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2552/comments
https://api.github.com/repos/huggingface/datasets/issues/2552/events
https://github.com/huggingface/datasets/issues/2552
931,354,687
MDU6SXNzdWU5MzEzNTQ2ODc=
2,552
Keys should be unique error on code_search_net
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
8
2021-06-28T09:15:20Z
2021-09-06T14:08:30Z
2021-09-02T08:25:29Z
null
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] Downloading: 19.1kB [00:00, 10.1MB/s] No config specified, defaulting to: code_search_net/all Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a... Traceback (most recent call last): File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split writer.write(example, key) File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write self.check_duplicate_keys() File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` ## Environment info - `datasets` version: 1.8.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 2.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2552/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2552/timeline
null
completed
null
null
false
[ "Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in the code of `code_search_net`, how can I debug this actually?", "Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. I just opened a PR to fix this at https://github.com/huggingface/datasets/pull/2555\r\n\r\nTo help users debug this kind of errors we could try to show a message like this\r\n```python\r\nDuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\nPlease fix the dataset script at <path/to/the/dataset/script>\r\n```\r\n\r\nThis way users who what to look for if they want to debug this issue. I opened an issue to track this: https://github.com/huggingface/datasets/issues/2556", "and are we sure there are not a lot of datasets which are now broken with this change?", "Thanks to the dummy data, we know for sure that most of them work as expected.\r\n`code_search_net` wasn't caught because the dummy data only have one dummy data file while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if the keys are alright", "I found one issue on `fever` (PR here: https://github.com/huggingface/datasets/pull/2557)\r\nAll the other ones seem fine :)", "Hi! Got same error when loading other dataset:\r\n```python3\r\nload_dataset('wikicorpus', 'raw_en')\r\n```\r\n\r\ntb:\r\n```pytb\r\n---------------------------------------------------------------------------\r\nDuplicatedKeysError Traceback (most recent call last)\r\n/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)\r\n 1109 example = self.info.features.encode_example(record)\r\n-> 1110 writer.write(example, key)\r\n 1111 finally:\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size)\r\n 341 if self._check_duplicates:\r\n--> 342 self.check_duplicate_keys()\r\n 343 # Re-intializing to empty list for next batch\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in check_duplicate_keys(self)\r\n 352 if hash in tmp_record:\r\n--> 353 raise DuplicatedKeysError(key)\r\n 354 else:\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 519\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\nVersion: datasets==1.11.0", "Fixed by #2555.", "The wikicorpus issue has been fixed by https://github.com/huggingface/datasets/pull/2844\r\n\r\nWe'll do a new release of `datasets` soon :)" ]
https://api.github.com/repos/huggingface/datasets/issues/2196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
https://api.github.com/repos/huggingface/datasets/issues/2196/events
https://github.com/huggingface/datasets/issues/2196
854,126,114
MDU6SXNzdWU4NTQxMjYxMTQ=
2,196
`load_dataset` caches two arrow files?
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
3
2021-04-09T03:49:19Z
2021-04-12T05:25:29Z
2021-04-12T05:25:29Z
null
Hi, I am using datasets to load large json file of 587G. I checked the cached folder and found that there are two arrow files created: * `cache-ed205e500a7dc44c.arrow` - 355G * `json-train.arrow` - 582G Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
null
completed
null
null
false
[ "Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms", "Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.", "This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. " ]
https://api.github.com/repos/huggingface/datasets/issues/3012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3012/comments
https://api.github.com/repos/huggingface/datasets/issues/3012/events
https://github.com/huggingface/datasets/pull/3012
1,014,958,931
PR_kwDODunzps4soRTu
3,012
Replace item with float in metrics
[]
closed
false
null
0
2021-10-04T09:45:28Z
2021-10-04T11:30:34Z
2021-10-04T11:30:33Z
null
As pointed out by @mariosasko in #3001, calling `float()` instad of `.item()` is faster. Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`. Related to #3001.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3012/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3012.diff", "html_url": "https://github.com/huggingface/datasets/pull/3012", "merged_at": "2021-10-04T11:30:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3012.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3012" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/845/comments
https://api.github.com/repos/huggingface/datasets/issues/845/events
https://github.com/huggingface/datasets/pull/845
741,841,350
MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy
845
amazon description fields as bullets
[]
closed
false
null
0
2020-11-12T18:50:41Z
2020-11-12T18:50:54Z
2020-11-12T18:50:54Z
null
One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/845/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/845.diff", "html_url": "https://github.com/huggingface/datasets/pull/845", "merged_at": "2020-11-12T18:50:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/845" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/771/comments
https://api.github.com/repos/huggingface/datasets/issues/771/events
https://github.com/huggingface/datasets/issues/771
731,482,213
MDU6SXNzdWU3MzE0ODIyMTM=
771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
[]
closed
false
null
3
2020-10-28T14:13:27Z
2023-02-13T20:16:39Z
2023-02-13T20:16:39Z
null
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/771/timeline
null
completed
null
null
false
[ "Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar", "Hi @lhoestq, I am facing a similar issue, it is annoying when lots of progress bars are printed. Is there a way to turn off this behavior? ", "You can disable the progress bars with\r\n```python\r\nimport datasets\r\n\r\ndatasets.disable_progress_bar()\r\n```" ]
https://api.github.com/repos/huggingface/datasets/issues/649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/649/comments
https://api.github.com/repos/huggingface/datasets/issues/649/events
https://github.com/huggingface/datasets/issues/649
704,838,415
MDU6SXNzdWU3MDQ4Mzg0MTU=
649
Inconsistent behavior in map
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2020-09-19T08:41:12Z
2020-09-21T16:13:05Z
2020-09-21T16:13:05Z
null
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) print(dataset[0]) # outputs {'field': 'a'} # Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital' dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}}) print(dataset[0]) # output is okay {'field': 'a', 'otherfield': {'capital': 'A'}} # Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield' print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0]) # printing out the first example after applying the map shows that the new key 'append_x' doesn't get added # it also messes up the value stored at 'capital' {'field': 'a', 'otherfield': {'capital': None}} # Instead, I try to do the same thing by using a different mapped fn print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0]) # this preserves the value under capital, but still no 'append_x' {'field': 'a', 'otherfield': {'capital': 'A'}} # Instead, I try to pass 'otherfield' to remove_columns print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0]) # this still doesn't fix the problem {'field': 'a', 'otherfield': {'capital': 'A'}} # Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset. # Recreate the dataset dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) # Now map the entire 'otherfield' dict directly, instead of incrementally as before print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0]) # This looks good! {'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}} ``` This might be a new issue, because I didn't see this behavior in the `nlp` library. Any help is appreciated!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/649/timeline
null
completed
null
null
false
[ "Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week" ]
https://api.github.com/repos/huggingface/datasets/issues/3548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3548/comments
https://api.github.com/repos/huggingface/datasets/issues/3548/events
https://github.com/huggingface/datasets/issues/3548
1,096,409,512
I_kwDODunzps5BWeGo
3,548
Specify the feature types of a dataset on the Hub without needing a dataset script
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
1
2022-01-07T15:17:06Z
2022-01-20T14:48:38Z
2022-01-20T14:48:38Z
null
**Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the column, so that when loading the dataset I directly get the features types I want. The feature types could read from the `dataset_infos.json` for example. **Describe alternatives you've considered** Create a dataset script to specify the features, but that seems complicated for a simple thing. cc @abidlabs
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3548/timeline
null
completed
null
null
false
[ "After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. " ]
https://api.github.com/repos/huggingface/datasets/issues/2190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
https://api.github.com/repos/huggingface/datasets/issues/2190/events
https://github.com/huggingface/datasets/issues/2190
853,181,564
MDU6SXNzdWU4NTMxODE1NjQ=
2,190
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
[]
closed
false
null
2
2021-04-08T07:53:43Z
2021-05-24T10:03:55Z
2021-05-24T10:03:55Z
null
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
null
completed
null
null
false
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. however, it didn't resolve the issue\r\n\r\n![image](https://user-images.githubusercontent.com/8571003/114169966-ec819400-993a-11eb-8a67-930f9a9b2290.png)\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/5058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5058/comments
https://api.github.com/repos/huggingface/datasets/issues/5058/events
https://github.com/huggingface/datasets/pull/5058
1,394,962,424
PR_kwDODunzps5AEVWn
5,058
Mark CI tests as xfail when 502 error
[]
closed
false
null
1
2022-10-03T15:53:55Z
2022-10-04T10:03:23Z
2022-10-04T10:01:23Z
null
To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error): - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files - https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431 ``` > raise HTTPError(http_error_msg, response=self) E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16648055339047.git/info/lfs/objects/batch ``` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files - https://github.com/huggingface/datasets/actions/runs/3145587033/jobs/5113074889 ``` > raise HTTPError(http_error_msg, response=self) E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16643866807322.git/info/lfs/objects/verify ``` Currently, we mark as xfail when 500 error: - #4845
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5058/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5058/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5058.diff", "html_url": "https://github.com/huggingface/datasets/pull/5058", "merged_at": "2022-10-04T10:01:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/5058.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5058" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/255/comments
https://api.github.com/repos/huggingface/datasets/issues/255/events
https://github.com/huggingface/datasets/pull/255
635,300,822
MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0
255
Add dataset/piaf
[]
closed
false
null
1
2020-06-09T10:16:01Z
2020-06-12T08:31:27Z
2020-06-12T08:31:27Z
null
Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/255/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/255.diff", "html_url": "https://github.com/huggingface/datasets/pull/255", "merged_at": "2020-06-12T08:31:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/255" }
true
[ "Very nice !" ]
https://api.github.com/repos/huggingface/datasets/issues/3731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3731/comments
https://api.github.com/repos/huggingface/datasets/issues/3731/events
https://github.com/huggingface/datasets/pull/3731
1,139,626,362
PR_kwDODunzps4y5-hi
3,731
Fix Multi-News dataset metadata and card
[]
closed
false
null
0
2022-02-16T07:14:57Z
2022-02-16T08:48:47Z
2022-02-16T08:48:47Z
null
Fix #3730.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3731/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3731.diff", "html_url": "https://github.com/huggingface/datasets/pull/3731", "merged_at": "2022-02-16T08:48:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3731" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5610/comments
https://api.github.com/repos/huggingface/datasets/issues/5610/events
https://github.com/huggingface/datasets/issues/5610
1,610,698,006
I_kwDODunzps5gAU0W
5,610
use datasets streaming mode in trainer ddp mode cause memory leak
[]
open
false
null
2
2023-03-06T05:26:49Z
2023-05-07T15:15:32Z
null
null
### Describe the bug use datasets streaming mode in trainer ddp mode cause memory leak ### Steps to reproduce the bug import os import time import datetime import sys import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler torch.manual_seed(42) from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM from transformers import AdamW, get_linear_schedule_with_warmup hf_model_path ='./Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) tokenizer.add_special_tokens({'pad_token': '<|pad|>'}) from datasets import load_dataset gpus=8 max_len = 576 batch_size_node = 17 save_step = 5000 gradient_accumulation = 2 dataloader_num = 4 max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus #max_step = -1 print("total_step:%d"%(max_step)) import datasets datasets.version dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True) print('load over') shuffled_dataset = dataset.shuffle(seed=42) print('shuffle over') def dataset_tokener(example,max_lenth=max_len): example['text'] = list(map(lambda x : x.strip()+'<|endoftext|>',example['text'] )) return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest") new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"]) print('map over') configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False) model = AutoModelForCausalLM.from_pretrained(hf_model_path) model.resize_token_embeddings(len(tokenizer)) seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) from transformers import Trainer,TrainingArguments import os print("strat train") training_args = TrainingArguments(output_dir="./test_trainer", num_train_epochs=1.0, report_to="none", do_train=True, dataloader_num_workers=dataloader_num, local_rank=int(os.environ.get('LOCAL_RANK', -1)), overwrite_output_dir=True, logging_strategy='steps', logging_first_step=True, logging_dir="./logs", log_on_each_node=False, per_device_train_batch_size=batch_size_node, warmup_ratio=0.03, save_steps=save_step, save_total_limit=5, gradient_accumulation_steps=gradient_accumulation, max_steps=max_step, disable_tqdm=False, data_seed=42 ) trainer = Trainer( model=model, args=training_args, train_dataset=new_new_dataset, eval_dataset=None, tokenizer=tokenizer, data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False), #compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None, #preprocess_logits_for_metrics=preprocess_logits_for_metrics #if training_args.do_eval and not is_torch_tpu_available() #else None, ) trainer.train(resume_from_checkpoint=True) ### Expected behavior use the train code uppper my dataset ./gpt_data_v1 have 1000 files, each file size is 120mb start cmd is : python -m torch.distributed.launch --nproc_per_node=8 my_train.py here is result: ![image](https://user-images.githubusercontent.com/15223544/223026042-1a81489f-897a-43e4-8339-65a202fd5dc7.png) here is memory usage monitor in 12 hours ![image](https://user-images.githubusercontent.com/15223544/223027076-14e32e8b-9608-4282-9a80-f15d0277026d.png) every dataloader work allocate over 24gb cpu memory according to memory usage monitor in 12 hours,sometime small memory releases, but total memory usage is increase. i think datasets streaming mode should not used so much memery,so maybe somewhere has memory leak. ### Environment info pytorch 1.11.0 py 3.8 cuda 11.3 transformers 4.26.1 datasets 2.9.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5610/timeline
null
null
null
null
false
[ "Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n", "found an article described a problem, may be helpful for somebody:\r\nhttps://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/\r\nI confirm, it`s not memory leak, after some time memory growing has stopped" ]
https://api.github.com/repos/huggingface/datasets/issues/2551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2551/comments
https://api.github.com/repos/huggingface/datasets/issues/2551/events
https://github.com/huggingface/datasets/pull/2551
930,967,978
MDExOlB1bGxSZXF1ZXN0Njc4NTQzMjg1
2,551
Fix FileSystems documentation
[]
closed
false
null
0
2021-06-27T16:18:42Z
2021-06-28T13:09:55Z
2021-06-28T13:09:54Z
null
### What this fixes: This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)). ### What were the issues? When I originally tried implementing the code examples I faced several bugs attributed to: - out of date [botocore](https://github.com/boto/botocore) call signatures - capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place) - call signature errors for the `S3FileSystem` class constructor (uses parameter `sessions` instead of `session` in some places) (see [`s3fs`](https://s3fs.readthedocs.io/en/latest/api.html#s3fs.core.S3FileSystem) for where this constructor signature is defined) ### Testing/reviewing notes Instructions for generating the documentation locally: [here](https://github.com/huggingface/datasets/tree/master/docs#generating-the-documentation).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2551/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2551.diff", "html_url": "https://github.com/huggingface/datasets/pull/2551", "merged_at": "2021-06-28T13:09:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2551" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4508/comments
https://api.github.com/repos/huggingface/datasets/issues/4508/events
https://github.com/huggingface/datasets/issues/4508
1,272,718,921
I_kwDODunzps5L3CZJ
4,508
cast_storage method from datasets.features
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-06-15T20:47:22Z
2022-06-16T13:54:07Z
2022-06-16T13:54:07Z
null
## Describe the bug A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when i run locally on an old version of datasets. ## Steps to reproduce the bug Steps are: - load whatever datset - write a preprocessing function such as "tokenize_and_align_labels" written in https://huggingface.co/docs/transformers/tasks/token_classification - map the function on dataset and get "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features # Sample code to reproduce the bug def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, max_length=38,padding="max_length") labels = [] for i, label in enumerate(examples[f"labels"]): word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. previous_word_idx = None label_ids = [] for word_idx in word_ids: # Set the special tokens to -100. if word_idx is None: label_ids.append(-100) elif word_idx != previous_word_idx: # Only label the first token of a given word. label_ids.append(label[word_idx]) else: label_ids.append(-100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") dt = dataset.map(tokenize_and_align_labels, batched=True) ## Expected results New dataset objects should load and do on older versions. ## Actual results "ValueError: Class label -100 less than -1" from cast_storage method from datasets.features ## Environment info everything works fine on older installations of datasets/transformers Issue arises when installing datasets on google collab under python3.7 I can't manage to find the exact output you're requirering but version printed is datasets-2.3.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4508/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4508/timeline
null
completed
null
null
false
[ "Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ", "I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?" ]
https://api.github.com/repos/huggingface/datasets/issues/300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/300/comments
https://api.github.com/repos/huggingface/datasets/issues/300/events
https://github.com/huggingface/datasets/pull/300
643,688,304
MDExOlB1bGxSZXF1ZXN0NDM4NDQ4Mjk1
300
Fix bertscore references
[]
closed
false
null
0
2020-06-23T09:38:59Z
2020-06-23T14:47:38Z
2020-06-23T14:47:37Z
null
I added some type checking for metrics. There was an issue where a metric could interpret a string as a list. A `ValueError` is raised if a string is given instead of a list. Moreover, I added support for both strings and lists of strings for `references` in `bertscore`, as is the case in the original code. Both ways work: ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, [lg]) score = scorer.compute(lang="en") ``` ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` This should fix #295 and #238
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/300/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/300/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/300.diff", "html_url": "https://github.com/huggingface/datasets/pull/300", "merged_at": "2020-06-23T14:47:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/300.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/300" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5262/comments
https://api.github.com/repos/huggingface/datasets/issues/5262/events
https://github.com/huggingface/datasets/issues/5262
1,455,171,100
I_kwDODunzps5WvCYc
5,262
AttributeError: 'Value' object has no attribute 'names'
[]
closed
false
null
2
2022-11-18T13:58:42Z
2022-11-22T10:09:24Z
2022-11-22T10:09:23Z
null
Hello, I'm trying to build a model for custom token classification. I already followed the token classification course on Hugging Face while adapting the code to my work, and this message occurs: 'Value' object has no attribute 'names'. Here's my code: `raw_datasets` generates DatasetDict({ train: Dataset({ features: ['isDisf', 'pos', 'tokens', 'id'], num_rows: 14 }) }) `raw_datasets["train"][3]["isDisf"]` generates ['B_RM', 'I_RM', 'I_RM', 'B_RP', 'I_RP', 'O', 'O'] `dis_feature = raw_datasets["train"].features["isDisf"] dis_feature` generates Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) and `label_names = dis_feature.feature.names label_names` generates AttributeError Traceback (most recent call last) [<ipython-input-28-972fd54a869a>](https://localhost:8080/#) in <module> ----> 1 label_names = dis_feature.feature.names 2 label_names AttributeError: 'Value' object has no attribute 'names' Thank you for your help
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5262/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5262/timeline
null
completed
null
null
false
[ "Hi ! It looks like your \"isDif\" column is a Sequence of Value(\"string\"), not a Sequence of ClassLabel.\r\n\r\nYou can convert your Value(\"string\") feature type to a ClassLabel feature type this way:\r\n```python\r\nfrom datasets import ClassLabel, Sequence\r\n\r\n# provide the label_names yourself\r\nlabel_names = [...]\r\n# OR get them from the dataset\r\nlabel_names = sorted(set(label for labels in raw_datasets[\"train\"][\"isDif\"] for label in labels))\r\n\r\n# Cast to ClassLabel\r\nraw_datasets = raw_datasets.cast_column(\"isDif\", Sequence(ClassLabel(names=label_names)))\r\n```\r\n", "thank you \r\nit works 💯 " ]
https://api.github.com/repos/huggingface/datasets/issues/21
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/21/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/21/comments
https://api.github.com/repos/huggingface/datasets/issues/21/events
https://github.com/huggingface/datasets/pull/21
607,914,185
MDExOlB1bGxSZXF1ZXN0NDA5Nzk2MTM4
21
Cleanup Features - Updating convert command - Fix Download manager
[]
closed
false
null
2
2020-04-27T23:16:55Z
2020-05-01T09:29:47Z
2020-05-01T09:29:46Z
null
This PR makes a number of changes: # Updating `Features` Features are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk. We don't really need this because (1) it hides too much from the user and (2) our datatypes can be directly mapped to Arrow tables on disk, so we usually don't need to change the format before/after serialization. This PR extracts and refactors these features into a single `features.py` file. It still keeps a number of feature classes for easy compatibility with tfds, namely the `Sequence`, `Tensor`, `ClassLabel` and `Translation` features. Some more complex features involving on-the-fly pre-processing during serialization are kept: - `ClassLabel`, which is able to convert from label strings to integers, - `Translation`, which does some checks on the languages. # Updating the `convert` command We do a few updates here - following the simplification of the `features` (cf. above), conversions are updated - we also make it simpler to convert a single file - some code needs to be fixed manually after conversion (e.g. to remove some encoding processing in former tfds `Text` features). We highlight this code with a "git merge conflict" style syntax for easy manual fixing. # Fix download manager iterator You kept me up quite late on Tuesday night with this `os.scandir` change @lhoestq ;-)
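As a small illustration of the label-string/integer conversion that the kept `ClassLabel` feature provides (a minimal sketch, not taken from the PR diff):

```python
from datasets import ClassLabel

# A ClassLabel maps label strings to integer ids and back
label = ClassLabel(names=["negative", "positive"])
label.str2int("positive")  # -> 1
label.int2str(0)           # -> "negative"
```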
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/21/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/21/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/21.diff", "html_url": "https://github.com/huggingface/datasets/pull/21", "merged_at": "2020-05-01T09:29:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/21.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/21" }
true
[ "For conflicts, I think the mention hint \"This should be modified because it mentions ...\" is missing.", "Looks great!" ]
https://api.github.com/repos/huggingface/datasets/issues/2590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2590/comments
https://api.github.com/repos/huggingface/datasets/issues/2590/events
https://github.com/huggingface/datasets/pull/2590
936,954,348
MDExOlB1bGxSZXF1ZXN0NjgzNTg1MDg2
2,590
Add language tags
[]
closed
false
null
0
2021-07-05T10:39:57Z
2021-07-05T10:58:48Z
2021-07-05T10:58:48Z
null
This PR adds some missing language tags needed for ASR datasets in #2565
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2590/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2590/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2590.diff", "html_url": "https://github.com/huggingface/datasets/pull/2590", "merged_at": "2021-07-05T10:58:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2590.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2590" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3116/comments
https://api.github.com/repos/huggingface/datasets/issues/3116/events
https://github.com/huggingface/datasets/pull/3116
1,031,270,611
PR_kwDODunzps4tbr6g
3,116
Update doc links to point to new docs
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
0
2021-10-20T11:00:47Z
2021-10-22T08:29:28Z
2021-10-22T08:26:45Z
null
This PR: * updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template) * fixes some broken links in the `.rst` files (fixed with the `make linkcheck` tool)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3116/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3116.diff", "html_url": "https://github.com/huggingface/datasets/pull/3116", "merged_at": "2021-10-22T08:26:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/3116.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3116" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2670/comments
https://api.github.com/repos/huggingface/datasets/issues/2670/events
https://github.com/huggingface/datasets/issues/2670
947,120,709
MDU6SXNzdWU5NDcxMjA3MDk=
2,670
Using sharding to parallelize indexing
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2021-07-18T21:26:26Z
2021-10-07T13:33:25Z
null
null
**Is your feature request related to a problem? Please describe.** Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide). **Describe the solution you'd like** When working on dataset shards, if an index already exists, its mapping should be checked and, if compatible, the indexing process should continue with the shard data. Additionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards were created) to allow using the index for later filtering on the whole dataset. **Describe alternatives you've considered** Each dataset shard could create an independent partial index. Then, at the whole-dataset level, all the indices would be referenced in the `_indexes` dict and used when querying through `get_nearest_examples()`. The drawback is that the scores would be computed independently on the partial indices, leading to inconsistent values for most scoring methods based on corpus-level statistics (tf-idf, BM25). **Additional context** The objective is to parallelize index creation to speed up the process (i.e. putting more load on the ES server, which is fine for it to handle) while still enabling search on the whole dataset afterwards. A sketch of the idea is shown below.
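A hedged sketch of the per-shard indexing workflow this request wants to make safe, built only from the existing `Dataset.shard` and `add_elasticsearch_index` APIs (the dataset, column, host, port and index names are placeholders; today the concurrent index creations can collide, which is exactly what the proposal above would fix, and the `_indexes` hand-back is not implemented here):

```python
from datasets import load_dataset

dataset = load_dataset("squad", split="train")

num_shards = 4
for shard_id in range(num_shards):  # in practice: one process per shard
    shard = dataset.shard(num_shards=num_shards, index=shard_id)
    # Every shard writes into the same Elasticsearch index
    shard.add_elasticsearch_index(
        column="context",
        host="localhost",              # placeholder
        port=9200,                     # placeholder
        es_index_name="squad_context", # shared ES index name
    )
```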
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2670/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1095/comments
https://api.github.com/repos/huggingface/datasets/issues/1095/events
https://github.com/huggingface/datasets/pull/1095
756,934,964
MDExOlB1bGxSZXF1ZXN0NTMyMzk0Nzgy
1,095
Add TupleInf Open IE Dataset
[]
closed
false
null
2
2020-12-04T09:08:07Z
2020-12-04T15:40:54Z
2020-12-04T15:40:54Z
null
For more information: https://allenai.org/data/tuple-ie
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1095/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1095/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1095.diff", "html_url": "https://github.com/huggingface/datasets/pull/1095", "merged_at": "2020-12-04T15:40:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/1095.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1095" }
true
[ "Errors are in the CI are not related to this PR (RemoteDatasetError)\r\nthe CI is fixed on master so it's fine ", "@lhoestq Added the dataset card. Please let me know if more information needs to be added." ]
https://api.github.com/repos/huggingface/datasets/issues/5905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5905/comments
https://api.github.com/repos/huggingface/datasets/issues/5905/events
https://github.com/huggingface/datasets/issues/5905
1,727,541,392
I_kwDODunzps5m-DCQ
5,905
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
1
2023-05-26T12:33:02Z
2023-06-15T13:34:18Z
null
null
### Feature request I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset. ### Motivation I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on disk, and also quite computationally intensive audio processing to do. As a result I want to load data from my remote storage when it is needed and perform all processing on the fly. I am currently using the iterable dataset feature of _datasets_. It does everything I need, with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable to the right step. In my case it takes almost as long as training for the same steps, which makes resuming training from a checkpoint useless in practice. I understand that the nature of iterators probably makes it nearly impossible to quickly resume training. I thought about a possible solution nonetheless: I could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows [skipping steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset. Is it possible to lazily load samples of a mapped dataset? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script); maybe something can be done there. If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible? ### Your contribution I could provide a PR to allow lazy loading of mapped datasets, or the conversion of a mapped _PyTorch_ dataset into a _datasets_ dataset, if you think it is a useful new feature.
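A minimal sketch of the mapped-dataset + `set_transform` approach being asked about: the dataset stores only lightweight references (here, audio paths), and the expensive processing runs lazily when rows are accessed. `load_and_featurize` is a hypothetical stand-in for the real audio pipeline:

```python
from datasets import Dataset

# Mapped dataset that stores only references, not decoded audio
ds = Dataset.from_dict({
    "audio_path": ["clip_0.wav", "clip_1.wav"],  # placeholder paths
    "text": ["hello", "world"],
})

def preprocess(batch):
    # hypothetical helper doing the download/decoding/feature extraction on the fly
    batch["input_features"] = [load_and_featurize(path) for path in batch["audio_path"]]
    return batch

# Applied lazily: rows are only processed when they are accessed (e.g. by the dataloader)
ds.set_transform(preprocess)
```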
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5905/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5905/timeline
null
null
null
null
false
[ "We plan to improve this eventually (see https://github.com/huggingface/datasets/issues/5454 and https://github.com/huggingface/datasets/issues/5380).\r\n\r\n> Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe something can be done there.\r\nIf not, I could do it using a plain Pytorch dataset. Then I would need to convert it to a datasets' dataset to get all the features of datasets. Is it something possible ?\r\n\r\nYes, by creating a mapped dataset that stores audio URLs. Indexing a dataset in such format only downloads and decodes the bytes of the accessed samples (without storing them on disk).\r\n\r\nYou can do the following to create this dataset:\r\n```python\r\n\r\ndef gen():\r\n # Generator that yields (audio URL, text) pairs as dict\r\n ...\r\n yield {\"audio\": \"audio_url\", \"text\": \"some text\"}\r\n\r\nfeatures = Features({\"audio\": datasets.Audio(), \"text\": datasets.Value(\"string\")})\r\nds = Dataset.from_generator(gen, features=features)\r\nds[2:5] # downloads and decodes the samples each time they are accessed\r\n```" ]
https://api.github.com/repos/huggingface/datasets/issues/4647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4647/comments
https://api.github.com/repos/huggingface/datasets/issues/4647/events
https://github.com/huggingface/datasets/issues/4647
1,296,311,270
I_kwDODunzps5NRCPm
4,647
Add Reddit dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
0
2022-07-06T19:49:18Z
2022-07-06T19:49:18Z
null
null
## Adding a Dataset - **Name:** *Reddit comments (2015-2018)* - **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.* - **Paper:** *https://arxiv.org/abs/1904.06472* - **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit* - **Motivation:** *Dataset for training and evaluating models of conversational response*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4647/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4879/comments
https://api.github.com/repos/huggingface/datasets/issues/4879/events
https://github.com/huggingface/datasets/pull/4879
1,348,346,407
PR_kwDODunzps49qbOl
4,879
Fix Citation Information section in dataset cards
[]
closed
false
null
1
2022-08-23T18:06:43Z
2022-09-27T14:04:45Z
2022-08-24T04:09:07Z
null
Fix Citation Information section in dataset cards: - cc_news - conllpp - datacommons_factcheck - gnad10 - id_panl_bppt - jigsaw_toxicity_pred - kinnews_kirnews - kor_sarcasm - makhzan - reasoning_bg - ro_sts - ro_sts_parallel - sanskrit_classic - telugu_news - thaiqa_squad - wiki_movies This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4879/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4879.diff", "html_url": "https://github.com/huggingface/datasets/pull/4879", "merged_at": "2022-08-24T04:09:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4879.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4879" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4879). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/5168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5168/comments
https://api.github.com/repos/huggingface/datasets/issues/5168/events
https://github.com/huggingface/datasets/pull/5168
1,424,368,572
PR_kwDODunzps5BmYnq
5,168
Fix CI require beam
[]
closed
false
null
2
2022-10-26T16:49:33Z
2022-10-27T09:25:19Z
2022-10-27T09:23:26Z
null
This PR: - Fixes the CI `require_beam`: before it was requiring PyTorch instead ```python def require_beam(test_case): if not config.TORCH_AVAILABLE: test_case = unittest.skip("test requires PyTorch")(test_case) return test_case ``` - Fixes a missing `require_beam` in `test_beam_based_builder_download_and_prepare_as_parquet` - Refactors `require_beam` to use `pytest` (`skipif`) instead
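For reference, a hedged sketch of what the pytest-based decorator could look like after this change (it assumes `datasets.config` exposes a `BEAM_AVAILABLE` flag analogous to `TORCH_AVAILABLE`; the actual implementation lives in this repo's test utilities):

```python
import pytest
from datasets import config

def require_beam(test_case):
    # Skip the decorated test when Apache Beam is not installed
    return pytest.mark.skipif(not config.BEAM_AVAILABLE, reason="test requires Apache Beam")(test_case)
```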
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5168/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5168/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5168.diff", "html_url": "https://github.com/huggingface/datasets/pull/5168", "merged_at": "2022-10-27T09:23:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/5168.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5168" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm merging this PR because it is quite a trivial fix and this is required by:\r\n- #5166" ]
https://api.github.com/repos/huggingface/datasets/issues/1819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/events
https://github.com/huggingface/datasets/pull/1819
801,448,670
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
[]
closed
false
null
0
2021-02-04T16:36:46Z
2021-02-04T16:52:27Z
2021-02-04T16:52:26Z
null
Fixed documentation spelling errors: `S3Fileystem` → `S3FileSystem`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "html_url": "https://github.com/huggingface/datasets/pull/1819", "merged_at": "2021-02-04T16:52:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
https://api.github.com/repos/huggingface/datasets/issues/5555/events
https://github.com/huggingface/datasets/issues/5555
1,592,469,938
I_kwDODunzps5e6ymy
5,555
`.shuffle` throwing error `ValueError: Protocol not known: parent`
[]
open
false
null
4
2023-02-20T21:33:45Z
2023-02-27T09:23:34Z
null
null
### Describe the bug ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [16], line 1 ----> 1 train_dataset = train_dataset.shuffle() File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint) 3610 return self._new_dataset_with_indices( 3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name 3612 ) 3614 permutation = generator.permutation(len(self)) -> 3616 return self.select( 3617 indices=permutation, 3618 keep_in_memory=keep_in_memory, 3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None, 3620 writer_batch_size=writer_batch_size, 3621 new_fingerprint=new_fingerprint, 3622 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint) 3265 # If not contiguous, we need to create a new indices mapping -> 3266 return self._select_with_indices_mapping( 3267 indices, 3268 keep_in_memory=keep_in_memory, 3269 indices_cache_file_name=indices_cache_file_name, 3270 writer_batch_size=writer_batch_size, 3271 new_fingerprint=new_fingerprint, 3272 ) 
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs) 544 self_format = { 545 "type": self._format_type, 546 "format_kwargs": self._format_kwargs, 547 "columns": self._format_columns, 548 "output_all_columns": self._output_all_columns, 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}") 3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False) -> 3389 writer = ArrowWriter( 3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices" 3391 ) 3393 indices = indices if isinstance(indices, list) else list(indices) 3395 size = len(self) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options) 312 self._disable_nullable = disable_nullable 314 if stream is None: --> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options) 316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0] 317 self._path = ( 318 fs_token_paths[2][0] 319 if not is_remote_filesystem(self._fs) 320 else self._fs.unstrip_protocol(fs_token_paths[2][0]) 321 ) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand) 591 else: 592 urlpath = stringify_path(urlpath) --> 593 chain = _un_chain(urlpath, storage_options or {}) 594 if len(chain) > 1: 595 inkwargs = {} File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs) 328 for bit in reversed(bits): 329 protocol = split_protocol(bit)[0] or "file" --> 330 cls = get_filesystem_class(protocol) 331 extra_kwargs = cls._get_kwargs_from_urls(bit) 332 kws = kwargs.get(protocol, {}) File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol) 238 if protocol not in registry: 239 if protocol not in known_implementations: --> 240 raise ValueError("Protocol not known: %s" % protocol) 241 bit = known_implementations[protocol] 242 try: ValueError: Protocol not known: parent ``` This is what the `train_dataset` object looks like ``` Dataset({ features: ['label', 'input_ids', 'attention_mask'], num_rows: 364166 }) ``` ### Steps to reproduce the bug The `train_dataset` obj is created by concatenating two 
datasets. Then `shuffle` is called, but it throws the error above. ### Expected behavior `shuffle` should shuffle the dataset properly. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.13 - PyArrow version: 10.0.0 - Pandas version: 1.4.4
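Two hedged workarounds, assuming (as diagnosed in the comments below) that the failure comes from a `::` in the cached dataset path being interpreted by fsspec as a protocol chain; both parameter names appear in the `Dataset.shuffle` signature shown in the traceback above:

```python
# 1) Keep the shuffle indices in memory so no cache file path is involved
train_dataset = train_dataset.shuffle(seed=42, keep_in_memory=True)

# 2) Or write the indices mapping to an explicit path that contains no "::"
train_dataset = train_dataset.shuffle(seed=42, indices_cache_file_name="/tmp/shuffle_indices.arrow")  # placeholder path
```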
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
null
null
null
null
false
[ "Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```", "```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese are the actual paths where `.hf` files are stored. ", "I'm not aware of any `.hf` file ? What are you referring to ?\r\n\r\nAlso the error says \"Protocol unknown: parent\". Is there a chance you may have ended up with a path that contains this string `parent://` ?", "I figured out why the issue was occuring but don't know the long-term fix.\r\nThe dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.\r\nQuick fix is to not use colons in filename. But if this is expected behaviour, this should be clearly stated in the documentation.\r\nThanks for help @lhoestq " ]
https://api.github.com/repos/huggingface/datasets/issues/2947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2947/comments
https://api.github.com/repos/huggingface/datasets/issues/2947/events
https://github.com/huggingface/datasets/pull/2947
1,000,798,338
PR_kwDODunzps4r9GIP
2,947
Don't use old, incompatible cache for the new `filter`
[]
closed
false
null
0
2021-09-20T10:18:59Z
2021-09-20T16:25:09Z
2021-09-20T13:43:02Z
null
#2836 changed `Dataset.filter`, and the resulting data stored in the cache are different from and incompatible with those of the previous `filter` implementation. However, the caching mechanism wasn't able to differentiate between the old and the new implementation of `filter` (only the method name was taken into account). This is an issue because anyone who updates `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result. To fix this I added the notion of versioning for dataset transforms to the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0. This way the new `filter` outputs are now considered different from the old ones from the caching point of view. This should fix #2943 cc @anton-l
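For anyone who hits the stale-cache error before picking up this fix, a hedged user-side workaround is to bypass the cache entirely; `load_from_cache_file` is an existing `Dataset.filter` argument (the filtering condition and column name below are placeholders):

```python
# Force `filter` to recompute instead of loading the old, incompatible cached result
filtered = dataset.filter(lambda example: example["label"] == 1, load_from_cache_file=False)
```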
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2947/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2947.diff", "html_url": "https://github.com/huggingface/datasets/pull/2947", "merged_at": "2021-09-20T13:43:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2947" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4177/comments
https://api.github.com/repos/huggingface/datasets/issues/4177/events
https://github.com/huggingface/datasets/pull/4177
1,207,535,920
PR_kwDODunzps42Yxca
4,177
Adding missing subsets to the `SemEval-2018 Task 1` dataset
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
open
false
null
1
2022-04-18T22:59:30Z
2022-10-05T10:38:16Z
null
null
This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtasks 1 and 2), each of which comprises 12 additional data subsets: for each language (En, Es, Ar) there are 4 datasets, broken down by emotion (anger, fear, joy, sadness). ## Remaining questions I wasn't able to find any documentation about how one should make PRs to modify datasets. Because of that, I just did my best to integrate the new data into the code and tested locally that this worked. I'm sorry if I'm not respecting your contributing guidelines – if they are documented somewhere, I'd appreciate it if you could send a pointer! I'm not sure how `dataset_infos.json` and `dummy` should be updated. My understanding is that they were automatically generated at the time of the original dataset creation?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4177/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4177/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4177.diff", "html_url": "https://github.com/huggingface/datasets/pull/4177", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4177.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4177" }
true
[ "Datasets are not tracked in this repository anymore. You should move this PR to the [discussions page of this dataset](https://huggingface.co/datasets/sem_eval_2018_task_1/discussions)" ]
https://api.github.com/repos/huggingface/datasets/issues/3085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3085/comments
https://api.github.com/repos/huggingface/datasets/issues/3085/events
https://github.com/huggingface/datasets/pull/3085
1,026,467,384
PR_kwDODunzps4tNFza
3,085
Fixes to `to_tf_dataset`
[]
closed
false
null
2
2021-10-14T14:25:56Z
2021-10-21T15:05:29Z
2021-10-21T15:05:28Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3085/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3085.diff", "html_url": "https://github.com/huggingface/datasets/pull/3085", "merged_at": "2021-10-21T15:05:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3085.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3085" }
true
[ "Hi ! Can you give some details about why you need these changes ?", "Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I can fix that issue, because I couldn't see an obvious fix for the Numpy formatter. If you can see a quick way to fix that, though, that might be even better!" ]
https://api.github.com/repos/huggingface/datasets/issues/3902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3902/comments
https://api.github.com/repos/huggingface/datasets/issues/3902/events
https://github.com/huggingface/datasets/issues/3902
1,167,403,377
I_kwDODunzps5FlSlx
3,902
Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2022-03-12T21:22:03Z
2023-02-09T14:53:49Z
2022-03-22T07:10:41Z
null
## Describe the bug Unable to import datasets ## Steps to reproduce the bug ```python from datasets import Dataset, DatasetDict ``` ## Expected results The import works without errors ## Actual results ``` AttributeError Traceback (most recent call last) <ipython-input-37-c8cfcbe62127> in <module> 11 # from tqdm import tqdm 12 # import torch ---> 13 from datasets import Dataset 14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling 15 # from sentence_transformers import SentenceTransformer ~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module> 46 ) 47 ---> 48 import fsspec 49 import numpy as np 50 import pandas as pd ~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module> 10 from . import _version, caching 11 from .callbacks import Callback ---> 12 from .core import get_fs_token_paths, open, open_files, open_local 13 from .exceptions import FSTimeoutError 14 from .mapping import FSMap, get_mapper ~/.local/lib/python3.8/site-packages/fsspec/core.py in <module> 16 caches, 17 ) ---> 18 from .compression import compr 19 from .registry import filesystem, get_filesystem_class 20 from .utils import ( ~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module> 68 69 ---> 70 register_compression("zip", unzip, "zip") 71 register_compression("bz2", BZ2File, "bz2") 72 ~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force) 44 45 for ext in extensions: ---> 46 if ext in fsspec.utils.compressions and not force: 47 raise ValueError( 48 "Duplicate compression file extension: %s (%s)" % (ext, name) AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: Jupyter notebook - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3902/timeline
null
completed
null
null
false
[ "Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`", "Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"", "I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. ", "from lightgbm import LGBMModel,LGBMClassifier, plot_importance\r\nafter importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me", "@deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.\r\n\r\nIf you are using `lightgbm`, you should report the issue to their repository instead.\r\n\r\nAnyway, we have proposed a possible fix just in a comment above: to update fsspec.\r\nhttps://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824" ]
https://api.github.com/repos/huggingface/datasets/issues/5017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5017/comments
https://api.github.com/repos/huggingface/datasets/issues/5017/events
https://github.com/huggingface/datasets/issues/5017
1,384,022,463
I_kwDODunzps5SfoG_
5,017
xcsr: X-CSQA simply uses english for all alleged non-english data
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
1
2022-09-23T16:11:54Z
2022-09-26T10:57:31Z
2022-09-26T10:57:31Z
null
## Describe the bug All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description: > we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR ## Steps to reproduce the bug ```python # let's say you want to load the french X-CSQA subcollection french = datasets.load_dataset("xcsr", "X-CSQA-fr") # for good measure, let's load english too english = datasets.load_dataset("xcsr", "X-CSQA-en") # let's inspect "".join(english['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' "".join(french['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' # what? Why are they both in english? # I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset # maybe i need to look better? french['test'].unique('lang') # output: ['en'] # no, it's all english ``` ## Expected results Accessing a subcollection in language X should return a subcollection containg samples in language X ## Actual results Accessing a subcollection in language X returns a subcollection containing samples in English. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5017/timeline
null
completed
null
null
false
[ "Thanks for reporting, @thesofakillers. Good catch. We are fixing this. " ]
https://api.github.com/repos/huggingface/datasets/issues/4581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4581/comments
https://api.github.com/repos/huggingface/datasets/issues/4581/events
https://github.com/huggingface/datasets/issues/4581
1,286,362,907
I_kwDODunzps5MrFcb
4,581
Dataset Viewer issue for pn_summary
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
3
2022-06-27T20:56:12Z
2022-06-28T14:42:03Z
2022-06-28T14:42:03Z
null
### Link https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation ### Description Getting an index error on the `validation` and `test` splits: ``` Server error Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4581/timeline
null
completed
null
null
false
[ "linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?", "Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n", "Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else." ]
https://api.github.com/repos/huggingface/datasets/issues/6009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6009/comments
https://api.github.com/repos/huggingface/datasets/issues/6009/events
https://github.com/huggingface/datasets/pull/6009
1,792,059,808
PR_kwDODunzps5U1mus
6,009
Fix cast for dictionaries with no keys
[]
closed
false
null
3
2023-07-06T18:48:14Z
2023-07-07T14:13:00Z
2023-07-07T14:01:13Z
null
Fix #5677
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6009/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6009.diff", "html_url": "https://github.com/huggingface/datasets/pull/6009", "merged_at": "2023-07-07T14:01:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6009.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6009" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006961 / 0.011353 (-0.004392) | 0.004390 / 0.011008 (-0.006618) | 0.103249 / 0.038508 (0.064741) | 0.048084 / 0.023109 (0.024975) | 0.351213 / 0.275898 (0.075315) | 0.416918 / 0.323480 (0.093439) | 0.005539 / 0.007986 (-0.002446) | 0.003555 / 0.004328 (-0.000774) | 0.079306 / 0.004250 (0.075055) | 0.066937 / 0.037052 (0.029884) | 0.382601 / 0.258489 (0.124112) | 0.406125 / 0.293841 (0.112284) | 0.032269 / 0.128546 (-0.096277) | 0.009133 / 0.075646 (-0.066514) | 0.354449 / 0.419271 (-0.064822) | 0.068978 / 0.043533 (0.025445) | 0.352314 / 0.255139 (0.097175) | 0.390398 / 0.283200 (0.107199) | 0.025640 / 0.141683 (-0.116043) | 1.553865 / 1.452155 (0.101710) | 1.601292 / 1.492716 (0.108576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208310 / 0.018006 (0.190303) | 0.440076 / 0.000490 (0.439586) | 0.000363 / 0.000200 (0.000163) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029173 / 0.037411 (-0.008238) | 0.111323 / 0.014526 (0.096797) | 0.123001 / 0.176557 (-0.053556) | 0.180180 / 0.737135 (-0.556955) | 0.125804 / 0.296338 (-0.170534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419919 / 0.215209 (0.204710) | 4.194515 / 2.077655 (2.116860) | 
1.881234 / 1.504120 (0.377114) | 1.672914 / 1.541195 (0.131720) | 1.723102 / 1.468490 (0.254612) | 0.543584 / 4.584777 (-4.041193) | 3.822477 / 3.745712 (0.076765) | 1.837946 / 5.269862 (-3.431915) | 1.094975 / 4.565676 (-3.470701) | 0.066788 / 0.424275 (-0.357487) | 0.011689 / 0.007607 (0.004082) | 0.520983 / 0.226044 (0.294938) | 5.209245 / 2.268929 (2.940316) | 2.392916 / 55.444624 (-53.051708) | 2.060042 / 6.876477 (-4.816434) | 2.162291 / 2.142072 (0.020219) | 0.668472 / 4.805227 (-4.136755) | 0.144373 / 6.500664 (-6.356291) | 0.066152 / 0.075469 (-0.009318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251256 / 1.841788 (-0.590532) | 15.161338 / 8.074308 (7.087030) | 14.416133 / 10.191392 (4.224741) | 0.166145 / 0.680424 (-0.514279) | 0.018168 / 0.534201 (-0.516033) | 0.433364 / 0.579283 (-0.145919) | 0.417484 / 0.434364 (-0.016880) | 0.502543 / 0.540337 (-0.037794) | 0.602904 / 1.386936 (-0.784032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006946 / 0.011353 (-0.004407) | 0.004248 / 0.011008 (-0.006761) | 0.079707 / 0.038508 (0.041199) | 0.046226 / 0.023109 (0.023117) | 0.375864 / 0.275898 (0.099966) | 0.430740 / 0.323480 (0.107260) | 0.006222 / 0.007986 (-0.001764) | 0.003474 / 0.004328 (-0.000854) | 0.079622 / 0.004250 (0.075372) | 0.066666 / 0.037052 (0.029613) | 0.379487 / 0.258489 (0.120998) | 0.423002 / 0.293841 (0.129161) | 0.032836 / 0.128546 (-0.095710) | 0.008976 / 0.075646 (-0.066670) | 0.086578 / 0.419271 (-0.332693) | 0.055651 / 0.043533 (0.012118) | 0.360787 / 0.255139 (0.105648) | 0.384265 / 0.283200 (0.101065) | 0.025350 / 0.141683 (-0.116333) | 1.547880 / 1.452155 (0.095725) | 1.605850 / 1.492716 (0.113134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184227 / 0.018006 (0.166220) | 0.442071 / 0.000490 (0.441582) | 0.002887 / 0.000200 (0.002687) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031923 / 0.037411 (-0.005488) | 0.119093 / 0.014526 (0.104568) | 0.128704 / 0.176557 (-0.047853) | 0.187065 / 0.737135 (-0.550070) | 0.134135 / 0.296338 (-0.162204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455731 / 0.215209 (0.240522) | 4.562911 / 2.077655 (2.485256) | 2.247431 / 1.504120 (0.743311) | 2.053346 / 1.541195 (0.512151) | 2.049611 / 1.468490 (0.581121) | 0.546069 / 4.584777 (-4.038708) | 3.821852 / 3.745712 (0.076140) | 3.358497 / 5.269862 (-1.911364) | 1.667697 / 4.565676 (-2.897979) | 0.067968 / 0.424275 (-0.356307) | 0.012344 / 0.007607 (0.004737) | 0.550864 / 0.226044 (0.324820) | 5.496867 / 2.268929 (3.227939) | 2.680031 / 55.444624 (-52.764594) | 2.328673 / 6.876477 (-4.547804) | 2.436754 / 2.142072 (0.294682) | 0.681195 / 4.805227 (-4.124033) | 0.148761 / 6.500664 (-6.351904) | 0.067716 / 0.075469 (-0.007753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353798 / 1.841788 (-0.487990) | 15.992965 / 8.074308 (7.918657) | 14.051539 / 10.191392 (3.860147) | 0.181087 / 0.680424 (-0.499337) | 0.018653 / 0.534201 (-0.515548) | 0.433499 / 0.579283 (-0.145784) | 0.428845 / 0.434364 (-0.005519) | 0.501100 / 0.540337 (-0.039238) | 0.603666 / 1.386936 (-0.783270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10cfa871a2f387fe9c6360e1873ea74c6d69ff67 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010983 / 0.011353 (-0.000370) | 0.005630 / 0.011008 (-0.005378) | 0.109967 / 0.038508 (0.071458) | 0.101580 / 0.023109 (0.078471) | 0.490205 / 0.275898 (0.214307) | 0.534653 / 0.323480 (0.211173) | 0.008365 / 0.007986 (0.000379) | 0.004317 / 0.004328 (-0.000012) | 0.082429 / 0.004250 (0.078179) | 0.080556 / 0.037052 (0.043504) | 0.494627 / 0.258489 (0.236138) | 0.544189 / 0.293841 (0.250348) | 0.049419 / 0.128546 (-0.079127) | 0.014033 / 0.075646 (-0.061613) | 0.370406 / 0.419271 (-0.048866) | 0.083468 / 0.043533 (0.039935) | 0.463829 / 0.255139 (0.208690) | 0.507516 / 0.283200 (0.224316) | 0.053266 / 0.141683 (-0.088417) | 1.778680 / 1.452155 (0.326525) | 1.916616 / 1.492716 (0.423900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267646 / 0.018006 (0.249640) | 0.617824 / 0.000490 (0.617334) | 0.007720 / 0.000200 (0.007520) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034464 / 0.037411 (-0.002948) | 0.113626 / 0.014526 (0.099100) | 0.118911 / 0.176557 (-0.057646) | 0.194701 / 0.737135 (-0.542434) | 0.123431 / 0.296338 (-0.172907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.606073 / 0.215209 (0.390863) | 6.086393 / 2.077655 (4.008738) | 2.568712 / 1.504120 (1.064593) | 2.260801 / 1.541195 (0.719606) | 2.411798 / 1.468490 (0.943307) | 0.876433 / 4.584777 (-3.708344) | 5.521280 / 3.745712 (1.775568) | 5.969722 / 5.269862 (0.699861) | 3.671028 / 4.565676 (-0.894649) | 0.097082 / 0.424275 (-0.327193) | 0.011354 / 0.007607 (0.003747) | 0.713842 / 0.226044 (0.487798) | 7.291172 / 2.268929 (5.022244) | 3.315272 / 55.444624 (-52.129352) | 2.777487 / 6.876477 (-4.098990) | 3.025449 / 2.142072 (0.883377) | 1.014115 / 4.805227 (-3.791112) | 0.217928 / 6.500664 (-6.282736) | 0.083097 / 0.075469 (0.007627) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640060 / 1.841788 (-0.201728) | 25.342172 / 8.074308 (17.267864) | 22.776510 / 10.191392 (12.585118) | 0.227300 / 0.680424 (-0.453124) | 0.032233 / 0.534201 (-0.501968) | 0.507547 / 0.579283 (-0.071736) | 0.647044 / 0.434364 (0.212680) | 0.607019 / 0.540337 
(0.066682) | 0.823548 / 1.386936 (-0.563388) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009576 / 0.011353 (-0.001777) | 0.009322 / 0.011008 (-0.001687) | 0.087184 / 0.038508 (0.048676) | 0.100795 / 0.023109 (0.077685) | 0.492138 / 0.275898 (0.216240) | 0.528386 / 0.323480 (0.204906) | 0.006689 / 0.007986 (-0.001296) | 0.004735 / 0.004328 (0.000406) | 0.085519 / 0.004250 (0.081269) | 0.072648 / 0.037052 (0.035595) | 0.496068 / 0.258489 (0.237579) | 0.549634 / 0.293841 (0.255793) | 0.049709 / 0.128546 (-0.078837) | 0.015077 / 0.075646 (-0.060569) | 0.099445 / 0.419271 (-0.319826) | 0.068080 / 0.043533 (0.024547) | 0.500426 / 0.255139 (0.245287) | 0.531437 / 0.283200 (0.248238) | 0.053176 / 0.141683 (-0.088507) | 1.827942 / 1.452155 (0.375787) | 1.914286 / 1.492716 (0.421570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247658 / 0.018006 (0.229652) | 0.590805 / 0.000490 (0.590315) | 0.005319 / 0.000200 (0.005119) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036993 / 0.037411 (-0.000418) | 0.112944 / 0.014526 (0.098419) | 0.118964 / 0.176557 (-0.057593) | 0.194867 / 0.737135 (-0.542269) | 0.120816 / 0.296338 (-0.175523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638062 / 0.215209 (0.422853) | 6.246785 / 2.077655 (4.169130) | 2.957779 / 1.504120 (1.453659) | 2.739118 / 1.541195 (1.197924) | 2.795362 / 
1.468490 (1.326872) | 0.890532 / 4.584777 (-3.694245) | 5.508198 / 3.745712 (1.762486) | 5.222315 / 5.269862 (-0.047547) | 3.152731 / 4.565676 (-1.412946) | 0.098344 / 0.424275 (-0.325931) | 0.008800 / 0.007607 (0.001193) | 0.757889 / 0.226044 (0.531845) | 7.545715 / 2.268929 (5.276787) | 3.694536 / 55.444624 (-51.750088) | 3.112872 / 6.876477 (-3.763605) | 3.182358 / 2.142072 (1.040285) | 1.028171 / 4.805227 (-3.777056) | 0.215223 / 6.500664 (-6.285441) | 0.085856 / 0.075469 (0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853138 / 1.841788 (0.011350) | 25.939672 / 8.074308 (17.865364) | 23.118029 / 10.191392 (12.926637) | 0.250599 / 0.680424 (-0.429825) | 0.029942 / 0.534201 (-0.504259) | 0.508748 / 0.579283 (-0.070535) | 0.593966 / 0.434364 (0.159602) | 0.605499 / 0.540337 (0.065162) | 0.863827 / 1.386936 (-0.523109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d15950d99677e9473cdcd31cfd83aa17e313e28 \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/4687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4687/comments
https://api.github.com/repos/huggingface/datasets/issues/4687/events
https://github.com/huggingface/datasets/pull/4687
1,306,021,415
PR_kwDODunzps47eF_E
4,687
Trigger CI also on push to main
[]
closed
false
null
1
2022-07-15T13:11:29Z
2022-07-15T13:47:21Z
2022-07-15T13:35:23Z
null
Currently, the new CI (on GitHub Actions) is only triggered on pull request branches when the base branch is main. This PR also triggers the CI when a PR is merged into the main branch.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4687/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4687.diff", "html_url": "https://github.com/huggingface/datasets/pull/4687", "merged_at": "2022-07-15T13:35:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/4687.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4687" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/3417
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3417/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3417/comments
https://api.github.com/repos/huggingface/datasets/issues/3417/events
https://github.com/huggingface/datasets/pull/3417
1,076,943,343
PR_kwDODunzps4vrwd7
3,417
Fix type of bridge field in QED
[]
closed
false
null
0
2021-12-10T15:07:21Z
2021-12-14T14:39:06Z
2021-12-14T14:39:05Z
null
Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`. The following paragraph in the QED repo explains the purpose of this field: >Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond with any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when a reference is bridged, and is False otherwise. Prepositional phrases serve to link bridged references to an anchoring phrase in the selected_sentence. In the case a sentence_reference is bridged, the start and end, as well as the span string, map to such an anchoring phrase in the selected_sentence. Fix #3346 cc @VictorSanh
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3417/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3417/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3417.diff", "html_url": "https://github.com/huggingface/datasets/pull/3417", "merged_at": "2021-12-14T14:39:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/3417.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3417" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4357/comments
https://api.github.com/repos/huggingface/datasets/issues/4357/events
https://github.com/huggingface/datasets/pull/4357
1,237,037,069
PR_kwDODunzps4333b9
4,357
Fix warning in push_to_hub
[]
closed
false
null
1
2022-05-16T11:50:17Z
2022-05-16T15:18:49Z
2022-05-16T15:10:41Z
null
Fix warning: ``` FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0. ```
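For context, a minimal sketch of what the renamed parameter looks like from the caller's side (the dataset and repository id are placeholders, not taken from this PR):

```python
from datasets import load_dataset

# Hypothetical usage: pass the new `max_shard_size` argument explicitly so the
# deprecated `shard_size` code path (and its FutureWarning) is never exercised.
ds = load_dataset("imdb", split="train")
ds.push_to_hub("your-username/imdb-copy", max_shard_size="500MB")
```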
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4357/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4357/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4357.diff", "html_url": "https://github.com/huggingface/datasets/pull/4357", "merged_at": "2022-05-16T15:10:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4357.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4357" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/735/comments
https://api.github.com/repos/huggingface/datasets/issues/735/events
https://github.com/huggingface/datasets/issues/735
722,225,270
MDU6SXNzdWU3MjIyMjUyNzA=
735
Throw error when an unexpected key is used in data_files
[]
closed
false
null
1
2020-10-15T10:55:27Z
2020-10-30T13:23:52Z
2020-10-30T13:23:52Z
null
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f}) print(datasets.keys()) # dict_keys(['train']) ``` whereas using `validation` instead, does return the expected result: ```python datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f}) print(datasets.keys()) # dict_keys(['train', 'validation']) ``` I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/735/timeline
null
completed
null
null
false
[ "Thanks for reporting !\r\nWe'll add support for other keys" ]
https://api.github.com/repos/huggingface/datasets/issues/3266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3266/comments
https://api.github.com/repos/huggingface/datasets/issues/3266/events
https://github.com/huggingface/datasets/pull/3266
1,052,700,155
PR_kwDODunzps4ufH94
3,266
Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution
[]
closed
false
null
10
2021-11-13T15:01:34Z
2021-12-06T11:16:31Z
2021-12-06T11:16:31Z
null
[#3264](https://github.com/huggingface/datasets/issues/3264)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3266/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3266.diff", "html_url": "https://github.com/huggingface/datasets/pull/3266", "merged_at": "2021-12-06T11:16:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/3266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3266" }
true
[ "There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?", "Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfig(name='plain_text', version=1.0.0, data_dir=None, data_files={'train': 'train.c.txt', 'test': 'test.c.txt'}, description='Plain text import of the Definite Pronoun Resolution Dataset.')\r\n\r\n def __post_init__(self):\r\n # The config name is used to name the cache directory.\r\n invalid_windows_characters = r\"<>:/\\|?*\"\r\n for invalid_char in invalid_windows_characters:\r\n if invalid_char in self.name:\r\n raise InvalidConfigName(\r\n f\"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. \"\r\n f\"They could create issues when creating a directory for this config on Windows filesystem.\"\r\n )\r\n if self.data_files is not None and not isinstance(self.data_files, DataFilesDict):\r\n> raise ValueError(f\"Expected a DataFilesDict in data_files but got {self.data_files}\")\r\nE ValueError: Expected a DataFilesDict in data_files but got {'train': 'train.c.txt', 'test': 'test.c.txt'}\r\n```", "Hi ! Thanks for the fixes :)\r\n\r\nInstead of uploading the `definite_pronoun_resolution` data files in this PR, maybe we can just update the URL ?\r\nThe old url was http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt, but now it's https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt (https instead of http)", "Actually the bad certificate creates an issue with the download\r\n```python\r\nimport datasets \r\ndatasets.DownloadManager().download(\"https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\")\r\n# raises: ConnectionError: Couldn't reach https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\r\n```\r\n\r\nLet me see if I can fix that", "I uploaded them to these URLs, feel free to use them instead of having the text files here in the PR :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/train.c.txt\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/test.c.txt", "Thank you for the tips! Having a busy week so anyone willing to commit the suggestions is welcome. Else, I will try to get back to this in a while.", "@LashaO Thanks for working on this. Yes, I'll take over as we already have a request to fix the URL of the Jeopardy! dataset in a separate issue.", "~~Still have to fix the error in the dummy data test of the WikiAuto dataset (so please don't merge).~~ Done! Ready for merging.", "Thank you, Mario!", "The CI failure is only related to missing tags in the dataset cards, merging :)" ]
https://api.github.com/repos/huggingface/datasets/issues/5699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
https://api.github.com/repos/huggingface/datasets/issues/5699/events
https://github.com/huggingface/datasets/issues/5699
1,652,437,419
I_kwDODunzps5ifjGr
5,699
Issue when wanting to split in memory a cached dataset
[]
open
false
null
1
2023-04-03T17:00:07Z
2023-04-04T16:52:42Z
null
null
### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line. Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".** Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result. Aside from this being inconvenient, **the code which lead up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case. To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code. Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway. Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods. ### Steps to reproduce the bug ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator( generate_examples, keep_in_memory=False, ) dataset_.train_test_split( test_size=3, shuffle=False, keep_in_memory=True, train_indices_cache_file_name=None, test_indices_cache_file_name=None, ) ``` ### Expected behavior The result of the above code should be a DatasetDict instance. 
Instead, we get the following exception stack: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset_.train_test_split( 2 test_size=3, 3 shuffle=False, 4 keep_in_memory=True, 5 train_indices_cache_file_name=None, 6 test_indices_cache_file_name=None, 7 ) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint) 4425 test_indices = permutation[:n_test] 4426 train_indices = permutation[n_test : (n_test + n_train)] -> 4428 train_split = self.select( 4429 indices=train_indices, 4430 keep_in_memory=keep_in_memory, 4431 indices_cache_file_name=train_indices_cache_file_name, 4432 writer_batch_size=writer_batch_size, 4433 new_fingerprint=train_new_fingerprint, 4434 ) 4435 test_split = self.select( 4436 indices=test_indices, 4437 keep_in_memory=keep_in_memory, (...) 4440 new_fingerprint=test_new_fingerprint, 4441 ) 4443 return DatasetDict({"train": train_split, "test": test_split}) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3645 """Create a new dataset with rows selected following the list/array of indices. 
3646 3647 Args: (...) 3676 ``` 3677 """ 3678 if keep_in_memory and indices_cache_file_name is not None: -> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.") 3681 if len(self.list_indexes()) > 0: 3682 raise DatasetTransformationNotAllowedError( 3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it." 3684 ) ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both. ``` ### Environment info - `datasets` version: 2.11.1.dev0 - Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0 *** *** EDIT: Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
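To make the fix suggested above concrete, here is a rough sketch of probing the cache through a temporary variable so the caller's argument is left untouched (all names are assumptions, not the actual internals of `train_test_split`):

```python
import os

def resolve_indices_cache_name(user_value, default_name, load_from_cache_file):
    # Probe the cache with a temporary candidate instead of overwriting `user_value`.
    candidate = user_value if user_value is not None else default_name
    if load_from_cache_file and os.path.isfile(candidate):
        return candidate   # a cached indices file exists, reuse it
    return user_value      # otherwise keep exactly what the caller passed (possibly None)
```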
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
null
null
null
null
false
[ "Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)" ]
https://api.github.com/repos/huggingface/datasets/issues/919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/919/comments
https://api.github.com/repos/huggingface/datasets/issues/919/events
https://github.com/huggingface/datasets/issues/919
753,434,472
MDU6SXNzdWU3NTM0MzQ0NzI=
919
wrong length with datasets
[]
closed
false
null
2
2020-11-30T12:23:39Z
2020-11-30T12:37:27Z
2020-11-30T12:37:26Z
null
Hi, I have an MRPC dataset which I convert to seq2seq format; it is then of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` Now if I call len(dataloader), it is 1, which is wrong; it needs to be 10. Could you assist me please? Thanks
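As the thread below clarifies, `len(dataloader)` counts batches rather than examples, so 10 rows with a batch size of 10 give a length of 1. A small self-contained illustration (plain tensors instead of the MRPC data):

```python
import math

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))        # 10 examples
dataloader = DataLoader(dataset, batch_size=10)  # one batch holds all of them

print(len(dataset))                  # 10 examples
print(len(dataloader))               # 1 batch
print(math.ceil(len(dataset) / 10))  # 1, the expected number of batches
```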
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/919/timeline
null
completed
null
null
false
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
https://api.github.com/repos/huggingface/datasets/issues/5173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5173/comments
https://api.github.com/repos/huggingface/datasets/issues/5173/events
https://github.com/huggingface/datasets/pull/5173
1,425,880,441
PR_kwDODunzps5BreEm
5,173
Raise ffmpeg warnings only once
[]
closed
false
null
1
2022-10-27T15:58:33Z
2022-10-28T16:03:05Z
2022-10-28T16:00:51Z
null
Our warnings look nice now. The `librosa` warning that was raised at each decoding: ``` /usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead. warnings.warn("PySoundFile failed. Trying audioread instead.") ``` is suppressed with `filterwarnings("ignore")` in a context manager. That means the first warning is also ignored (setting `filterwarnings("once")` didn't work!), so I added a note to our message that audioread is used for decoding. Hope that's enough. Tests failed at first because they used to check that the warning was raised at (each) decoding in the `librosa` case, but now we throw only one warning (at the first decoding). I removed this check for warnings; do you think that's fine?
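A rough sketch of the pattern described above (the message text, flag and decoding call are placeholders, not the exact code of this PR):

```python
import warnings

_warned_about_audioread = False  # assumed module-level flag so our message appears only once

def decode_audio(path):
    global _warned_about_audioread
    if not _warned_about_audioread:
        warnings.warn("PySoundFile failed for some files; falling back to audioread for decoding.")
        _warned_about_audioread = True
    with warnings.catch_warnings():
        # Silence the warning librosa would otherwise re-emit on every decode.
        warnings.filterwarnings("ignore", message="PySoundFile failed.*")
        # array, sampling_rate = librosa.load(path, sr=None)  # real decoding would go here
        return path  # placeholder so the sketch runs without librosa installed
```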
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5173/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5173/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5173.diff", "html_url": "https://github.com/huggingface/datasets/pull/5173", "merged_at": "2022-10-28T16:00:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5173.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5173" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
https://api.github.com/repos/huggingface/datasets/issues/2195/events
https://github.com/huggingface/datasets/issues/2195
854,070,194
MDU6SXNzdWU4NTQwNzAxOTQ=
2,195
KeyError: '_indices_files' in `arrow_dataset.py`
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2021-04-09T01:37:12Z
2021-04-09T09:55:09Z
2021-04-09T09:54:39Z
null
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
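A small sketch of the suggested defensive access (the `state` dict here is a made-up example of what an older save might contain):

```python
state = {"_data_files": ["dataset.arrow"]}  # older saves may lack the "_indices_files" key

# state["_indices_files"] would raise KeyError; .get() degrades gracefully to None.
indices_files = state.get("_indices_files")
if indices_files:
    print("loading indices from", indices_files)
else:
    print("no indices mapping to load")
```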
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
null
completed
null
null
false
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
https://api.github.com/repos/huggingface/datasets/issues/4349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4349/comments
https://api.github.com/repos/huggingface/datasets/issues/4349/events
https://github.com/huggingface/datasets/issues/4349
1,235,474,765
I_kwDODunzps5Jo9lN
4,349
Dataset.map()'s fails at any value of parameter writer_batch_size
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
6
2022-05-13T16:55:12Z
2022-06-02T12:51:11Z
2022-05-14T15:08:08Z
null
## Describe the bug If the the value of `writer_batch_size` is less than the total number of instances in the dataset it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance. Context: I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor), the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model which allows you to use your own OCR results (in my case, Amazon Textract) so you have to provide the image, words and bounding boxes yourself. I am using this second option which might be good context for the bug. I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages. Code I am using is provided below ## Steps to reproduce the bug I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents. ```python def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['label'].unique() features = Features({ 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1) encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME) encoded_dataset.set_format(type="torch") return encoded_dataset ``` ```python PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False) def preprocess_data(examples): directory = os.path.join(FILES_PATH, examples['file_location']) images_dir = os.path.join(directory, PDF_IMAGE_DIR) textract_response_path = os.path.join(directory, 'textract.json') doc_meta_path = os.path.join(directory, 'doc_meta.json') textract_document = get_textract_document(textract_response_path, doc_meta_path) images, words, bboxes = get_doc_training_data(images_dir, textract_document) encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True) # https://github.com/NielsRogge/Transformers-Tutorials/issues/36 encoded_inputs["image"] = np.array(encoded_inputs["image"]) encoded_inputs["label"] = examples['label_id'] return encoded_inputs ``` ## Expected results My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly. ## Actual results If writer_batch_size is set to a value less than the number of rows, I get either: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. 
(offset overflow while concatenating arrays) ``` or simply ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` If it is greater than the number of rows, i get the `zsh: killed` error above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4349/timeline
null
completed
null
null
false
[ "Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```", "Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352", "Did you close it because you found that it was due to the incorrect Feature types ?", "Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue", "I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?", "The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk" ]
https://api.github.com/repos/huggingface/datasets/issues/4428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4428/comments
https://api.github.com/repos/huggingface/datasets/issues/4428/events
https://github.com/huggingface/datasets/issues/4428
1,254,092,818
I_kwDODunzps5Kv_AS
4,428
Errors when building dummy data if you use nested _URLS
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2022-05-31T16:10:57Z
2022-06-07T09:24:09Z
2022-06-07T09:24:09Z
null
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run self._autogenerate_dummy_data( File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data dataset_builder._split_generators(dl_manager) File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators data_dir = dl_manager.download_and_extract(urls) File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract dummy_output = self.mock_download_manager.download(url_or_urls) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download return self.download_and_extract(data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract return self.create_dummy_data_dict(dummy_file, data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): TypeError: unhashable type: 'list' ## Steps to reproduce the bug You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py ```python datasets_cli dummy_data datasets/personal_dialog --auto_generate ``` You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to ``` "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz" ``` before runing the above script to avoid downloading a large training data. ## Expected results The dummy data should be generated ## Actual results An error is raised. It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 We only check if the first item of dummy_data_dict.values() is str. However, dummy_data_dict.values() may have the type of [str, list, list]. A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to ```python if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): ``` But I don't know if this kinds of change may bring any side effect since I am not sure about the detail logic here. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.10 - PyArrow version: 7.0.0
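To illustrate the failure mode and the proposed guard with made-up values:

```python
# One value is a list, so set() over the values raises the TypeError seen above.
dummy_data_dict = {"train": "train.jsonl.gz", "dev": ["dev_a.jsonl.gz", "dev_b.jsonl.gz"]}

try:
    len(set(dummy_data_dict.values()))
except TypeError as err:
    print(err)  # unhashable type: 'list'

# The suggested fix only applies the uniqueness check when every value is a string.
if all(isinstance(value, str) for value in dummy_data_dict.values()):
    print(len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()))
```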
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4428/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/5833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5833/comments
https://api.github.com/repos/huggingface/datasets/issues/5833/events
https://github.com/huggingface/datasets/issues/5833
1,702,280,682
I_kwDODunzps5ldr3q
5,833
Unable to push dataset - `create_pr` problem
[]
open
false
null
8
2023-05-09T15:32:55Z
2023-07-20T17:17:00Z
null
null
### Describe the bug I can't upload to the hub the dataset I manually created locally (Image dataset). I have a problem when using the method `.push_to_hub` which asks for a `create_pr` attribute which is not compatible. ### Steps to reproduce the bug here what I have: ```python dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts") ``` Output: ```python Pushing split train to the Hub. Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:00<?, ?it/s] Creating parquet from Arrow format: 0%| | 0/3 [00:00<?, ?ba/s] Creating parquet from Arrow format: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 12.70ba/s] Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:01<?, ?it/s] --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name) 258 try: --> 259 response.raise_for_status() 260 except HTTPError as e: File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main The above exception was the direct cause of the following exception: HfHubHTTPError Traceback (most recent call last) Cell In[7], line 1 ----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts") File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/dataset_dict.py:1583, in DatasetDict.push_to_hub(self, repo_id, private, token, branch, max_shard_size, num_shards, embed_external_files) 1581 logger.warning(f"Pushing split {split} to the Hub.") 1582 # The split=key needs to be removed before merging -> 1583 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( 1584 repo_id, 1585 split=split, 1586 private=private, 1587 token=token, 1588 branch=branch, 1589 max_shard_size=max_shard_size, 1590 num_shards=num_shards.get(split), 1591 embed_external_files=embed_external_files, 1592 ) 1593 total_uploaded_size += uploaded_size 1594 total_dataset_nbytes += dataset_nbytes File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/arrow_dataset.py:5275, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, num_shards, embed_external_files) 5273 shard.to_parquet(buffer) 5274 uploaded_size += buffer.tell() -> 5275 _retry( 5276 api.upload_file, 5277 func_kwargs={ 5278 "path_or_fileobj": buffer.getvalue(), 5279 "path_in_repo": shard_path_in_repo, 5280 "repo_id": repo_id, 5281 "token": token, 5282 "repo_type": "dataset", 5283 "revision": branch, 5284 }, 5285 exceptions=HTTPError, 5286 status_codes=[504], 5287 base_wait_time=2.0, 5288 max_retries=5, 5289 max_wait_time=20.0, 5290 ) 5291 shards_path_in_repo.append(shard_path_in_repo) 5293 # Cleanup to remove unused files File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:285, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time) 283 except exceptions as err: 284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes): --> 285 raise err 286 else: 287 sleep_time = 
min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:282, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time) 280 while True: 281 try: --> 282 return func(*func_args, **func_kwargs) 283 except exceptions as err: 284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes): File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) 117 if check_use_auth_token: 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) --> 120 return fn(*args, **kwargs) File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2998, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, commit_message, commit_description, create_pr, parent_commit) 2990 commit_message = ( 2991 commit_message if commit_message is not None else f"Upload {path_in_repo} with huggingface_hub" 2992 ) 2993 operation = CommitOperationAdd( 2994 path_or_fileobj=path_or_fileobj, 2995 path_in_repo=path_in_repo, 2996 ) -> 2998 commit_info = self.create_commit( 2999 repo_id=repo_id, 3000 repo_type=repo_type, 3001 operations=[operation], 3002 commit_message=commit_message, 3003 commit_description=commit_description, 3004 token=token, 3005 revision=revision, 3006 create_pr=create_pr, 3007 parent_commit=parent_commit, 3008 ) 3010 if commit_info.pr_url is not None: 3011 revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe="") File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) 117 if check_use_auth_token: 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) --> 120 return fn(*args, **kwargs) File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2548, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit) 2546 try: 2547 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params) -> 2548 hf_raise_for_status(commit_resp, endpoint_name="commit") 2549 except RepositoryNotFoundError as e: 2550 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE) File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name) 297 raise BadRequestError(message, response=response) from e 299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information 300 # as well (request id and/or server error message) --> 301 raise HfHubHTTPError(str(e), response=response) from e HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main (Request ID: Root=1-645a66bf-255ad91602a6404e6cb70fba) Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request ``` And then when I do ```python dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1) ``` I get ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[8], line 1 ----> 1 
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1) TypeError: push_to_hub() got an unexpected keyword argument 'create_pr' ``` ### Expected behavior I would like to have the dataset updloaded [here](https://huggingface.co/datasets/agomberto/FrenchCensus-handwritten-texts). ### Environment info ```bash - `datasets` version: 2.12.0 - Platform: macOS-13.3.1-arm64-arm-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 1.5.3 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5833/timeline
null
null
null
null
false
[ "Thanks for reporting, @agombert.\r\n\r\nIn this case, I think the root issue is authentication: before pushing to Hub, you should authenticate. See our docs: https://huggingface.co/docs/datasets/upload_dataset#upload-with-python\r\n> 2. To upload a dataset on the Hub in Python, you need to log in to your Hugging Face account:\r\n ```\r\n huggingface-cli login\r\n ```", "Hey @albertvillanova well I actually did :D \r\n\r\n<img width=\"1079\" alt=\"Capture d’écran 2023-05-09 à 18 02 58\" src=\"https://github.com/huggingface/datasets/assets/17645711/e091aa20-06b1-4dd3-bfdb-35e832c66f8d\">\r\n", "That is weird that you get a Forbidden error if you are properly authenticated...\r\n\r\nToday we had a big outage issue affecting the Hugging Face Hub. Could you please retry to push_to_hub your dataset? Maybe that was the cause...", "Yes I've just tried again and same error 403 :/", "Login successful but also got this error \"Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request\"", "Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.", "> Make sure your API token has a `write` role. I had the same issue as you with the `read` token. Creating a `write` token and using that solved the issue.\r\n\r\nI generate a token with write role. It works! thank you so much.", "@dmitrijsk amazing thanks so much ! \r\nThe error should be clearer when the token is read-only – I wasted a lot of time there.." ]
https://api.github.com/repos/huggingface/datasets/issues/3591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3591/comments
https://api.github.com/repos/huggingface/datasets/issues/3591/events
https://github.com/huggingface/datasets/pull/3591
1,106,928,613
PR_kwDODunzps4xNDoB
3,591
Add support for time, date, duration, and decimal dtypes
[]
closed
false
null
2
2022-01-18T13:46:05Z
2022-01-31T18:29:34Z
2022-01-20T17:37:33Z
null
Add support for the pyarrow time (maps to `datetime.time` in python), date (maps to `datetime.date` in python), duration (maps to `datetime.timedelta` in python), and decimal (maps to `decimal.Decimal` in python) dtypes. This should be helpful when writing scripts for time-series datasets.
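A small usage sketch, assuming a `datasets` version that includes this PR; the column names are illustrative:

```python
import datetime
from datasets import Dataset, Features, Value

# Column names are illustrative; the dtype strings mirror the pyarrow names.
features = Features({"day": Value("date32"), "start": Value("time64[us]")})
ds = Dataset.from_dict(
    {"day": [datetime.date(2022, 1, 18)], "start": [datetime.time(13, 46)]},
    features=features,
)
print(ds[0])  # values decode back to datetime.date / datetime.time objects
```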
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3591/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3591/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3591.diff", "html_url": "https://github.com/huggingface/datasets/pull/3591", "merged_at": "2022-01-20T17:37:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3591.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3591" }
true
[ "Is there a dataset which uses these four datatypes for tests purposes?\r\n", "@severo Not yet. I'll let you know if that changes." ]
https://api.github.com/repos/huggingface/datasets/issues/1081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1081/comments
https://api.github.com/repos/huggingface/datasets/issues/1081/events
https://github.com/huggingface/datasets/pull/1081
756,672,527
MDExOlB1bGxSZXF1ZXN0NTMyMTg0ODc4
1,081
Add Knowledge-Enhanced Language Model Pre-training (KELM)
[]
closed
false
null
0
2020-12-03T23:30:09Z
2020-12-04T16:36:28Z
2020-12-04T16:36:28Z
null
Adds the KELM dataset. - Webpage/repo: https://github.com/google-research-datasets/KELM-corpus - Paper: https://arxiv.org/pdf/2010.12688.pdf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1081/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1081/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1081.diff", "html_url": "https://github.com/huggingface/datasets/pull/1081", "merged_at": "2020-12-04T16:36:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1081" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1476/comments
https://api.github.com/repos/huggingface/datasets/issues/1476/events
https://github.com/huggingface/datasets/pull/1476
762,256,048
MDExOlB1bGxSZXF1ZXN0NTM2ODIxNDI5
1,476
Add Spanish Billion Words Corpus
[]
closed
false
null
0
2020-12-11T11:24:58Z
2020-12-17T17:04:08Z
2020-12-14T13:14:31Z
null
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources on the web.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1476/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1476.diff", "html_url": "https://github.com/huggingface/datasets/pull/1476", "merged_at": "2020-12-14T13:14:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1476.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1476" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3348/comments
https://api.github.com/repos/huggingface/datasets/issues/3348/events
https://github.com/huggingface/datasets/pull/3348
1,067,831,113
PR_kwDODunzps4vOBOQ
3,348
BLEURT: Match key names to correspond with filename
[]
closed
false
null
3
2021-12-01T01:01:18Z
2021-12-07T16:06:57Z
2021-12-07T16:06:57Z
null
In order to properly locate downloaded ckpt files, the key name needs to match the filename. This corrects a change introduced in #3235.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3348/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3348/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3348.diff", "html_url": "https://github.com/huggingface/datasets/pull/3348", "merged_at": "2021-12-07T16:06:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3348.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3348" }
true
[ "Thanks for the suggestion! I think the current checked-in `CHECKPOINT_URLS` is already not working. I believe anyone who tried using the new ckpts (`BLEURT-20-X`) can't unless this fix is in. The zip file from bleurt side unzips to directory name matching the filename (capitalized for new ones). For example without current changes I'd get the following error\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-5-f6832fe20f84> in <module>()\r\n 1 predictions = [\"hello there\", \"general kenobi\"]\r\n 2 references = [\"hello there\", \"general kenobi\"]\r\n----> 3 bleurt = datasets.load_metric(\"bleurt\", \"bleurt-20\")\r\n 4 results = bleurt.compute(predictions=predictions, references=references)\r\n\r\n4 frames\r\n/usr/local/lib/python3.7/dist-packages/bleurt/checkpoint.py in read_bleurt_config(path)\r\n 84 \"\"\"Reads and checks config file from a BLEURT checkpoint.\"\"\"\r\n 85 assert tf.io.gfile.exists(path), \\\r\n---> 86 \"Could not find BLEURT checkpoint {}\".format(path)\r\n 87 config_path = os.path.join(path, CONFIG_FILE)\r\n 88 assert tf.io.gfile.exists(config_path), \\\r\n\r\nAssertionError: Could not find BLEURT checkpoint /root/.cache/huggingface/metrics/bleurt/bleurt-20/downloads/extracted/e34c60f1a05394ecda54e253a10413ca7b5d59f9a23f3cc73258c6b78ffa2f50/bleurt-20\r\n```\r\ninspecting specified path I see that directory name is `BLEURT-20` instead of `bleurt-20`. \r\nOther solution similar to your suggestion is meddle with `dl_manager.download_and_extract` to unzip to paths with lowering all the paths but I imagine this will affect other parts of the library. ", "Indeed, good catch ! Your solution that fixes `CHECKPOINT_URLS ` is simple and works well, thanks :)\r\n\r\nFurthermore to avoid breaking changes though we could also keep the support for the lowercase one:\r\n```python\r\n if self.config_name.lower() in CHECKPOINT_URLS:\r\n checkpoint_name = self.config_name.lower()\r\n elif self.config_name.upper() in CHECKPOINT_URLS:\r\n checkpoint_name = self.config_name.upper()\r\n else:\r\n raise KeyError(\r\n f\"{self.config_name} model not found. You should supply the name of a model checkpoint for bleurt in {CHECKPOINT_URLS.keys()}\"\r\n )\r\n```\r\nand then we can use `checkpoint_name` instead of `self.config_name` to download and instantiate the model:\r\n```python\r\n model_path = dl_manager.download_and_extract(CHECKPOINT_URLS[checkpoint_name])\r\n self.scorer = score.BleurtScorer(os.path.join(model_path, checkpoint_name))\r\n```\r\n\r\nPlease let me know if that sounds reasonable to you !", "Thanks for the suggestion! I believe your suggestion should work to make keys case insensitive. Changes are committed to the PR now. " ]
https://api.github.com/repos/huggingface/datasets/issues/2264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2264/comments
https://api.github.com/repos/huggingface/datasets/issues/2264/events
https://github.com/huggingface/datasets/pull/2264
867,476,228
MDExOlB1bGxSZXF1ZXN0NjIzMTQwODA1
2,264
Fix memory issue in multiprocessing: Don't pickle table index
[]
closed
false
null
5
2021-04-26T09:21:35Z
2021-04-26T10:30:28Z
2021-04-26T10:08:14Z
null
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory. I fixed that by not pickling the index attributes; therefore each process has to rebuild the index when unpickling the table. Fix issue #2256. We'll do a patch release asap!
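For readers unfamiliar with the approach, here is a generic sketch of the "rebuild on unpickle" pattern described above; the class and attribute names are illustrative, not the actual `datasets.table` code:

```python
class MemoryMappedThing:
    """Illustrative only: keep heavy, memory-mapped state out of the pickle."""

    def __init__(self, path):
        self.path = path
        self.batches = self._load_batches(path)  # heavy attribute, not pickled

    def _load_batches(self, path):
        return []  # stands in for the real memory-mapped read

    def __getstate__(self):
        # Ship only what is needed to rebuild: the path, never the batches.
        return {"path": self.path}

    def __setstate__(self, state):
        # Each worker process rebuilds its own batches after unpickling.
        self.__init__(state["path"])
```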
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2264/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2264/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2264.diff", "html_url": "https://github.com/huggingface/datasets/pull/2264", "merged_at": "2021-04-26T10:08:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2264" }
true
[ "The code quality check is going to be fixed by #2265 ", "The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.\r\nTherefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue.", "I'm still investigating why we didn't catch this issue in the tests.\r\nThis test should have caught it but didn't:\r\n\r\nhttps://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/tests/test_table.py#L350-L353", "I'll focus on the patch release and fix the test in another PR after the release", "Yes, I think it is better that way..." ]
https://api.github.com/repos/huggingface/datasets/issues/1676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1676/comments
https://api.github.com/repos/huggingface/datasets/issues/1676/events
https://github.com/huggingface/datasets/pull/1676
777,477,645
MDExOlB1bGxSZXF1ZXN0NTQ3NzY1OTY3
1,676
new version of Ted Talks IWSLT (WIT3)
[]
closed
false
null
3
2021-01-02T15:30:03Z
2021-01-14T10:10:19Z
2021-01-14T10:10:19Z
null
In the previous iteration #1608 I had used language pairs, which created 21,582 configs (109*108)!!! Now, the TED talks in _each language_ form a separate config, so it's much cleaner with _just 109 configs_ (one for each language). Dummy files were created manually. Locally I was able to clear the `python datasets-cli test datasets/......`, which created the `dataset_info.json` file. The test for the dummy files was also cleared. However, I couldn't figure out how to specify the local data folder for the real dataset. **Note: this requires manual download of the dataset.** **Note2: The high number of _Files changed (112)_ is because of the large number of dummy files/configs!**
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1676/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1676/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1676.diff", "html_url": "https://github.com/huggingface/datasets/pull/1676", "merged_at": "2021-01-14T10:10:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/1676.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1676" }
true
[ "> Nice thank you ! Actually as it is a translation dataset we should probably have one configuration = one language pair no ?\r\n> \r\n> Could you use the same trick for this dataset ?\r\n\r\nI was looking for this input, infact I had written a long post on the Slack channel,...(_but unfortunately due to the holidays didn;t get a respones_). Initially I had tried with language pairs and then with specific language configs. \r\n\r\nI'll have a look at the `opus-gnomes` dataset\r\n", "Oh sorry I must have missed your message then :/\r\nI was off a few days during the holidays\r\n\r\nHopefully this trick can enable the use of any language pair (+ year ?) combination and also simplify a lot the dummy data creation since it will only require a few configs.", "Updated it as per the comments. But couldn't figure out why the dummy tests are failing >> \r\n```\r\n$RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_ted_talks_iwslt\r\n.....\r\n....\r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```" ]
https://api.github.com/repos/huggingface/datasets/issues/901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/901/comments
https://api.github.com/repos/huggingface/datasets/issues/901/events
https://github.com/huggingface/datasets/pull/901
752,233,851
MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5
901
Addition of Nl2Bash Dataset
[]
closed
false
null
3
2020-11-27T12:53:55Z
2020-11-29T18:09:25Z
2020-11-29T18:08:51Z
null
## Overview The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities. ## Footnotes This dataset marks the first ML-on-source-code related dataset in the datasets module. It'll be really useful, as a lot of research in this direction involves Transformer-based models. Thanks. ### Reference Links > Paper Link = https://arxiv.org/pdf/1802.08979.pdf > Github Link = https://github.com/TellinaTool/nl2bash
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/901/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/901.diff", "html_url": "https://github.com/huggingface/datasets/pull/901", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/901" }
true
[ "Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do? Thanks. ", "@reshinthadithyan we should hold off on this for a couple of weeks till NeurIPS concludes. The [NLC2CMD](http://nlc2cmd.us-east.mybluemix.net/) data will be out then; which includes a cleaner version of this NL2Bash data. The older data is sort of obsolete now. ", "Ah nvm you already commented 😆 " ]
https://api.github.com/repos/huggingface/datasets/issues/5371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5371/comments
https://api.github.com/repos/huggingface/datasets/issues/5371/events
https://github.com/huggingface/datasets/issues/5371
1,501,369,036
I_kwDODunzps5ZfRLM
5,371
Add a robustness benchmark dataset for vision
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
1
2022-12-17T12:35:13Z
2022-12-20T06:21:41Z
null
null
### Name ImageNet-C ### Paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations ### Data https://github.com/hendrycks/robustness ### Motivation It's a known fact that vision models are brittle when they encounter slightly corrupted or perturbed data. This is closely tied to the robustness of vision models. Researchers use different benchmark datasets to evaluate these robustness aspects, and ImageNet-C is one of them. Having this dataset in 🤗 Datasets would allow researchers to evaluate and study the robustness of vision models. Since the metric associated with these evaluations is top-1 accuracy, researchers should be able to easily take advantage of the evaluation benchmarks on the Hub and perform comprehensive reporting. ImageNet-C is a large dataset. Once it's in, it can act as a reference and we can also reach out to the authors of the other robustness benchmark datasets in vision, such as ObjectNet, WILDS, Metashift, etc. These datasets cater to different aspects. For example, ObjectNet is related to assessing how well a model performs under sub-population shifts. Related thread: https://huggingface.slack.com/archives/C036H4A5U8Z/p1669994598060499
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5371/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5371/timeline
null
null
null
null
false
[ "Ccing @nazneenrajani @lvwerra @osanseviero " ]
https://api.github.com/repos/huggingface/datasets/issues/2629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2629/comments
https://api.github.com/repos/huggingface/datasets/issues/2629/events
https://github.com/huggingface/datasets/issues/2629
941,819,205
MDU6SXNzdWU5NDE4MTkyMDU=
2,629
Load datasets from the Hub without requiring a dataset script
[]
closed
false
null
1
2021-07-12T08:45:17Z
2021-08-25T14:18:08Z
2021-08-25T14:18:08Z
null
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument. This feature should be compatible with private repositories and dataset streaming. This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.)
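A sketch of the usage this feature enables; the repository name and file names are hypothetical:

```python
from datasets import load_dataset

# Hypothetical Hub repository that contains only raw CSV files and no script.
dataset = load_dataset(
    "username/my-csv-dataset",
    data_files={"train": "train.csv", "test": "test.csv"},
)
print(dataset)
```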
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 7, "hooray": 2, "laugh": 0, "rocket": 2, "total_count": 11, "url": "https://api.github.com/repos/huggingface/datasets/issues/2629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2629/timeline
null
completed
null
null
false
[ "This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉 " ]
https://api.github.com/repos/huggingface/datasets/issues/3683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3683/comments
https://api.github.com/repos/huggingface/datasets/issues/3683/events
https://github.com/huggingface/datasets/pull/3683
1,124,458,371
PR_kwDODunzps4yGKoj
3,683
added told-br (brazilian hate speech) dataset
[]
closed
false
null
2
2022-02-04T17:44:32Z
2022-02-07T21:14:52Z
2022-02-07T21:14:52Z
null
Hey, Adding ToLD-Br. Feel free to ask for modifications. Thanks!!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3683/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3683.diff", "html_url": "https://github.com/huggingface/datasets/pull/3683", "merged_at": "2022-02-07T21:14:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/3683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3683" }
true
[ "Amazing thank you ! Feel free to regenerate the `dataset_infos.json` to account for the feature type change, and then I think we'll be good to merge :)", "Great thank you ! merging :)" ]
https://api.github.com/repos/huggingface/datasets/issues/5683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5683/comments
https://api.github.com/repos/huggingface/datasets/issues/5683/events
https://github.com/huggingface/datasets/pull/5683
1,646,001,197
PR_kwDODunzps5NLUq1
5,683
Fix verification_mode when ignore_verifications is passed
[]
closed
false
null
2
2023-03-29T15:00:50Z
2023-03-29T17:36:06Z
2023-03-29T17:28:57Z
null
This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`. Related to: - #5303 Fix #5682.
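A brief sketch of the non-deprecated spelling this fix targets; "squad" is just an example dataset name:

```python
from datasets import load_dataset

# Roughly equivalent to the deprecated ignore_verifications=True after this fix:
ds = load_dataset("squad", verification_mode="no_checks")
```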
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5683/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5683.diff", "html_url": "https://github.com/huggingface/datasets/pull/5683", "merged_at": "2023-03-29T17:28:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5683" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004711 / 0.011008 (-0.006297) | 0.098461 / 0.038508 (0.059953) | 0.028889 / 0.023109 (0.005780) | 0.332167 / 0.275898 (0.056269) | 0.363309 / 0.323480 (0.039829) | 0.005179 / 0.007986 (-0.002807) | 0.004783 / 0.004328 (0.000455) | 0.074293 / 0.004250 (0.070043) | 0.038778 / 0.037052 (0.001726) | 0.318871 / 0.258489 (0.060382) | 0.362975 / 0.293841 (0.069134) | 0.032897 / 0.128546 (-0.095649) | 0.011685 / 0.075646 (-0.063961) | 0.322824 / 0.419271 (-0.096447) | 0.043842 / 0.043533 (0.000309) | 0.334789 / 0.255139 (0.079650) | 0.352922 / 0.283200 (0.069723) | 0.089692 / 0.141683 (-0.051991) | 1.490110 / 1.452155 (0.037955) | 1.601530 / 1.492716 (0.108813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201882 / 0.018006 (0.183875) | 0.410875 / 0.000490 (0.410385) | 0.002472 / 0.000200 (0.002272) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023636 / 0.037411 (-0.013775) | 0.102168 / 0.014526 (0.087642) | 0.107247 / 0.176557 (-0.069310) | 0.171858 / 0.737135 (-0.565278) | 0.110619 / 0.296338 (-0.185720) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433740 / 0.215209 (0.218531) | 4.332121 / 2.077655 (2.254466) | 2.075398 
/ 1.504120 (0.571278) | 1.941074 / 1.541195 (0.399879) | 2.033331 / 1.468490 (0.564841) | 0.697134 / 4.584777 (-3.887643) | 3.463855 / 3.745712 (-0.281857) | 3.080446 / 5.269862 (-2.189416) | 1.575020 / 4.565676 (-2.990656) | 0.083054 / 0.424275 (-0.341221) | 0.012454 / 0.007607 (0.004847) | 0.537996 / 0.226044 (0.311951) | 5.366765 / 2.268929 (3.097836) | 2.464398 / 55.444624 (-52.980227) | 2.143912 / 6.876477 (-4.732564) | 2.245706 / 2.142072 (0.103634) | 0.801397 / 4.805227 (-4.003831) | 0.150954 / 6.500664 (-6.349710) | 0.066758 / 0.075469 (-0.008711) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216412 / 1.841788 (-0.625376) | 13.679322 / 8.074308 (5.605014) | 14.055286 / 10.191392 (3.863894) | 0.130264 / 0.680424 (-0.550160) | 0.016566 / 0.534201 (-0.517635) | 0.379126 / 0.579283 (-0.200157) | 0.390815 / 0.434364 (-0.043549) | 0.437586 / 0.540337 (-0.102751) | 0.526822 / 1.386936 (-0.860114) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006898 / 0.011353 (-0.004455) | 0.004705 / 0.011008 (-0.006304) | 0.078592 / 0.038508 (0.040084) | 0.028635 / 0.023109 (0.005525) | 0.340143 / 0.275898 (0.064245) | 0.377526 / 0.323480 (0.054047) | 0.005645 / 0.007986 (-0.002340) | 0.003533 / 0.004328 (-0.000796) | 0.078441 / 0.004250 (0.074191) | 0.039408 / 0.037052 (0.002356) | 0.342303 / 0.258489 (0.083814) | 0.386837 / 0.293841 (0.092996) | 0.032427 / 0.128546 (-0.096119) | 0.011763 / 0.075646 (-0.063883) | 0.087984 / 0.419271 (-0.331287) | 0.042126 / 0.043533 (-0.001406) | 0.339951 / 0.255139 (0.084812) | 0.366165 / 0.283200 (0.082966) | 0.091414 / 0.141683 (-0.050269) | 1.502034 / 1.452155 (0.049880) | 1.597901 / 1.492716 (0.105184) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232122 / 0.018006 (0.214115) | 0.410205 / 0.000490 (0.409715) | 0.000418 / 0.000200 (0.000218) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026013 / 0.037411 (-0.011399) | 0.105520 / 0.014526 (0.090995) | 0.108649 / 0.176557 (-0.067908) | 0.159324 / 0.737135 (-0.577811) | 0.114033 / 0.296338 (-0.182306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455634 / 0.215209 (0.240425) | 4.508544 / 2.077655 (2.430889) | 2.087065 / 1.504120 (0.582945) | 1.872622 / 1.541195 (0.331427) | 1.935617 / 1.468490 (0.467127) | 0.696909 / 4.584777 (-3.887868) | 3.449365 / 3.745712 (-0.296348) | 3.008399 / 5.269862 (-2.261462) | 1.459245 / 4.565676 (-3.106431) | 0.083637 / 0.424275 (-0.340638) | 0.012358 / 0.007607 (0.004750) | 0.547232 / 0.226044 (0.321187) | 5.522395 / 2.268929 (3.253466) | 2.691019 / 55.444624 (-52.753605) | 2.408083 / 6.876477 (-4.468394) | 2.369239 / 2.142072 (0.227166) | 0.807148 / 4.805227 (-3.998080) | 0.152030 / 6.500664 (-6.348634) | 0.067883 / 0.075469 (-0.007586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336956 / 1.841788 (-0.504832) | 14.403730 / 8.074308 (6.329422) | 14.854084 / 10.191392 (4.662692) | 0.146530 / 0.680424 (-0.533894) | 0.016611 / 0.534201 (-0.517590) | 0.398557 / 0.579283 (-0.180726) | 0.393194 / 0.434364 (-0.041170) | 0.486824 / 0.540337 (-0.053513) | 0.572844 / 1.386936 (-0.814092) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#411f9cc281e50954ea0c903e7a0a6618b3d31b9e \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/5621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5621/comments
https://api.github.com/repos/huggingface/datasets/issues/5621/events
https://github.com/huggingface/datasets/pull/5621
1,615,029,615
PR_kwDODunzps5LjwD8
5,621
Adding Oracle Cloud to docs
[]
closed
false
null
2
2023-03-08T10:22:50Z
2023-03-11T00:57:18Z
2023-03-11T00:49:56Z
null
Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5621/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5621.diff", "html_url": "https://github.com/huggingface/datasets/pull/5621", "merged_at": "2023-03-11T00:49:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/5621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5621" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006183 / 0.011353 (-0.005170) | 0.004377 / 0.011008 (-0.006631) | 0.096898 / 0.038508 (0.058390) | 0.027729 / 0.023109 (0.004620) | 0.336582 / 0.275898 (0.060684) | 0.353792 / 0.323480 (0.030312) | 0.004541 / 0.007986 (-0.003445) | 0.004349 / 0.004328 (0.000020) | 0.074403 / 0.004250 (0.070153) | 0.033918 / 0.037052 (-0.003134) | 0.341505 / 0.258489 (0.083016) | 0.380192 / 0.293841 (0.086351) | 0.031703 / 0.128546 (-0.096843) | 0.011561 / 0.075646 (-0.064086) | 0.321848 / 0.419271 (-0.097423) | 0.043407 / 0.043533 (-0.000126) | 0.330365 / 0.255139 (0.075226) | 0.364630 / 0.283200 (0.081430) | 0.084798 / 0.141683 (-0.056885) | 1.450908 / 1.452155 (-0.001246) | 1.522235 / 1.492716 (0.029519) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198267 / 0.018006 (0.180261) | 0.409554 / 0.000490 (0.409065) | 0.002501 / 0.000200 (0.002301) | 0.000270 / 0.000054 (0.000215) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021801 / 0.037411 (-0.015610) | 0.097429 / 0.014526 (0.082904) | 0.103259 / 0.176557 (-0.073298) | 0.161483 / 0.737135 (-0.575652) | 0.107843 / 0.296338 (-0.188496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427057 / 0.215209 (0.211848) | 4.259477 / 2.077655 (2.181823) | 
1.945819 / 1.504120 (0.441699) | 1.733013 / 1.541195 (0.191819) | 1.748486 / 1.468490 (0.279996) | 0.702231 / 4.584777 (-3.882546) | 3.387608 / 3.745712 (-0.358104) | 1.890187 / 5.269862 (-3.379675) | 1.300465 / 4.565676 (-3.265211) | 0.083702 / 0.424275 (-0.340573) | 0.012674 / 0.007607 (0.005067) | 0.527978 / 0.226044 (0.301934) | 5.259610 / 2.268929 (2.990681) | 2.366512 / 55.444624 (-53.078113) | 2.013811 / 6.876477 (-4.862666) | 2.058175 / 2.142072 (-0.083898) | 0.815042 / 4.805227 (-3.990185) | 0.153496 / 6.500664 (-6.347168) | 0.065442 / 0.075469 (-0.010027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227494 / 1.841788 (-0.614294) | 13.812921 / 8.074308 (5.738613) | 14.430149 / 10.191392 (4.238757) | 0.145422 / 0.680424 (-0.535002) | 0.016672 / 0.534201 (-0.517529) | 0.382126 / 0.579283 (-0.197157) | 0.388369 / 0.434364 (-0.045995) | 0.446133 / 0.540337 (-0.094204) | 0.531044 / 1.386936 (-0.855892) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004557 / 0.011008 (-0.006452) | 0.077398 / 0.038508 (0.038890) | 0.027295 / 0.023109 (0.004185) | 0.340866 / 0.275898 (0.064968) | 0.373918 / 0.323480 (0.050438) | 0.004967 / 0.007986 (-0.003018) | 0.003337 / 0.004328 (-0.000991) | 0.076041 / 0.004250 (0.071791) | 0.036708 / 0.037052 (-0.000344) | 0.346126 / 0.258489 (0.087637) | 0.385177 / 0.293841 (0.091336) | 0.032272 / 0.128546 (-0.096275) | 0.011756 / 0.075646 (-0.063890) | 0.086512 / 0.419271 (-0.332759) | 0.049310 / 0.043533 (0.005777) | 0.339352 / 0.255139 (0.084213) | 0.372058 / 0.283200 (0.088859) | 0.089712 / 0.141683 (-0.051971) | 1.501964 / 1.452155 (0.049809) | 1.573753 / 1.492716 (0.081037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162075 / 0.018006 (0.144069) | 0.391462 / 0.000490 (0.390973) | 0.002868 / 0.000200 (0.002668) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024176 / 0.037411 (-0.013235) | 0.099631 / 0.014526 (0.085105) | 0.107544 / 0.176557 (-0.069013) | 0.157659 / 0.737135 (-0.579477) | 0.111130 / 0.296338 (-0.185209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442086 / 0.215209 (0.226877) | 4.426311 / 2.077655 (2.348657) | 2.086133 / 1.504120 (0.582013) | 1.860415 / 1.541195 (0.319220) | 1.892306 / 1.468490 (0.423816) | 0.702752 / 4.584777 (-3.882025) | 3.394358 / 3.745712 (-0.351354) | 1.857396 / 5.269862 (-3.412466) | 1.167168 / 4.565676 (-3.398509) | 0.083549 / 0.424275 (-0.340726) | 0.012780 / 0.007607 (0.005173) | 0.547075 / 0.226044 (0.321031) | 5.466619 / 2.268929 (3.197691) | 2.548893 / 55.444624 (-52.895731) | 2.185574 / 6.876477 (-4.690903) | 2.188000 / 2.142072 (0.045928) | 0.810370 / 4.805227 (-3.994857) | 0.153320 / 6.500664 (-6.347344) | 0.068409 / 0.075469 (-0.007060) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330431 / 1.841788 (-0.511356) | 14.178916 / 8.074308 (6.104608) | 14.409594 / 10.191392 (4.218202) | 0.156270 / 0.680424 (-0.524154) | 0.016452 / 0.534201 (-0.517749) | 0.379837 / 0.579283 (-0.199447) | 0.389896 / 0.434364 (-0.044468) | 0.443892 / 0.540337 (-0.096446) | 0.531392 / 1.386936 (-0.855544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e502117cafd92fd9c25d1d6dd047cc650c691629 \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/4364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4364/comments
https://api.github.com/repos/huggingface/datasets/issues/4364/events
https://github.com/huggingface/datasets/pull/4364
1,238,976,106
PR_kwDODunzps43-bmq
4,364
Support complex feature types as `features` in packaged loaders
[]
closed
false
null
1
2022-05-17T17:53:23Z
2022-05-31T12:26:23Z
2022-05-31T12:16:32Z
null
This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range. Fix https://github.com/huggingface/datasets/issues/4210 This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2 TODO: * [x] tests
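A usage sketch of what this enables; the CSV file name, column names, and label names are assumptions:

```python
from datasets import ClassLabel, Features, Value, load_dataset

# Assumed local CSV with "text" and "label" columns where labels are strings.
features = Features(
    {"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}
)
ds = load_dataset("csv", data_files="reviews.csv", features=features)
# String labels are cast to integer class ids on load.
```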
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4364/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4364/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4364.diff", "html_url": "https://github.com/huggingface/datasets/pull/4364", "merged_at": "2022-05-31T12:16:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4364.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4364" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
https://api.github.com/repos/huggingface/datasets/issues/2229/events
https://github.com/huggingface/datasets/issues/2229
859,810,602
MDU6SXNzdWU4NTk4MTA2MDI=
2,229
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
[]
closed
false
null
2
2021-04-16T13:21:53Z
2021-04-19T08:56:42Z
2021-04-19T08:56:42Z
null
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since community datasets in TensorFlow Datasets also use HF datasets, this causes a tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this: I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
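A sketch of the proposed fix inside an example generator; the function signature, file list, and field names are illustrative, not the actual xnli script:

```python
def generate_examples(filepaths):
    # Illustrative generator: join file and row indices into one string key
    # so every example id is a str rather than a tuple.
    for file_idx, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for row_idx, line in enumerate(f):
                yield f"{file_idx}_{row_idx}", {"text": line.strip()}
```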
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
null
completed
null
null
false
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
https://api.github.com/repos/huggingface/datasets/issues/1584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1584/comments
https://api.github.com/repos/huggingface/datasets/issues/1584/events
https://github.com/huggingface/datasets/pull/1584
768,820,406
MDExOlB1bGxSZXF1ZXN0NTQxMTM2OTQ5
1,584
Load hind encorp
[]
closed
false
null
0
2020-12-16T12:38:38Z
2020-12-18T02:27:24Z
2020-12-18T02:27:24Z
null
Code reformatted and well documented; YAML tags added.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1584/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1584/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1584.diff", "html_url": "https://github.com/huggingface/datasets/pull/1584", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1584.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1584" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3999/comments
https://api.github.com/repos/huggingface/datasets/issues/3999/events
https://github.com/huggingface/datasets/pull/3999
1,178,685,280
PR_kwDODunzps406WN_
3,999
Docs maintenance
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
1
2022-03-23T21:27:33Z
2022-03-30T17:01:45Z
2022-03-30T16:56:38Z
null
This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3999/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3999.diff", "html_url": "https://github.com/huggingface/datasets/pull/3999", "merged_at": "2022-03-30T16:56:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/3999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3999" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/6073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6073/comments
https://api.github.com/repos/huggingface/datasets/issues/6073/events
https://github.com/huggingface/datasets/issues/6073
1,822,167,804
I_kwDODunzps5snBL8
6,073
version 2.3.2: load_dataset() data_files can't include .xxxx in path
[]
open
false
null
1
2023-07-26T11:09:31Z
2023-07-26T12:34:45Z
null
null
### Describe the bug First, I cd workdir. Then, I just use load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"}) that couldn't work and <FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/> And I debug, it is fine in version2.1.2 So there maybe a bug in path join. Here is the whole bug report: /x/datasets/loa │ │ d.py:1656 in load_dataset │ │ │ │ 1653 │ ignore_verifications = ignore_verifications or save_infos │ │ 1654 │ │ │ 1655 │ # Create a dataset builder │ │ ❱ 1656 │ builder_instance = load_dataset_builder( │ │ 1657 │ │ path=path, │ │ 1658 │ │ name=name, │ │ 1659 │ │ data_dir=data_dir, │ │ │ │ x/datasets/loa │ │ d.py:1439 in load_dataset_builder │ │ │ │ 1436 │ if use_auth_token is not None: │ │ 1437 │ │ download_config = download_config.copy() if download_config e │ │ 1438 │ │ download_config.use_auth_token = use_auth_token │ │ ❱ 1439 │ dataset_module = dataset_module_factory( │ │ 1440 │ │ path, │ │ 1441 │ │ revision=revision, │ │ 1442 │ │ download_config=download_config, │ │ │ │ x/datasets/loa │ │ d.py:1097 in dataset_module_factory │ │ │ │ 1094 │ │ │ 1095 │ # Try packaged │ │ 1096 │ if path in _PACKAGED_DATASETS_MODULES: │ │ ❱ 1097 │ │ return PackagedDatasetModuleFactory( │ │ 1098 │ │ │ path, │ │ 1099 │ │ │ data_dir=data_dir, │ │ 1100 │ │ │ data_files=data_files, │ │ │ │x/datasets/loa │ │ d.py:743 in get_module │ │ │ │ 740 │ │ │ if self.data_dir is not None │ │ 741 │ │ │ else get_patterns_locally(str(Path().resolve())) │ │ 742 │ │ ) │ │ ❱ 743 │ │ data_files = DataFilesDict.from_local_or_remote( │ │ 744 │ │ │ patterns, │ │ 745 │ │ │ use_auth_token=self.download_config.use_auth_token, │ │ 746 │ │ │ base_path=str(Path(self.data_dir).resolve()) if self.data │ │ │ │ x/datasets/dat │ │ a_files.py:590 in from_local_or_remote │ │ │ │ 587 │ │ out = cls() │ │ 588 │ │ for key, patterns_for_key in patterns.items(): │ │ 589 │ │ │ out[key] = ( │ │ ❱ 590 │ │ │ │ DataFilesList.from_local_or_remote( │ │ 591 │ │ │ │ │ patterns_for_key, │ │ 592 │ │ │ │ │ base_path=base_path, │ │ 593 │ │ │ │ │ allowed_extensions=allowed_extensions, │ │ │ │ /x/datasets/dat │ │ a_files.py:558 in from_local_or_remote │ │ │ │ 555 │ │ use_auth_token: Optional[Union[bool, str]] = None, │ │ 556 │ ) -> "DataFilesList": │ │ 557 │ │ base_path = base_path if base_path is not None else str(Path() │ │ ❱ 558 │ │ data_files = resolve_patterns_locally_or_by_urls(base_path, pa │ │ 559 │ │ origin_metadata = _get_origin_metadata_locally_or_by_urls(data │ │ 560 │ │ return cls(data_files, origin_metadata) │ │ 561 │ │ │ │ /x/datasets/dat │ │ a_files.py:195 in resolve_patterns_locally_or_by_urls │ │ │ │ 192 │ │ if is_remote_url(pattern): │ │ 193 │ │ │ data_files.append(Url(pattern)) │ │ 194 │ │ else: │ │ ❱ 195 │ │ │ for path in _resolve_single_pattern_locally(base_path, pat │ │ 196 │ │ │ │ data_files.append(path) │ │ 197 │ │ │ 198 │ if not data_files: │ │ │ │ /x/datasets/dat │ │ a_files.py:145 in _resolve_single_pattern_locally │ │ │ │ 142 │ │ error_msg = f"Unable to find '{pattern}' at {Path(base_path).r │ │ 143 │ │ if allowed_extensions is not None: │ │ 144 │ │ │ error_msg += f" with any supported extension {list(allowed │ │ ❱ 145 │ │ raise FileNotFoundError(error_msg) │ │ 146 │ return sorted(out) │ │ 147 ### Steps to reproduce the bug 1. Version=2.3.2 2. In shell, cd workdir.(cd /a/b/c/.d/) 3. load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"}) ### Expected behavior fix it please~ ### Environment info 2.3.2
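For reference, a sketch of the call on a current `datasets` release (note the keyword is `data_files`, plural); the paths below are the reporter's illustrative ones:

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={
        "train": "/a/b/c/.d/train/train.json",
        "test": "/a/b/c/.d/train/test.json",
    },
)
```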
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
null
null
null
null
false
[ "Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)." ]
https://api.github.com/repos/huggingface/datasets/issues/299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/299/comments
https://api.github.com/repos/huggingface/datasets/issues/299/events
https://github.com/huggingface/datasets/pull/299
643,611,557
MDExOlB1bGxSZXF1ZXN0NDM4Mzg0NDgw
299
remove some print in snli file
[]
closed
false
null
1
2020-06-23T07:46:06Z
2020-06-23T08:10:46Z
2020-06-23T08:10:44Z
null
This PR removes unwanted `print` statements in some files such as `snli.py`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/299/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/299.diff", "html_url": "https://github.com/huggingface/datasets/pull/299", "merged_at": "2020-06-23T08:10:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/299.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/299" }
true
[ "I guess you can just rebase from master to fix the CI" ]
https://api.github.com/repos/huggingface/datasets/issues/1955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1955/comments
https://api.github.com/repos/huggingface/datasets/issues/1955/events
https://github.com/huggingface/datasets/pull/1955
818,010,664
MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5
1,955
typos + grammar
[]
closed
false
null
0
2021-02-27T20:21:43Z
2021-03-01T17:20:38Z
2021-03-01T14:43:19Z
null
This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability. N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and not "are ...".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1955/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1955.diff", "html_url": "https://github.com/huggingface/datasets/pull/1955", "merged_at": "2021-03-01T14:43:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/1955.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1955" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/207/comments
https://api.github.com/repos/huggingface/datasets/issues/207/events
https://github.com/huggingface/datasets/issues/207
625,932,200
MDU6SXNzdWU2MjU5MzIyMDA=
207
Remove test set from NLP viewer
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
3
2020-05-27T18:32:07Z
2022-02-10T13:17:45Z
2022-02-10T13:17:45Z
null
While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and small things like this can help increase awareness.
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/207/timeline
null
completed
null
null
false
[ "~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)", "Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.", "We do no longer use datasets-viewer." ]
https://api.github.com/repos/huggingface/datasets/issues/5826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5826/comments
https://api.github.com/repos/huggingface/datasets/issues/5826/events
https://github.com/huggingface/datasets/pull/5826
1,698,155,751
PR_kwDODunzps5P5FYZ
5,826
Support working_dir in from_spark
[]
closed
false
null
6
2023-05-05T20:22:40Z
2023-05-25T17:45:54Z
2023-05-25T08:46:15Z
null
Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance.
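A hedged usage sketch of the new argument (the Spark session and paths are illustrative; assumes a `datasets` release that ships `Dataset.from_spark` with `working_dir`):

```python
# Sketch only: assumes pyspark is installed and a datasets release that
# exposes Dataset.from_spark(..., working_dir=...).
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.appName("from-spark-demo").getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], schema=["text"])

# Point working_dir at fast local (non-NFS) storage on the workers so
# materialization does not go through a slow network filesystem.
ds = Dataset.from_spark(df, working_dir="/local_disk0/tmp/datasets")
print(ds)
```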
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5826/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5826.diff", "html_url": "https://github.com/huggingface/datasets/pull/5826", "merged_at": "2023-05-25T08:46:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5826" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added env var", "@lhoestq would you or another maintainer be able to review please? :)", "I removed the env var", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005771 / 0.011353 (-0.005582) | 0.004086 / 0.011008 (-0.006922) | 0.097170 / 0.038508 (0.058661) | 0.027464 / 0.023109 (0.004355) | 0.305425 / 0.275898 (0.029527) | 0.343869 / 0.323480 (0.020389) | 0.004899 / 0.007986 (-0.003087) | 0.003294 / 0.004328 (-0.001034) | 0.074710 / 0.004250 (0.070459) | 0.034982 / 0.037052 (-0.002070) | 0.306063 / 0.258489 (0.047574) | 0.343115 / 0.293841 (0.049274) | 0.025155 / 0.128546 (-0.103392) | 0.008429 / 0.075646 (-0.067217) | 0.318680 / 0.419271 (-0.100591) | 0.043304 / 0.043533 (-0.000229) | 0.306703 / 0.255139 (0.051564) | 0.335535 / 0.283200 (0.052335) | 0.087428 / 0.141683 (-0.054255) | 1.483769 / 1.452155 (0.031614) | 1.538753 / 1.492716 (0.046037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203313 / 0.018006 (0.185307) | 0.413864 / 0.000490 (0.413375) | 0.003186 / 0.000200 (0.002986) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022862 / 0.037411 (-0.014550) | 0.097306 / 0.014526 (0.082780) | 0.102823 / 0.176557 (-0.073733) | 0.162803 / 0.737135 (-0.574333) | 0.106311 / 0.296338 (-0.190028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451710 / 0.215209 (0.236501) | 4.508520 / 2.077655 (2.430865) | 2.181118 / 1.504120 (0.676998) | 1.977607 / 1.541195 (0.436412) | 2.008366 / 1.468490 (0.539876) | 0.565388 / 4.584777 (-4.019389) | 3.439318 / 3.745712 (-0.306394) | 1.747512 / 5.269862 (-3.522349) | 1.102124 / 4.565676 (-3.463553) | 0.069212 / 0.424275 (-0.355063) | 0.011926 / 0.007607 (0.004318) | 0.553414 / 0.226044 (0.327370) | 5.548959 / 2.268929 (3.280031) | 2.628769 / 55.444624 (-52.815856) | 2.301003 / 6.876477 (-4.575473) | 2.341744 / 2.142072 (0.199672) | 0.673092 / 4.805227 (-4.132135) | 0.137722 / 6.500664 (-6.362942) | 0.066909 / 0.075469 (-0.008560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196854 / 1.841788 (-0.644934) | 13.421776 / 8.074308 (5.347468) | 13.839760 / 10.191392 (3.648368) | 0.140557 / 0.680424 (-0.539867) | 0.016619 / 0.534201 (-0.517582) | 0.357985 / 0.579283 (-0.221298) | 0.387018 / 0.434364 (-0.047346) | 0.452798 / 0.540337 (-0.087540) | 0.542085 / 1.386936 (-0.844851) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005868 / 0.011353 (-0.005484) | 0.004103 / 0.011008 (-0.006905) | 0.076126 / 0.038508 (0.037618) | 0.027744 / 0.023109 (0.004635) | 0.357257 / 0.275898 (0.081359) | 0.387981 / 0.323480 (0.064501) | 0.004807 / 0.007986 (-0.003178) | 0.003337 / 0.004328 (-0.000991) | 0.075486 / 0.004250 (0.071236) | 0.035121 / 0.037052 (-0.001931) | 0.361385 / 0.258489 (0.102896) | 0.399346 / 0.293841 (0.105505) | 0.025263 / 0.128546 (-0.103284) | 0.008571 / 0.075646 (-0.067075) | 0.081815 / 0.419271 (-0.337457) | 0.041114 / 0.043533 (-0.002418) | 0.362840 / 0.255139 (0.107701) | 0.380926 / 0.283200 (0.097727) | 0.092728 / 0.141683 (-0.048955) | 1.517647 / 1.452155 (0.065492) | 1.534914 / 1.492716 (0.042198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / 
old (diff) | 0.199669 / 0.018006 (0.181663) | 0.399070 / 0.000490 (0.398580) | 0.002014 / 0.000200 (0.001814) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012870) | 0.099676 / 0.014526 (0.085151) | 0.106503 / 0.176557 (-0.070054) | 0.153755 / 0.737135 (-0.583380) | 0.108564 / 0.296338 (-0.187775) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443842 / 0.215209 (0.228633) | 4.441158 / 2.077655 (2.363503) | 2.159496 / 1.504120 (0.655376) | 1.955358 / 1.541195 (0.414163) | 1.973864 / 1.468490 (0.505374) | 0.550467 / 4.584777 (-4.034310) | 3.381831 / 3.745712 (-0.363881) | 2.561192 / 5.269862 (-2.708670) | 1.361684 / 4.565676 (-3.203992) | 0.068140 / 0.424275 (-0.356135) | 0.012005 / 0.007607 (0.004398) | 0.551921 / 0.226044 (0.325877) | 5.503591 / 2.268929 (3.234662) | 2.591609 / 55.444624 (-52.853015) | 2.246681 / 6.876477 (-4.629796) | 2.290941 / 2.142072 (0.148868) | 0.655212 / 4.805227 (-4.150015) | 0.136013 / 6.500664 (-6.364651) | 0.066995 / 0.075469 (-0.008474) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300438 / 1.841788 (-0.541350) | 13.866224 / 8.074308 (5.791916) | 13.932624 / 10.191392 (3.741232) | 0.144345 / 0.680424 (-0.536079) | 0.016623 / 0.534201 (-0.517578) | 0.357629 / 0.579283 (-0.221654) | 0.389759 / 0.434364 (-0.044605) | 0.417704 / 0.540337 (-0.122633) | 0.501358 / 1.386936 (-0.885578) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#89f775226321ba94e5bf4670a323c0fb44f5f65c \"CML watermark\")\n", "Thank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/4212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4212/comments
https://api.github.com/repos/huggingface/datasets/issues/4212/events
https://github.com/huggingface/datasets/pull/4212
1,214,498,582
PR_kwDODunzps42udRf
4,212
[Common Voice] Make sure bytes are correctly deleted if `path` exists
[]
closed
false
null
2
2022-04-25T13:18:26Z
2022-04-26T22:54:28Z
2022-04-26T22:48:27Z
null
`path` should be set to the local path inside the audio feature if it exists, so that the bytes can be correctly deleted.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4212/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4212/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4212.diff", "html_url": "https://github.com/huggingface/datasets/pull/4212", "merged_at": "2022-04-26T22:48:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/4212.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4212" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "cool that you noticed that we store unnecessary bytes again :D " ]
https://api.github.com/repos/huggingface/datasets/issues/5372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5372/comments
https://api.github.com/repos/huggingface/datasets/issues/5372/events
https://github.com/huggingface/datasets/pull/5372
1,501,377,802
PR_kwDODunzps5Fs9w5
5,372
Fix streaming pandas.read_excel
[]
closed
false
null
2
2022-12-17T12:58:52Z
2023-01-06T11:50:58Z
2023-01-06T11:43:37Z
null
This PR fixes `xpandas_read_excel`:
- Support passing a path string, besides a file-like object
- Support passing `use_auth_token`
- First assume the host server supports HTTP range requests; only if a ValueError is thrown ("Cannot seek streaming HTTP file") fall back to the previous behavior (see [#3355](https://github.com/huggingface/datasets/pull/3355))

Fix https://huggingface.co/datasets/bigbio/meqsum/discussions/1
Fix:
- https://github.com/bigscience-workshop/biomedical/issues/801

Related to:
- #3355
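The fallback logic in the third bullet boils down to a try/except around the seek error; a rough, hedged illustration (the helper callables are placeholders for the sketch, not the actual `datasets` internals):

```python
# Rough illustration of the fallback described above; the helper
# callables are placeholders, not the real datasets internals.
import pandas as pd


def read_excel_with_fallback(open_streaming_file, open_downloaded_file, **read_excel_kwargs):
    try:
        # First assume the host supports HTTP range requests, so pandas
        # can seek inside the remote file opened in streaming mode.
        with open_streaming_file() as f:
            return pd.read_excel(f, **read_excel_kwargs)
    except ValueError as err:
        # "Cannot seek streaming HTTP file": preserve the previous
        # behavior and read from a fully downloaded copy instead.
        if "Cannot seek streaming HTTP file" not in str(err):
            raise
        with open_downloaded_file() as f:
            return pd.read_excel(f, **read_excel_kwargs)
```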
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5372/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5372/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5372.diff", "html_url": "https://github.com/huggingface/datasets/pull/5372", "merged_at": "2023-01-06T11:43:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/5372.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5372" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009517 / 0.011353 (-0.001835) | 0.005210 / 0.011008 (-0.005798) | 0.098916 / 0.038508 (0.060408) | 0.036123 / 0.023109 (0.013014) | 0.301564 / 0.275898 (0.025666) | 0.358086 / 0.323480 (0.034606) | 0.008159 / 0.007986 (0.000174) | 0.004122 / 0.004328 (-0.000206) | 0.075899 / 0.004250 (0.071648) | 0.046082 / 0.037052 (0.009030) | 0.302871 / 0.258489 (0.044382) | 0.351162 / 0.293841 (0.057321) | 0.038215 / 0.128546 (-0.090331) | 0.012026 / 0.075646 (-0.063620) | 0.330988 / 0.419271 (-0.088284) | 0.048351 / 0.043533 (0.004818) | 0.291840 / 0.255139 (0.036701) | 0.320387 / 0.283200 (0.037187) | 0.105018 / 0.141683 (-0.036665) | 1.447158 / 1.452155 (-0.004997) | 1.491205 / 1.492716 (-0.001511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250870 / 0.018006 (0.232863) | 0.562974 / 0.000490 (0.562484) | 0.001789 / 0.000200 (0.001589) | 0.000252 / 0.000054 (0.000197) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028208 / 0.037411 (-0.009203) | 0.110897 / 0.014526 (0.096371) | 0.120394 / 0.176557 (-0.056163) | 0.164980 / 0.737135 (-0.572156) | 0.126283 / 0.296338 (-0.170056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397922 / 0.215209 (0.182713) | 3.969233 / 2.077655 (1.891578) | 
1.766422 / 1.504120 (0.262302) | 1.577503 / 1.541195 (0.036308) | 1.672344 / 1.468490 (0.203854) | 0.695708 / 4.584777 (-3.889069) | 3.770763 / 3.745712 (0.025051) | 3.369592 / 5.269862 (-1.900269) | 1.851122 / 4.565676 (-2.714554) | 0.084063 / 0.424275 (-0.340212) | 0.012156 / 0.007607 (0.004549) | 0.534639 / 0.226044 (0.308594) | 5.021955 / 2.268929 (2.753027) | 2.215438 / 55.444624 (-53.229186) | 1.890459 / 6.876477 (-4.986018) | 2.071361 / 2.142072 (-0.070712) | 0.834623 / 4.805227 (-3.970604) | 0.165588 / 6.500664 (-6.335076) | 0.064336 / 0.075469 (-0.011133) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205651 / 1.841788 (-0.636136) | 14.916871 / 8.074308 (6.842563) | 14.559495 / 10.191392 (4.368103) | 0.166889 / 0.680424 (-0.513535) | 0.028645 / 0.534201 (-0.505556) | 0.433634 / 0.579283 (-0.145649) | 0.429849 / 0.434364 (-0.004515) | 0.508617 / 0.540337 (-0.031720) | 0.595261 / 1.386936 (-0.791675) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007696 / 0.011353 (-0.003657) | 0.005434 / 0.011008 (-0.005574) | 0.099234 / 0.038508 (0.060725) | 0.033904 / 0.023109 (0.010795) | 0.379181 / 0.275898 (0.103283) | 0.401858 / 0.323480 (0.078379) | 0.006257 / 0.007986 (-0.001729) | 0.004406 / 0.004328 (0.000077) | 0.073174 / 0.004250 (0.068923) | 0.056033 / 0.037052 (0.018981) | 0.379375 / 0.258489 (0.120886) | 0.425928 / 0.293841 (0.132087) | 0.037476 / 0.128546 (-0.091071) | 0.012520 / 0.075646 (-0.063127) | 0.364975 / 0.419271 (-0.054297) | 0.049341 / 0.043533 (0.005808) | 0.370519 / 0.255139 (0.115380) | 0.390585 / 0.283200 (0.107385) | 0.113339 / 0.141683 (-0.028344) | 1.460575 / 1.452155 (0.008421) | 1.564951 / 1.492716 (0.072235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246217 / 0.018006 (0.228210) | 0.554358 / 0.000490 (0.553869) | 0.000451 / 0.000200 (0.000251) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029557 / 0.037411 (-0.007855) | 0.110472 / 0.014526 (0.095946) | 0.122652 / 0.176557 (-0.053904) | 0.159396 / 0.737135 (-0.577739) | 0.128852 / 0.296338 (-0.167486) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447927 / 0.215209 (0.232718) | 4.448292 / 2.077655 (2.370637) | 2.228874 / 1.504120 (0.724754) | 2.030231 / 1.541195 (0.489036) | 2.116417 / 1.468490 (0.647927) | 0.702713 / 4.584777 (-3.882064) | 3.774063 / 3.745712 (0.028351) | 3.521662 / 5.269862 (-1.748200) | 1.476700 / 4.565676 (-3.088976) | 0.084921 / 0.424275 (-0.339354) | 0.012862 / 0.007607 (0.005255) | 0.559142 / 0.226044 (0.333098) | 5.512233 / 2.268929 (3.243305) | 2.750024 / 55.444624 (-52.694600) | 2.388845 / 6.876477 (-4.487632) | 2.541786 / 2.142072 (0.399714) | 0.842256 / 4.805227 (-3.962971) | 0.168088 / 6.500664 (-6.332576) | 0.064211 / 0.075469 (-0.011258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239001 / 1.841788 (-0.602787) | 15.286345 / 8.074308 (7.212036) | 13.883981 / 10.191392 (3.692589) | 0.186212 / 0.680424 (-0.494212) | 0.018305 / 0.534201 (-0.515896) | 0.420459 / 0.579283 (-0.158824) | 0.421039 / 0.434364 (-0.013325) | 0.487348 / 0.540337 (-0.052989) | 0.587730 / 1.386936 (-0.799206) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/514/comments
https://api.github.com/repos/huggingface/datasets/issues/514/events
https://github.com/huggingface/datasets/issues/514
681,256,348
MDU6SXNzdWU2ODEyNTYzNDg=
514
dataset.shuffle(keep_in_memory=True) is never allowed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
null
10
2020-08-18T18:47:40Z
2022-10-10T12:21:58Z
2022-10-10T12:21:58Z
null
As of commit ef4aac2, using the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` fails. The commit added the lines

```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
    not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```

This affects both `shuffle()`, since `select()` is a sub-routine of it, and `map()`, which has the same check. I'd love to fix this myself, but I'm unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`.
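For context, the mutually exclusive check that the maintainers later suggested (see the comments on this issue) can be restated as a standalone sketch; the function wrapper here is only for illustration, since in the library the check lives inside `Dataset.shuffle()`:

```python
# Standalone restatement of the guard suggested in the comments below;
# in the library the check lives inside Dataset.shuffle(), not here.
from typing import Optional


def check_shuffle_arguments(keep_in_memory: bool, indices_cache_file_name: Optional[str] = None) -> None:
    if keep_in_memory and indices_cache_file_name is not None:
        raise ValueError(
            "Please use either `keep_in_memory` or `indices_cache_file_name` but not both."
        )
```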
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/514/timeline
null
completed
null
null
false
[ "This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ", "Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?", "I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1398, in shuffle\r\n verbose=verbose,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1178, in select\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```", "Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.", "My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. \r\n\r\n", "Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm", "Hey, still seeing this issue with the latest version.", "The same :(", "These are the steps needed to fix this issue:\r\n1. add the following check to `Dataset.shuffle`:\r\n```python\r\nif keep_in_memory and indices_cache_file_name is not None:\r\n raise ValueError(\"Please use either `keep_in_memory` or `indices_cache_file_name` but not both.\")\r\n```\r\n2. set `indices_cache_file_name` to `None` if `keep_in_memory` is True in the call to `select`\r\n3. add a test with `shuffle(keep_in_memory=True)`", "Hi @mariosasko , I have opened this PR #5082 " ]
https://api.github.com/repos/huggingface/datasets/issues/931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/931/comments
https://api.github.com/repos/huggingface/datasets/issues/931/events
https://github.com/huggingface/datasets/pull/931
753,818,193
MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz
931
[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
1
2020-11-30T21:30:21Z
2022-10-03T09:40:09Z
2022-10-03T09:40:09Z
null
I get a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from Dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1` I didn't manage to see how to solve it. Putting this aside for now.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 1, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/931/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/931/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/931.diff", "html_url": "https://github.com/huggingface/datasets/pull/931", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/931.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/931" }
true
[ "Thanks for your contribution, @thomwolf. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest that you create this dataset there. Please, feel free to tell us if you need some help." ]
https://api.github.com/repos/huggingface/datasets/issues/5125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5125/comments
https://api.github.com/repos/huggingface/datasets/issues/5125/events
https://github.com/huggingface/datasets/pull/5125
1,411,602,813
PR_kwDODunzps5A7nr8
5,125
Add `pyproject.toml` for `black`
[]
closed
false
null
1
2022-10-17T13:38:47Z
2022-10-17T14:23:27Z
2022-10-17T14:21:09Z
null
Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
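For illustration, a minimal `[tool.black]` section in `pyproject.toml` typically looks like the sketch below; the line length and target versions shown are assumptions, not necessarily the values chosen in this PR:

```toml
# pyproject.toml (illustrative black configuration only)
[tool.black]
line-length = 119
target-version = ["py37"]
```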
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5125/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5125/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5125.diff", "html_url": "https://github.com/huggingface/datasets/pull/5125", "merged_at": "2022-10-17T14:21:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5125.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5125" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1262/comments
https://api.github.com/repos/huggingface/datasets/issues/1262/events
https://github.com/huggingface/datasets/pull/1262
758,637,124
MDExOlB1bGxSZXF1ZXN0NTMzNzc3OTcy
1,262
Adding msr_genomics_kbcomp dataset
[]
closed
false
null
0
2020-12-07T16:01:30Z
2020-12-08T18:08:55Z
2020-12-08T18:08:47Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1262/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1262/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1262.diff", "html_url": "https://github.com/huggingface/datasets/pull/1262", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1262" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2820/comments
https://api.github.com/repos/huggingface/datasets/issues/2820/events
https://github.com/huggingface/datasets/issues/2820
975,210,712
MDU6SXNzdWU5NzUyMTA3MTI=
2,820
Downloading “reddit” dataset keeps timing out.
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
10
2021-08-20T02:52:36Z
2021-09-08T14:52:02Z
2021-09-08T14:52:02Z
null
## Describe the bug
Every time I try to download the reddit dataset it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again.

## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```

## Expected results
I would expect the download to finish, or at least a parameter to extend the read timeout window.

## Actual results
Shown in the error message below.

## Environment info
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
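Until the longer timeout discussed in the comments lands in a release, one workaround is to resume the partially downloaded file instead of restarting it; a hedged sketch, assuming the installed `datasets` version exposes `DownloadConfig.resume_download`:

```python
# Sketch only: assumes DownloadConfig with resume_download is available
# in the installed datasets version, so a timed-out download can resume.
from datasets import DownloadConfig, load_dataset

dl_config = DownloadConfig(resume_download=True)
dataset = load_dataset(
    "reddit",
    ignore_verifications=True,
    cache_dir="/Volumes/My Passport for Mac/og-chat-data",
    download_config=dl_config,
)
```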
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2820/timeline
null
completed
null
null
false
[ "```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...\r\nDownloading: 13%\r\n403M/3.14G [44:39<2:27:09, 310kB/s]\r\n---------------------------------------------------------------------------\r\ntimeout Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 437 try:\r\n--> 438 yield\r\n 439 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 518 cache_content = False\r\n--> 519 data = self._fp.read(amt) if not fp_closed else b\"\"\r\n 520 if (\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)\r\n 458 b = bytearray(amt)\r\n--> 459 n = self.readinto(b)\r\n 460 return memoryview(b)[:n].tobytes()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)\r\n 502 # (for example, reading in 1k chunks)\r\n--> 503 n = self.fp.readinto(b)\r\n 504 if not n and b:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)\r\n 703 try:\r\n--> 704 return self._sock.recv_into(b)\r\n 705 except timeout:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)\r\n 1240 self.__class__)\r\n-> 1241 return self.read(nbytes, buffer)\r\n 1242 else:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)\r\n 1098 if buffer is not None:\r\n-> 1099 return self._sslobj.read(len, buffer)\r\n 1100 else:\r\n\r\ntimeout: The read operation timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadTimeoutError Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 757 try:\r\n--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n 759 yield chunk\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)\r\n 575 while not is_fp_closed(self._fp):\r\n--> 576 data = self.read(amt=amt, decode_content=decode_content)\r\n 577 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 540 # Content-Length are caught.\r\n--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n 542 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)\r\n 134 try:\r\n--> 135 self.gen.throw(type, value, traceback)\r\n 136 except StopIteration as exc:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 442 # there is yet no clean way to get at it from this context.\r\n--> 443 raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n 444 \r\n\r\nReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in 
<module>\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"/Volumes/My Passport for Mac/og-chat-data\")\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 669 split_dict = SplitDict(dataset_name=self.name)\r\n 670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 672 \r\n 673 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)\r\n 73 def _split_generators(self, dl_manager):\r\n 74 \"\"\"Returns SplitGenerators.\"\"\"\r\n---> 75 dl_path = dl_manager.download_and_extract(_URL)\r\n 76 return [\r\n 77 datasets.SplitGenerator(\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 287 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 288 \"\"\"\r\n--> 289 return self.extract(self.download(url_or_urls))\r\n 290 \r\n 291 def get_recorded_sizes_checksums(self):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 195 \r\n 196 start_time = datetime.now()\r\n--> 197 downloaded_path_or_paths = map_nested(\r\n 198 download_func,\r\n 199 url_or_urls,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 194 # Singleton\r\n 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 196 return function(data_struct)\r\n 197 \r\n 198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)\r\n 218 # append the relative path to the base_path\r\n 219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 220 return cached_path(url_or_filename, download_config=download_config)\r\n 221 \r\n 222 def iter_archive(self, 
path):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 286 if is_remote_url(url_or_filename):\r\n 287 # URL, so get it from the cache (downloading if necessary)\r\n--> 288 output_path = get_from_cache(\r\n 289 url_or_filename,\r\n 290 cache_dir=cache_dir,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 643 ftp_get(url, temp_file)\r\n 644 else:\r\n--> 645 http_get(\r\n 646 url,\r\n 647 temp_file,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 451 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 452 )\r\n--> 453 for chunk in response.iter_content(chunk_size=1024):\r\n 454 if chunk: # filter out keep-alive new chunks\r\n 455 progress.update(len(chunk))\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 763 raise ContentDecodingError(e)\r\n 764 except ReadTimeoutError as e:\r\n--> 765 raise ConnectionError(e)\r\n 766 else:\r\n 767 # Standard file-like object.\r\n\r\nConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n```", "Hey @lhoestq should I try to fix this issue ?", "It also doesn't seem to be \"smart caching\" and I received an error about a file not being found...", "To be clear, the error I get when I try to \"re-instantiate\" the download after failure is: \r\n```\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'\r\n```", "Here is a new error:\r\n```\r\nConnectionError: Couldn't reach https://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\n```", "Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.\r\nThis should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\n\r\nWhen re-running your code you said you get a `OSError`, could you try deleting the file at the path returned by the error ? (the one after `[Errno 20] Not a directory:`). Ideally when a download fails you should be able to re-run it without error; there might be an issue here.\r\n\r\nFinally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again\r\n", "@lhoestq thanks for the update. The directory specified by the OSError ie. \r\n```\r\n1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json \r\n```\r\n was not actually in that directory so I can't delete it. ", "Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?\r\nThis way the download manager will know that it has to uncompress the data again", "It seems to have worked. It only took like 20min! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41gb instead of 20gb but at least it finished. 
", "Great ! The timeout change will be available in the next release of `datasets` :)" ]
https://api.github.com/repos/huggingface/datasets/issues/192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/192/comments
https://api.github.com/repos/huggingface/datasets/issues/192/events
https://github.com/huggingface/datasets/issues/192
624,397,592
MDU6SXNzdWU2MjQzOTc1OTI=
192
[Question] Create Apache Arrow dataset from raw text file
[]
closed
false
null
4
2020-05-25T16:42:47Z
2021-12-18T01:45:34Z
2020-10-27T15:20:22Z
null
Hi guys, I have gathered and preprocessed about 2GB of COVID papers from the CORD dataset on Kaggle. I have seen that you have a text dataset, "Crime and Punishment", in Apache Arrow format. Do you have any script or guide to do the same from a raw txt file (preprocessed as for BERT)? Is it worth sending it to you so it can be added to the NLP library? Thanks, Manu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/192/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/192/timeline
null
completed
null
null
false
[ "We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can find inspiration from the ones we've already written in `/datasets`. Here is a link to the steps to [add a dataset](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset)", "Hello @mrm8488 and @lhoestq \r\n\r\nIs there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nThanks :)", "> Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n\r\nSure, to get a dataset in arrow format you can either:\r\n- [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n- OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n- OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n", "> > Is there a way to convert a dataset to Apache arrow format (locally/personal use) & use it before sending it to hugging face?\r\n> \r\n> Sure, to get a dataset in arrow format you can either:\r\n> \r\n> * [load from local files (txt, json, csv)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-local-files)\r\n> \r\n> * OR [load from python data (dict, pandas)](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n> \r\n> * OR [create your own dataset script](https://huggingface.co/nlp/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n\r\nLinks were broken. \r\n\r\nUpdated links provided as below\r\n- [load from local files (txt, json, csv)](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-or-remote-files)\r\n- [load from python data (dict, pandas)](https://huggingface.co/docs/datasets/loading_datasets.html#from-in-memory-data)\r\n- [create your own dataset script](https://huggingface.co/docs/datasets/loading_datasets.html#using-a-custom-dataset-loading-script)\r\n" ]