Column schema (name, type, value stats):

url                        string    lengths 58-61
repository_url             string    1 value
labels_url                 string    lengths 72-75
comments_url               string    lengths 67-70
events_url                 string    lengths 65-68
html_url                   string    lengths 46-51
id                         int64     599M-1.1B
node_id                    string    lengths 18-32
number                     int64     1-3.54k
title                      string    lengths 1-276
user                       dict
labels                     list
state                      string    2 values
locked                     bool      1 class
assignee                   null
assignees                  sequence
milestone                  null
comments                   sequence
created_at                 int64     1,587B-1,642B
updated_at                 int64     1,587B-1,642B
closed_at                  int64     1,587B-1,641B
author_association         string    3 values
active_lock_reason         null
body                       string    lengths 0-228k
reactions                  dict
timeline_url               string    lengths 67-70
performed_via_github_app   null
draft                      bool      2 classes
pull_request               dict
is_pull_request            bool      2 classes

The example rows below follow this column order, one record after another.
https://api.github.com/repos/huggingface/datasets/issues/3544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
https://api.github.com/repos/huggingface/datasets/issues/3544/events
https://github.com/huggingface/datasets/issues/3544
1,095,784,681
I_kwDODunzps5BUFjp
3,544
Ability to split a dataset in multiple files.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,510,145,000
1,641,510,145,000
null
CONTRIBUTOR
null
Hello,

**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite an arrow file, as this could cause a segfault and so on. Before 1.16, I was able to overwrite the dataset, and that would work most of the time with some retries.

**Describe the solution you'd like**
I was thinking that if we could append to `Dataset._data_files`, then when the workers reload the Dataset, they would get the new columns.

**Describe alternatives you've considered**
I currently need to:
1. Save multiple "versions" of the dataset and load the latest.
2. Try working with cache files to get the latest columns.

**Additional context**
I think this would be a great addition to HFDataset, as Parquet supports multi-file input out of the box! I can make a PR myself with some pointers as needed :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
null
null
null
false
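The record above (issue 3544) describes a versioning workaround: the writer saves each revision of the dataset to a fresh directory and readers reload the latest one. As a rough illustration, here is a minimal sketch of that pattern using the public `save_to_disk`/`load_from_disk` API; the `v<N>` directory layout and numbering scheme are assumptions for the example, not the author's actual code.

```
# Minimal sketch of the "save multiple versions" workaround from issue 3544.
# The writer saves each revision to a new numbered directory; readers reload
# the highest-numbered one. The v<N> layout is an assumption for illustration.
import os
from datasets import Dataset, load_from_disk

def save_new_version(dataset: Dataset, root: str) -> str:
    """Write the dataset to a fresh, monotonically numbered directory."""
    os.makedirs(root, exist_ok=True)
    versions = [int(d[1:]) for d in os.listdir(root) if d.startswith("v")]
    target = os.path.join(root, f"v{max(versions) + 1 if versions else 0}")
    dataset.save_to_disk(target)  # never overwrites an existing arrow file
    return target

def load_latest_version(root: str) -> Dataset:
    """Readers pick up newly added columns by reloading the latest version."""
    latest = max(int(d[1:]) for d in os.listdir(root) if d.startswith("v"))
    return load_from_disk(os.path.join(root, f"v{latest}"))
```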
https://api.github.com/repos/huggingface/datasets/issues/3543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3543/comments
https://api.github.com/repos/huggingface/datasets/issues/3543/events
https://github.com/huggingface/datasets/issues/3543
1,095,226,438
I_kwDODunzps5BR9RG
3,543
Allow loading community metrics from the hub, just like datasets
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))", "This is a great solution in the meantime, thanks!" ]
1,641,468,386,000
1,641,488,206,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.**
Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do this with a metric uploaded to the hub. This means that if I want to allow other users to use it, they must download it first, which makes the usage less smooth.

**Describe the solution you'd like**
Load metrics from the hub just like datasets are loaded. In order not to break stuff, the convention can be to put the metric file in a "metrics" folder on the hub.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3543/timeline
null
null
null
false
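The maintainer's comment on issue 3543 points to `hf_hub_download` as an interim way to fetch a metric script from the Hub and load it locally. A hedged sketch of that flow follows; the repo id and filename are hypothetical placeholders, and hosting the script in a dataset repo is an assumption.

```
# Sketch of the interim approach suggested in issue 3543: fetch the metric
# script from a Hub repo, then point load_metric at the local file.
# "someuser/my-metric" and "my_metric.py" are hypothetical placeholders.
from huggingface_hub import hf_hub_download
from datasets import load_metric

local_script = hf_hub_download(
    repo_id="someuser/my-metric",
    filename="my_metric.py",
    repo_type="dataset",  # assuming the script is hosted in a dataset repo
)
metric = load_metric(local_script)  # same call as with any local metric file
```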
https://api.github.com/repos/huggingface/datasets/issues/3542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3542/comments
https://api.github.com/repos/huggingface/datasets/issues/3542/events
https://github.com/huggingface/datasets/pull/3542
1,095,088,485
PR_kwDODunzps4wmPIP
3,542
Update the CC-100 dataset card
{ "login": "aajanki", "id": 353043, "node_id": "MDQ6VXNlcjM1MzA0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aajanki", "html_url": "https://github.com/aajanki", "followers_url": "https://api.github.com/users/aajanki/followers", "following_url": "https://api.github.com/users/aajanki/following{/other_user}", "gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}", "starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aajanki/subscriptions", "organizations_url": "https://api.github.com/users/aajanki/orgs", "repos_url": "https://api.github.com/users/aajanki/repos", "events_url": "https://api.github.com/users/aajanki/events{/privacy}", "received_events_url": "https://api.github.com/users/aajanki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,458,118,000
1,641,494,264,000
1,641,494,264,000
CONTRIBUTOR
null
* summary from the dataset homepage
* more details about the data structure
* this dataset does not contain annotations
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3542", "html_url": "https://github.com/huggingface/datasets/pull/3542", "diff_url": "https://github.com/huggingface/datasets/pull/3542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3542.patch", "merged_at": 1641494264000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3541/comments
https://api.github.com/repos/huggingface/datasets/issues/3541/events
https://github.com/huggingface/datasets/issues/3541
1,095,033,828
I_kwDODunzps5BROPk
3,541
Support 7-zip compressed data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,453,063,000
1,641,453,063,000
null
MEMBER
null
**Is your feature request related to a problem? Please describe.**
We should support 7-zip compressed data files:
- in `extract`
- in `iter_archive`

both in streaming and non-streaming modes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3541/timeline
null
null
null
false
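For context on what issue 3541 asks for: 7-zip extraction in Python typically goes through the third-party `py7zr` package. The minimal sketch below is not the `datasets` implementation (which did not exist at the time of the issue); the file paths are illustrative.

```
# Minimal 7-zip extraction sketch using the third-party py7zr package,
# the kind of capability issue 3541 requests for `extract`/`iter_archive`.
# "data.7z" and "extracted/" are illustrative paths.
import py7zr

with py7zr.SevenZipFile("data.7z", mode="r") as archive:
    print(archive.getnames())              # list the archive members
    archive.extractall(path="extracted/")  # unpack everything to a directory
```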
https://api.github.com/repos/huggingface/datasets/issues/3540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
https://api.github.com/repos/huggingface/datasets/issues/3540/events
https://github.com/huggingface/datasets/issues/3540
1,094,900,336
I_kwDODunzps5BQtpw
3,540
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
{ "login": "CindyTing", "id": 35062414, "node_id": "MDQ6VXNlcjM1MDYyNDE0", "avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CindyTing", "html_url": "https://github.com/CindyTing", "followers_url": "https://api.github.com/users/CindyTing/followers", "following_url": "https://api.github.com/users/CindyTing/following{/other_user}", "gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}", "starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions", "organizations_url": "https://api.github.com/users/CindyTing/orgs", "repos_url": "https://api.github.com/users/CindyTing/repos", "events_url": "https://api.github.com/users/CindyTing/events{/privacy}", "received_events_url": "https://api.github.com/users/CindyTing/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,435,222,000
1,641,435,459,000
null
NONE
null
Hi, I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset. Here is an example.

```
from torch.utils.data import Dataset
from datasets.arrow_dataset import Dataset as HFDataset

class ADataset(Dataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

class MDataset():
    def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
        self.train_dataset = ADataset(data_args)
        self.tokenizer = tokenizer
        self.data_args = data_args
        self.train_dataset = self.train_dataset.map(
            self.process_function,
            batched=True,
            remove_columns=column_names,
            load_from_cache_file=True,
            desc="Running tokenizer on train dataset",
        )

    def process_function(self, examples):
        sentences = [" ".join(sample[0][3]) for sample in examples]
        tokenized = self.tokenizer(
            sentences,
            max_length=self.max_seq_len,
            padding=self.padding,
            truncation=True)
```

But it would raise an ERROR: AttributeError: 'ADataset' object has no attribute 'map'. So how to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset? Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
null
null
null
false
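To complement the question in issue 3540: a `torch.utils.data.Dataset` has no `map` method, but its items can be materialized into a `datasets.Dataset` via the public `Dataset.from_dict` constructor, after which `map` works. A sketch under that assumption follows (with the `__len__` bug from the quoted snippet fixed); the `text` column name and sample data are illustrative.

```
# Sketch: materialize a torch-style dataset into a datasets.Dataset so that
# .map() becomes available. Dataset.from_dict is a public API; the "text"
# column name and sample data are illustrative.
from torch.utils.data import Dataset as TorchDataset
from datasets import Dataset as HFDataset

class ADataset(TorchDataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)  # fixed: the original snippet returned self.len

torch_ds = ADataset(["a sample", "another sample"])

# Pull every item out of the torch dataset and build an arrow-backed Dataset.
hf_ds = HFDataset.from_dict({"text": [torch_ds[i] for i in range(len(torch_ds))]})
hf_ds = hf_ds.map(lambda ex: {"n_chars": len(ex["text"])})  # .map() now works
print(hf_ds[0])
```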
https://api.github.com/repos/huggingface/datasets/issues/3539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3539/comments
https://api.github.com/repos/huggingface/datasets/issues/3539/events
https://github.com/huggingface/datasets/pull/3539
1,094,813,242
PR_kwDODunzps4wlXU4
3,539
Research wording for nc licenses
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging" ]
1,641,423,698,000
1,641,495,500,000
1,641,495,499,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3539/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3539", "html_url": "https://github.com/huggingface/datasets/pull/3539", "diff_url": "https://github.com/huggingface/datasets/pull/3539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3539.patch", "merged_at": 1641495499000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3538/comments
https://api.github.com/repos/huggingface/datasets/issues/3538/events
https://github.com/huggingface/datasets/pull/3538
1,094,756,755
PR_kwDODunzps4wlLmD
3,538
Readme usage update
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,417,988,000
1,641,425,665,000
1,641,425,055,000
CONTRIBUTOR
null
I'm noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me like those errors were already there (metadata issues) and are unrelated to what I've just changed, but it's worth another look to make sure.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3538/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3538", "html_url": "https://github.com/huggingface/datasets/pull/3538", "diff_url": "https://github.com/huggingface/datasets/pull/3538.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3538.patch", "merged_at": 1641425055000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3537/comments
https://api.github.com/repos/huggingface/datasets/issues/3537/events
https://github.com/huggingface/datasets/pull/3537
1,094,738,734
PR_kwDODunzps4wlH1d
3,537
added PII statements and license links to data cards
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,416,361,000
1,641,420,157,000
1,641,420,157,000
CONTRIBUTOR
null
Updates for the following data cards:
* multilingual_librispeech
* openslr
* speech commands
* superb
* timit_asr
* vctk
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3537", "html_url": "https://github.com/huggingface/datasets/pull/3537", "diff_url": "https://github.com/huggingface/datasets/pull/3537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3537.patch", "merged_at": 1641420157000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3536/comments
https://api.github.com/repos/huggingface/datasets/issues/3536/events
https://github.com/huggingface/datasets/pull/3536
1,094,645,771
PR_kwDODunzps4wk0Yb
3,536
update `pretty_name` for all datasets
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,408,305,000
1,641,408,531,000
null
CONTRIBUTOR
null
This PR updates `pretty_name` for all datasets. The previous PR #3498 had done this for only the first 200 datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3536", "html_url": "https://github.com/huggingface/datasets/pull/3536", "diff_url": "https://github.com/huggingface/datasets/pull/3536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3536.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3535/comments
https://api.github.com/repos/huggingface/datasets/issues/3535/events
https://github.com/huggingface/datasets/pull/3535
1,094,633,214
PR_kwDODunzps4wkxv0
3,535
Add SVHN dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,407,349,000
1,641,407,349,000
null
CONTRIBUTOR
null
Add the SVHN dataset. Additional notes:
* compared to the TFDS implementation, additionally exposes the "full numbers" config
* adds streaming support for `os.path.splitext` and `scipy.io.loadmat`
* adds `h5py` to the requirements list for the dummy data test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3535", "html_url": "https://github.com/huggingface/datasets/pull/3535", "diff_url": "https://github.com/huggingface/datasets/pull/3535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3535.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3534/comments
https://api.github.com/repos/huggingface/datasets/issues/3534/events
https://github.com/huggingface/datasets/pull/3534
1,094,352,449
PR_kwDODunzps4wj3LE
3,534
Update wiki_dpr README.md
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,389,384,000
1,641,392,212,000
1,641,392,211,000
MEMBER
null
Some info about wiki_dpr was missing, as noted in https://github.com/huggingface/datasets/issues/3510. I added it and updated the tags and the examples.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3534", "html_url": "https://github.com/huggingface/datasets/pull/3534", "diff_url": "https://github.com/huggingface/datasets/pull/3534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3534.patch", "merged_at": 1641392211000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3533/comments
https://api.github.com/repos/huggingface/datasets/issues/3533/events
https://github.com/huggingface/datasets/issues/3533
1,094,156,147
I_kwDODunzps5BN39z
3,533
Task search function on hub not working correctly
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }, { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon" ]
1,641,375,390,000
1,641,376,988,000
null
MEMBER
null
When I want to look at all datasets in the category `speech-processing`, *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads, the following dataset doesn't show up for some reason:

- https://huggingface.co/datasets/speech_commands

even though its task tags seem correct: https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3533/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3532/comments
https://api.github.com/repos/huggingface/datasets/issues/3532/events
https://github.com/huggingface/datasets/pull/3532
1,094,035,066
PR_kwDODunzps4wi1ft
3,532
Give clearer instructions to add the YAML tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "this is great, maybe just put all of it in one line?\r\n\r\n> TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging" ]
1,641,365,272,000
1,641,495,517,000
null
MEMBER
null
Fix #3531. CC: @julien-c @VictorSanh
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3532/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3532", "html_url": "https://github.com/huggingface/datasets/pull/3532", "diff_url": "https://github.com/huggingface/datasets/pull/3532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3532.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3531/comments
https://api.github.com/repos/huggingface/datasets/issues/3531/events
https://github.com/huggingface/datasets/issues/3531
1,094,033,280
I_kwDODunzps5BNZ-A
3,531
Give clearer instructions to add the YAML tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,365,060,000
1,641,365,060,000
null
MEMBER
null
## Describe the bug

As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g. https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32.

Maybe we should give clearer instructions/hints in the README template.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3531/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3530/comments
https://api.github.com/repos/huggingface/datasets/issues/3530/events
https://github.com/huggingface/datasets/pull/3530
1,093,894,732
PR_kwDODunzps4wiZCw
3,530
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,346,327,000
1,641,387,051,000
1,641,387,050,000
CONTRIBUTOR
null
Removing the reference to "Common Voice" in the Personal and Sensitive Information section. Adding a link to the license. Correcting the license type in the metadata.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3530/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3530", "html_url": "https://github.com/huggingface/datasets/pull/3530", "diff_url": "https://github.com/huggingface/datasets/pull/3530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3530.patch", "merged_at": 1641387050000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3529/comments
https://api.github.com/repos/huggingface/datasets/issues/3529/events
https://github.com/huggingface/datasets/pull/3529
1,093,846,356
PR_kwDODunzps4wiPA9
3,529
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,340,367,000
1,641,387,015,000
1,641,387,014,000
CONTRIBUTOR
null
Updating licensing information & personal and sensitive information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3529", "html_url": "https://github.com/huggingface/datasets/pull/3529", "diff_url": "https://github.com/huggingface/datasets/pull/3529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3529.patch", "merged_at": 1641387014000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3528/comments
https://api.github.com/repos/huggingface/datasets/issues/3528/events
https://github.com/huggingface/datasets/pull/3528
1,093,844,616
PR_kwDODunzps4wiOqH
3,528
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,340,091,000
1,641,386,981,000
1,641,386,980,000
CONTRIBUTOR
null
Updating the license with appropriate capitalization and a link. Updating Personal and Sensitive Information to address PII concerns.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3528/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3528", "html_url": "https://github.com/huggingface/datasets/pull/3528", "diff_url": "https://github.com/huggingface/datasets/pull/3528.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3528.patch", "merged_at": 1641386980000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3527/comments
https://api.github.com/repos/huggingface/datasets/issues/3527/events
https://github.com/huggingface/datasets/pull/3527
1,093,840,707
PR_kwDODunzps4wiN1w
3,527
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,339,581,000
1,641,342,230,000
1,641,342,230,000
CONTRIBUTOR
null
Adding licensing information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3527", "html_url": "https://github.com/huggingface/datasets/pull/3527", "diff_url": "https://github.com/huggingface/datasets/pull/3527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3527.patch", "merged_at": 1641342230000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3526/comments
https://api.github.com/repos/huggingface/datasets/issues/3526/events
https://github.com/huggingface/datasets/pull/3526
1,093,833,446
PR_kwDODunzps4wiMaQ
3,526
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,338,723,000
1,641,339,008,000
null
CONTRIBUTOR
null
I'm not entirely sure after following the links here, but the relevant license seems to be at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3526/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3526", "html_url": "https://github.com/huggingface/datasets/pull/3526", "diff_url": "https://github.com/huggingface/datasets/pull/3526.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3526.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3525/comments
https://api.github.com/repos/huggingface/datasets/issues/3525/events
https://github.com/huggingface/datasets/pull/3525
1,093,831,268
PR_kwDODunzps4wiL8p
3,525
Adding license information for Openbookcorpus
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks, @meg-huggingface for the updates!\r\n\r\nThanks also for setting me as a reviewer, but I'm totally not a specialist of the datasets themselves, so I prefer to just lurk and let @lhoestq @albertvillanova @mariosasko or @patrickvonplaten review these changes (https://github.com/huggingface/datasets/pulls/assigned/meg-huggingface).", "The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well" ]
1,641,338,436,000
1,641,386,918,000
null
CONTRIBUTOR
null
I'm not entirely sure after following the links here, but the relevant license seems to be at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3525", "html_url": "https://github.com/huggingface/datasets/pull/3525", "diff_url": "https://github.com/huggingface/datasets/pull/3525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3525.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3524/comments
https://api.github.com/repos/huggingface/datasets/issues/3524/events
https://github.com/huggingface/datasets/pull/3524
1,093,826,723
PR_kwDODunzps4wiK_v
3,524
Adding link to license.
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,337,908,000
1,641,385,898,000
1,641,385,897,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3524", "html_url": "https://github.com/huggingface/datasets/pull/3524", "diff_url": "https://github.com/huggingface/datasets/pull/3524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3524.patch", "merged_at": 1641385897000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3523/comments
https://api.github.com/repos/huggingface/datasets/issues/3523/events
https://github.com/huggingface/datasets/pull/3523
1,093,819,227
PR_kwDODunzps4wiJc2
3,523
Added links to licensing and PII message in vctk dataset
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,337,018,000
1,641,497,630,000
1,641,497,630,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3523", "html_url": "https://github.com/huggingface/datasets/pull/3523", "diff_url": "https://github.com/huggingface/datasets/pull/3523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3523.patch", "merged_at": 1641497630000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3522/comments
https://api.github.com/repos/huggingface/datasets/issues/3522/events
https://github.com/huggingface/datasets/issues/3522
1,093,807,586
I_kwDODunzps5BMi3i
3,522
wmt19 is broken (zh-en)
{ "login": "AjayP13", "id": 5404177, "node_id": "MDQ6VXNlcjU0MDQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AjayP13", "html_url": "https://github.com/AjayP13", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "repos_url": "https://api.github.com/users/AjayP13/repos", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,641,335,625,000
1,641,335,625,000
null
NONE
null
## Describe the bug The `zh-en` configuration of `wmt19` fails to download: one of its source files is hosted on an FTP server that cannot be reached. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wmt19", 'zh-en') ``` ## Expected results The dataset should download. ## Actual results `ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux - Python version: 3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3522/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3521/comments
https://api.github.com/repos/huggingface/datasets/issues/3521/events
https://github.com/huggingface/datasets/pull/3521
1,093,797,947
PR_kwDODunzps4wiFCs
3,521
Vivos license update
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,334,667,000
1,641,334,696,000
1,641,334,696,000
CONTRIBUTOR
null
Updated the license information with the link to the license text
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3521/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3521", "html_url": "https://github.com/huggingface/datasets/pull/3521", "diff_url": "https://github.com/huggingface/datasets/pull/3521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3521.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3520/comments
https://api.github.com/repos/huggingface/datasets/issues/3520/events
https://github.com/huggingface/datasets/pull/3520
1,093,747,753
PR_kwDODunzps4wh6oD
3,520
Audio datacard update - first pass
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?", "> \r\n\r\nThat's a good point, I didn't realize these were auto-populated.\r\nAt the same time, some of them are wrong -- how/where are they auto-populated? Seems like we should fix it at that source for the future.\r\nIn the mean time, I see that \"cc0-1.0\" is the desired tag for public domain, so I will change that for now." ]
1,641,329,905,000
1,641,385,821,000
1,641,385,820,000
CONTRIBUTOR
null
Filling out data card "Personal and Sensitive Information" for speech datasets to note PII concerns
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3520/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3520", "html_url": "https://github.com/huggingface/datasets/pull/3520", "diff_url": "https://github.com/huggingface/datasets/pull/3520.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3520.patch", "merged_at": 1641385820000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3519/comments
https://api.github.com/repos/huggingface/datasets/issues/3519/events
https://github.com/huggingface/datasets/pull/3519
1,093,655,205
PR_kwDODunzps4whnXH
3,519
CC100: Using HTTPS for the data source URL fixes load_dataset()
{ "login": "aajanki", "id": 353043, "node_id": "MDQ6VXNlcjM1MzA0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aajanki", "html_url": "https://github.com/aajanki", "followers_url": "https://api.github.com/users/aajanki/followers", "following_url": "https://api.github.com/users/aajanki/following{/other_user}", "gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}", "starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aajanki/subscriptions", "organizations_url": "https://api.github.com/users/aajanki/orgs", "repos_url": "https://api.github.com/users/aajanki/repos", "events_url": "https://api.github.com/users/aajanki/events{/privacy}", "received_events_url": "https://api.github.com/users/aajanki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,321,954,000
1,641,403,714,000
1,641,403,714,000
CONTRIBUTOR
null
Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected. ```python from datasets import load_dataset dataset = load_dataset("cc100", lang="en") ``` This is the error produced by the previous script: ```sh Using custom data configuration en-lang=en Downloading and preparing dataset cc100/en to /home/antti/.cache/huggingface/datasets/cc100/en-lang=en/0.0.0/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b... Traceback (most recent call last): File "/home/antti/tmp/cc100/cc100.py", line 3, in <module> dataset = load_dataset("cc100", lang="en") File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/load.py", line 1694, in load_dataset builder_instance.download_and_prepare( File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 595, in download_and_prepare self._download_and_prepare( File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/antti/.cache/huggingface/modules/datasets_modules/datasets/cc100/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b/cc100.py", line 117, in _split_generators path = dl_manager.download_and_extract(download_url) File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 308, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 251, in map_nested return function(data_struct) File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach http://data.statmt.org/cc-100/en.txt.xz (error 503) ``` Note that I get the same behavior using curl on the command line. The plain HTTP "curl -L http://data.statmt.org/cc-100/en.txt.xz" fails with "503 Service unavailable", but with the HTTPS version of the URL curl starts downloading the file. My guess is that the server does overly aggressive rate-limiting. When a client requests an HTTP URL, it (sensibly) gets redirected to the HTTPS equivalent, but now the server notices two requests coming from the same client (the original HTTP and the redirected HTTPS) during a brief time window, and the rate limiter kicks in and blocks the second request! If the client initially uses the HTTPS URL there's only one incoming request, which the rate limiter allows.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3519/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3519", "html_url": "https://github.com/huggingface/datasets/pull/3519", "diff_url": "https://github.com/huggingface/datasets/pull/3519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3519.patch", "merged_at": 1641403714000 }
true
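A minimal sketch of the kind of change #3519 above makes, assuming the loader keeps its download URL in a module-level template (the constant name here is hypothetical; the real one in `cc100.py` may differ — only the URL scheme changes):

```python
# Hypothetical constant name for illustration; the actual cc100.py may differ.
# The fix is simply requesting HTTPS up front, so the server never sees an
# HTTP request plus its HTTPS redirect as two separate hits to rate-limit.
_BASE_URL = "https://data.statmt.org/cc-100/{}.txt.xz"  # was "http://data.statmt.org/cc-100/{}.txt.xz"
```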
https://api.github.com/repos/huggingface/datasets/issues/3518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3518/comments
https://api.github.com/repos/huggingface/datasets/issues/3518/events
https://github.com/huggingface/datasets/issues/3518
1,093,063,455
I_kwDODunzps5BJtMf
3,518
Add PubMed Central Open Access dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ", "Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point" ]
1,641,279,275,000
1,641,392,157,000
null
MEMBER
null
## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3518/timeline
null
null
null
false
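As a pointer for #3518 above, the community dataset mentioned in the comments can already be loaded by namespace; this is a sketch under the assumption that the repo is still at that location and supports streaming:

```python
from datasets import load_dataset

# Namespace taken from the issue comments; it may since have moved under an org.
pmc = load_dataset("albertvillanova/pmc_open_access", split="train", streaming=True)
print(next(iter(pmc)))  # inspect one article record without downloading the full dump
```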
https://api.github.com/repos/huggingface/datasets/issues/3517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3517/comments
https://api.github.com/repos/huggingface/datasets/issues/3517/events
https://github.com/huggingface/datasets/pull/3517
1,092,726,651
PR_kwDODunzps4wemwU
3,517
Add CPPE-5 dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,234,680,000
1,641,408,782,000
1,641,408,782,000
CONTRIBUTOR
null
Adds the recently released CPPE-5 dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3517/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3517", "html_url": "https://github.com/huggingface/datasets/pull/3517", "diff_url": "https://github.com/huggingface/datasets/pull/3517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3517.patch", "merged_at": 1641408782000 }
true
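For #3517, a short usage sketch once the dataset is merged; the hub id and column name are assumptions based on the released CPPE-5 dataset:

```python
from datasets import load_dataset

cppe5 = load_dataset("cppe-5", split="train")
sample = cppe5[0]
# Object-detection style annotations: bounding boxes plus category ids per image.
print(sample["objects"])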
https://api.github.com/repos/huggingface/datasets/issues/3516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3516/comments
https://api.github.com/repos/huggingface/datasets/issues/3516/events
https://github.com/huggingface/datasets/pull/3516
1,092,657,738
PR_kwDODunzps4weYhE
3,516
dataset `asset` - change to raw.githubusercontent.com URLs
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,228,237,000
1,641,231,542,000
1,641,231,541,000
MEMBER
null
Changed the URLs to the ones they were being automatically redirected to. Before this change, the download was failing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3516/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3516", "html_url": "https://github.com/huggingface/datasets/pull/3516", "diff_url": "https://github.com/huggingface/datasets/pull/3516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3516.patch", "merged_at": 1641231541000 }
true
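The change in #3516 amounts to pointing at the final URL instead of the github.com address that redirects to it; a hypothetical before/after for one file (the exact paths used by the `asset` loader may differ):

```python
# Hypothetical example paths; the asset loader lists several such files.
OLD = "https://github.com/facebookresearch/asset/raw/main/dataset/asset.test.orig"
NEW = "https://raw.githubusercontent.com/facebookresearch/asset/main/dataset/asset.test.orig"
```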
https://api.github.com/repos/huggingface/datasets/issues/3515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3515/comments
https://api.github.com/repos/huggingface/datasets/issues/3515/events
https://github.com/huggingface/datasets/issues/3515
1,092,624,695
I_kwDODunzps5BICE3
3,515
`ExpectedMoreDownloadedFiles` for `evidence_infer_treatment`
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,641,225,518,000
1,641,225,518,000
null
MEMBER
null
## Describe the bug I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine but the second returns an error (`2.0`). It downloads a file but crashes during the checksum verification. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("evidence_infer_treatment", "2.0") Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset use_auth_token=use_auth_token, File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 664, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 33, in verify_checksums raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'http://evidence-inference.ebm-nlp.com/v2.0.tar.gz'} ``` I did try to pass the argument `ignore_verifications=True` but ran into an error when trying to build the dataset: ```python >>> load_dataset("evidence_infer_treatment", "2.0", ignore_verifications=True, download_mode="force_redownload") Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24... Downloading: 164MB [00:23, 6.98MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset use_auth_token=use_auth_token, File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 681, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 1080, in _prepare_split example = self.info.features.encode_example(record) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 1032, in encode_example return encode_nested_example(self, example) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in encode_nested_example list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]] File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in <listcomp> list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]] File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 828, in encode_nested_example for k, dict_tuples in utils.zip_dict(schema.feature, *obj): File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in zip_dict yield key, tuple(d[key] for d in dicts) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: '' ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid - Python version: 3.7.11 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3515/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3514/comments
https://api.github.com/repos/huggingface/datasets/issues/3514/events
https://github.com/huggingface/datasets/pull/3514
1,092,606,383
PR_kwDODunzps4weN9W
3,514
Fix to_tf_dataset references in docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call." ]
1,641,223,899,000
1,641,408,768,000
1,641,408,768,000
CONTRIBUTOR
null
Fix the `to_tf_dataset` references in the docs. The currently failing usage example will be fixed by #3338.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3514/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3514", "html_url": "https://github.com/huggingface/datasets/pull/3514", "diff_url": "https://github.com/huggingface/datasets/pull/3514.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3514.patch", "merged_at": 1641408767000 }
true
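Per the comment on #3514, the docs snippet is missing the `DataCollatorWithPadding` import and a model initialization before `model.fit`; a sketch of the corrected flow, assuming a text-classification setup along the lines of the one in the docs:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding, TFAutoModelForSequenceClassification

dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = dataset.map(lambda e: tokenizer(e["sentence1"], e["sentence2"], truncation=True), batched=True)

# The import that the docs snippet was missing.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    shuffle=True,
    collate_fn=data_collator,
)

# The model initialization that the docs snippet was missing before model.fit.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(optimizer="adam", loss=model.hf_compute_loss)
model.fit(tf_dataset, epochs=1)
```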
https://api.github.com/repos/huggingface/datasets/issues/3513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3513/comments
https://api.github.com/repos/huggingface/datasets/issues/3513/events
https://github.com/huggingface/datasets/pull/3513
1,092,569,802
PR_kwDODunzps4weGWl
3,513
Add desc parameter to filter
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,221,058,000
1,641,407,485,000
1,641,407,485,000
CONTRIBUTOR
null
Fix #3317
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3513/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3513", "html_url": "https://github.com/huggingface/datasets/pull/3513", "diff_url": "https://github.com/huggingface/datasets/pull/3513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3513.patch", "merged_at": 1641407484000 }
true
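Once #3513 is in, `desc` labels the progress bar shown during filtering, mirroring what `map` already supports; a minimal usage sketch, assuming the released signature matches the PR title:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
# desc= gives the filter progress bar a human-readable label.
short = ds.filter(lambda ex: len(ex["text"]) < 200, desc="Dropping long reviews")
```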
https://api.github.com/repos/huggingface/datasets/issues/3512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3512/comments
https://api.github.com/repos/huggingface/datasets/issues/3512/events
https://github.com/huggingface/datasets/issues/3512
1,092,359,973
I_kwDODunzps5BHBcl
3,512
No Data format found
{ "login": "shazzad47", "id": 57741378, "node_id": "MDQ6VXNlcjU3NzQxMzc4", "avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shazzad47", "html_url": "https://github.com/shazzad47", "followers_url": "https://api.github.com/users/shazzad47/followers", "following_url": "https://api.github.com/users/shazzad47/following{/other_user}", "gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}", "starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions", "organizations_url": "https://api.github.com/users/shazzad47/orgs", "repos_url": "https://api.github.com/users/shazzad47/repos", "events_url": "https://api.github.com/users/shazzad47/events{/privacy}", "received_events_url": "https://api.github.com/users/shazzad47/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "Hi, which dataset is giving you an error?" ]
1,641,202,871,000
1,641,202,972,000
null
NONE
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3512/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3511/comments
https://api.github.com/repos/huggingface/datasets/issues/3511/events
https://github.com/huggingface/datasets/issues/3511
1,092,170,411
I_kwDODunzps5BGTKr
3,511
Dataset
{ "login": "MIKURI0114", "id": 92849978, "node_id": "U_kgDOBYjHOg", "avatar_url": "https://avatars.githubusercontent.com/u/92849978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MIKURI0114", "html_url": "https://github.com/MIKURI0114", "followers_url": "https://api.github.com/users/MIKURI0114/followers", "following_url": "https://api.github.com/users/MIKURI0114/following{/other_user}", "gists_url": "https://api.github.com/users/MIKURI0114/gists{/gist_id}", "starred_url": "https://api.github.com/users/MIKURI0114/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MIKURI0114/subscriptions", "organizations_url": "https://api.github.com/users/MIKURI0114/orgs", "repos_url": "https://api.github.com/users/MIKURI0114/repos", "events_url": "https://api.github.com/users/MIKURI0114/events{/privacy}", "received_events_url": "https://api.github.com/users/MIKURI0114/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks", "The dataset viewer was down tonight. It works again." ]
1,641,175,403,000
1,641,199,286,000
1,641,198,187,000
NONE
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3511/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3510/comments
https://api.github.com/repos/huggingface/datasets/issues/3510/events
https://github.com/huggingface/datasets/issues/3510
1,091,997,004
I_kwDODunzps5BFo1M
3,510
`wiki_dpr` details for Open Domain Question Answering tasks
{ "login": "pk1130", "id": 40918514, "node_id": "MDQ6VXNlcjQwOTE4NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/40918514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pk1130", "html_url": "https://github.com/pk1130", "followers_url": "https://api.github.com/users/pk1130/followers", "following_url": "https://api.github.com/users/pk1130/following{/other_user}", "gists_url": "https://api.github.com/users/pk1130/gists{/gist_id}", "starred_url": "https://api.github.com/users/pk1130/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pk1130/subscriptions", "organizations_url": "https://api.github.com/users/pk1130/orgs", "repos_url": "https://api.github.com/users/pk1130/repos", "events_url": "https://api.github.com/users/pk1130/events{/privacy}", "received_events_url": "https://api.github.com/users/pk1130/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector)." ]
1,641,121,441,000
1,641,388,565,000
null
NONE
null
Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regarding the same! Thanks a ton! P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3510/timeline
null
null
null
false
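Following up the comment on #3510, each `wiki_dpr` row pairs a ~100-word passage with its page title and a 768-d DPR embedding; a sketch for inspecting one row (the config name is one of several shipped by the dataset and is an assumption here):

```python
from datasets import load_dataset

# "psgs_w100.nq.no_index" skips building the FAISS index; other configs exist.
ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train")
row = ds[0]
print(row["title"], row["text"][:80], len(row["embeddings"]))  # expect 768
```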
https://api.github.com/repos/huggingface/datasets/issues/3507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3507/comments
https://api.github.com/repos/huggingface/datasets/issues/3507/events
https://github.com/huggingface/datasets/issues/3507
1,091,214,808
I_kwDODunzps5BCp3Y
3,507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n", "I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)", "The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.", "(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)", "I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. 
Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. " ]
1,640,883,865,000
1,641,396,844,000
null
MEMBER
null
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of also having the JSON metadata (dataset_infos.json file)? On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However: - the dataset preview feature is already an indirect test that the dataset loads correctly (it also tests that it is streamable, though) - we are migrating canonical datasets to the Hub Do we really need to continue testing them in our CI? Also note that for generating both (the dataset_infos.json file and the dummy data), the entire dataset needs to be downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data). Feel free to ping other people for the discussion. CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3507/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3506/comments
https://api.github.com/repos/huggingface/datasets/issues/3506/events
https://github.com/huggingface/datasets/pull/3506
1,091,166,595
PR_kwDODunzps4wZpot
3,506
Allows DatasetDict.filter to have batching option
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,877,742,000
1,641,291,868,000
1,641,291,867,000
CONTRIBUTOR
null
- Related to: #3244 - Fixes: #3503 We extend `.filter(..., batched: bool)` support to DatasetDict.
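For reference, a minimal usage sketch of what this change enables once merged; the GLUE config, column name, and length threshold below are illustrative, not taken from the PR:

```python
# Minimal sketch (assuming this PR): DatasetDict.filter forwards `batched`,
# so a batched predicate returns one boolean per example in the batch.
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc")  # a DatasetDict

filtered = dataset.filter(
    lambda batch: [len(s) <= 128 for s in batch["sentence1"]],
    batched=True,
)
```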
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3506/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3506", "html_url": "https://github.com/huggingface/datasets/pull/3506", "diff_url": "https://github.com/huggingface/datasets/pull/3506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3506.patch", "merged_at": 1641291867000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3505/comments
https://api.github.com/repos/huggingface/datasets/issues/3505/events
https://github.com/huggingface/datasets/issues/3505
1,091,150,820
I_kwDODunzps5BCaPk
3,505
cast_column function not working with map function in streaming mode for Audio features
{ "login": "ashu5644", "id": 8268102, "node_id": "MDQ6VXNlcjgyNjgxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashu5644", "html_url": "https://github.com/ashu5644", "followers_url": "https://api.github.com/users/ashu5644/followers", "following_url": "https://api.github.com/users/ashu5644/following{/other_user}", "gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions", "organizations_url": "https://api.github.com/users/ashu5644/orgs", "repos_url": "https://api.github.com/users/ashu5644/repos", "events_url": "https://api.github.com/users/ashu5644/events{/privacy}", "received_events_url": "https://api.github.com/users/ashu5644/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)." ]
1,640,875,921,000
1,641,298,270,000
null
NONE
null
## Describe the bug I am trying to use Audio class for loading audio features using custom dataset. I am able to cast 'audio' feature into 'Audio' format with cast_column function. On using map function, I am not getting 'Audio' casted feature but getting path of audio file only. I am getting features of 'audio' of string type with load_dataset call. After using cast_column 'audio' feature is converted into 'Audio' type. But in map function I am not able to get Audio type for audio feature & getting string type data containing path of file only. So I am not able to use processor in encode function. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset, Audio from transformers import Wav2Vec2Processor def encode(batch, processor): print("Audio: ",batch['audio']) batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values return batch def print_ds(ds): iterator = iter(ds) for d in iterator: print("Data: ",d) break processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path) dataset = load_dataset("custom_dataset.py","train",data_files={'train':'train_path.txt'}, data_dir="data", streaming=True, split="train") print("Features: ",dataset.features) print_ds(dataset) dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) print("Features: ",dataset.features) print_ds(dataset) dataset = dataset.map(lambda x: encode(x,processor)) print("Features: ",dataset.features) print_ds(dataset) ``` ## Expected results map function not printing Audio type features be used with processor function and getting error in processor call due to this. ## Actual results # after load_dataset call Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)} Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'} # after cast_column call Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)} Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ..., 1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}} # after map call Features: None Audio: data/0116_003.wav Traceback (most recent call last): File "demo2.py", line 36, in <module> print_ds(dataset) File "demo2.py", line 11, in print_ds for d in iterator: File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "demo2.py", line 32, in <lambda> dataset = dataset.map(lambda x: batch_encode(x,processor)) File "demo2.py", line 6, in batch_encode batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values TypeError: string indices must be integers ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-4.14.243 with-debian-bullseye-sid - Python version: 3.7.9 - PyArrow version: 6.0.1
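A possible stopgap, given the diagnosis in the comment above, is to decode the audio manually inside the map function instead of relying on the lost `Audio` feature. This is only a hedged sketch (not from the thread), and the use of `librosa` for loading and resampling is an assumption:

```python
# Hedged workaround sketch: in streaming mode, map loses the Audio feature,
# so the "audio" column may arrive as a bare path string; decode it manually.
import librosa

def encode(batch, processor):
    audio = batch["audio"]
    path = audio["path"] if isinstance(audio, dict) else audio
    array, _ = librosa.load(path, sr=16_000)  # load and resample to 16 kHz
    batch["input_values"] = processor(array, sampling_rate=16_000).input_values
    return batch
```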
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3505/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3504/comments
https://api.github.com/repos/huggingface/datasets/issues/3504/events
https://github.com/huggingface/datasets/issues/3504
1,090,682,230
I_kwDODunzps5BAn12
3,504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
{ "login": "ToddMorrill", "id": 12600692, "node_id": "MDQ6VXNlcjEyNjAwNjky", "avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ToddMorrill", "html_url": "https://github.com/ToddMorrill", "followers_url": "https://api.github.com/users/ToddMorrill/followers", "following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}", "gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}", "starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions", "organizations_url": "https://api.github.com/users/ToddMorrill/orgs", "repos_url": "https://api.github.com/users/ToddMorrill/repos", "events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}", "received_events_url": "https://api.github.com/users/ToddMorrill/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap." ]
1,640,802,200,000
1,641,279,004,000
null
NONE
null
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset # This takes a few minutes to run, so go grab a tea or coffee while you wait :) data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" pubmed_dataset = load_dataset("json", data_files=data_files, split="train") pubmed_dataset ``` I also tried with `wget` as follows. ``` wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ``` ## Expected results I expect to be able to download this file. ## Actual results Traceback ``` --------------------------------------------------------------------------- timeout Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 158 try: --> 159 conn = connection.create_connection( 160 (self._dns_host, self.port), self.timeout, **extra_kw /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock timeout: timed out During handling of the above exception, another exception occurred: ConnectTimeoutError Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 664 # Make the request on the httplib connection object. --> 665 httplib_response = self._make_request( 666 conn, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 375 try: --> 376 self._validate_conn(conn) 377 except (SocketTimeout, BaseSSLError) as e: /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 996 conn.connect() 997 /usr/lib/python3/dist-packages/urllib3/connection.py in connect(self) 313 # Add certificate verification --> 314 conn = self._new_conn() 315 hostname = self.host /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 163 except SocketTimeout: --> 164 raise ConnectTimeoutError( 165 self, ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. 
(connect timeout=10.0)') During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 718 --> 719 retries = retries.increment( 720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 435 if new_retry.is_exhausted(): --> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 437 MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) During handling of the above exception, another exception occurred: ConnectTimeout Traceback (most recent call last) /tmp/ipykernel_15104/606583593.py in <module> 3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :) 4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" ----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train") 6 pubmed_dataset ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1655 1656 # Create a dataset builder -> 1657 builder_instance = load_dataset_builder( 1658 path=path, 1659 name=name, ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1492 download_config = download_config.copy() if download_config else DownloadConfig() 1493 download_config.use_auth_token = use_auth_token -> 1494 dataset_module = dataset_module_factory( 1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1496 ) ~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1116 # Try packaged 1117 if path in _PACKAGED_DATASETS_MODULES: -> 1118 return PackagedDatasetModuleFactory( 1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode 1120 ).get_module() ~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self) 773 else get_patterns_locally(str(Path().resolve())) 774 ) --> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) 776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name] 777 builder_kwargs = {"hash": hash, "data_files": data_files} ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, 
allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 545 base_path = base_path if base_path is not None else str(Path().resolve()) 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) --> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) 549 ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token) 492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None 493 ) -> Tuple[str]: --> 494 return thread_map( 495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token), 496 data_files, ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs) 92 """ 93 from concurrent.futures import ThreadPoolExecutor ---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) 95 96 ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs) 74 map_args.update(chunksize=chunksize) 75 with PoolExecutor(**pool_kwargs) as ex: ---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs)) 77 78 ~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self) 252 def __iter__(self): 253 try: --> 254 for obj in super(tqdm_notebook, self).__iter__(): 255 # return super(tqdm...) 
will not catch exception 256 yield obj ~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self) 1171 # (note: keep this check outside the loop for performance) 1172 if self.disable: -> 1173 for obj in iterable: 1174 yield obj 1175 return /usr/lib/python3.8/concurrent/futures/_base.py in result_iterator() 617 # Careful not to keep a reference to the popped future 618 if timeout is None: --> 619 yield fs.pop().result() 620 else: 621 yield fs.pop().result(end_time - time.monotonic()) /usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout) 442 raise CancelledError() 443 elif self._state == FINISHED: --> 444 return self.__get_result() 445 else: 446 raise TimeoutError() /usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self) 387 if self._exception: 388 try: --> 389 raise self._exception 390 finally: 391 # Break a reference cycle with the exception in self._exception /usr/lib/python3.8/concurrent/futures/thread.py in run(self) 55 56 try: ---> 57 result = self.fn(*self.args, **self.kwargs) 58 except BaseException as exc: 59 self.future.set_exception(exc) ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token) 483 if isinstance(data_file, Url): 484 data_file = str(data_file) --> 485 return (request_etag(data_file, use_auth_token=use_auth_token),) 486 else: 487 data_file = str(data_file.resolve()) ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token) 489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]: 490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) --> 491 response = http_head(url, headers=headers, max_retries=3) 492 response.raise_for_status() 493 etag = response.headers.get("ETag") if response.ok else None ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 474 headers = copy.deepcopy(headers) or {} 475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent")) --> 476 response = _request_with_retry( 477 method="HEAD", 478 url=url, ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: 408 if tries > max_retries: --> 409 raise err 410 else: 411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]") ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 403 tries += 1 404 try: --> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 406 success = True 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs) 58 # cases, and look like a memory leak in others. 
59 with sessions.Session() as session: ---> 60 return session.request(method=method, url=url, **kwargs) 61 62 /usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 531 } 532 send_kwargs.update(settings) --> 533 resp = self.send(prep, **send_kwargs) 534 535 return resp /usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs) 644 645 # Send the request --> 646 r = adapter.send(request, **kwargs) 647 648 # Total elapsed time of the request (approximately) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 502 # TODO: Remove this in 3.0.0: see #2811 503 if not isinstance(e.reason, NewConnectionError): --> 504 raise ConnectTimeout(e, request=request) 505 506 if isinstance(e.reason, ResponseError): ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) ``` ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3504/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3503/comments
https://api.github.com/repos/huggingface/datasets/issues/3503/events
https://github.com/huggingface/datasets/issues/3503
1,090,472,735
I_kwDODunzps5A_0sf
3,503
Batched in filter throws error
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false } ]
null
[]
1,640,779,264,000
1,641,291,867,000
1,641,291,867,000
NONE
null
I hope this is really a bug; I could not find it among the open issues ## Describe the bug using `batched=False` in `DatasetDict.filter` throws an error ```python TypeError: filter() got an unexpected keyword argument 'batched' ``` but in the docs it is listed as an argument. ## Steps to reproduce the bug ```python task = "mnli" max_length = 128 tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/") dataset = load_dataset("glue", task) task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mnli-mm": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } ##### tokenization_parameters sentence1_key, sentence2_key = task_to_keys[task] def preprocess_function(examples, max_length): if sentence2_key is None: return tokenizer( examples[sentence1_key], truncation=True, max_length=max_length ) return tokenizer( examples[sentence1_key], examples[sentence2_key], truncation=False, padding="max_length", max_length=max_length, ) encoded_dataset = dataset.map( lambda x: preprocess_function(x, max_length=max_length), batched=False ) encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1, 1.17.0 - Platform: ubuntu - Python version: 3.8.12
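Until a fix lands, one possible workaround is to call `filter` on each split's `Dataset` directly, where `batched` is accepted. A hedged sketch, reusing the `encoded_dataset` and `max_length` names from the snippet above:

```python
# Hedged workaround sketch: bypass DatasetDict.filter (which rejects
# `batched` here) by filtering each split separately.
from datasets import DatasetDict

filtered = DatasetDict({
    split: ds.filter(lambda x: len(x["input_ids"]) <= max_length, batched=False)
    for split, ds in encoded_dataset.items()
})
```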
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3503/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3502/comments
https://api.github.com/repos/huggingface/datasets/issues/3502/events
https://github.com/huggingface/datasets/pull/3502
1,090,438,558
PR_kwDODunzps4wXSLi
3,502
Add QuALITY
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,775,526,000
1,640,775,526,000
null
CONTRIBUTOR
null
Fixes #3441.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3502/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3502", "html_url": "https://github.com/huggingface/datasets/pull/3502", "diff_url": "https://github.com/huggingface/datasets/pull/3502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3502.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3501/comments
https://api.github.com/repos/huggingface/datasets/issues/3501/events
https://github.com/huggingface/datasets/pull/3501
1,090,413,758
PR_kwDODunzps4wXM8H
3,501
Update pib dataset card
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,772,880,000
1,640,776,401,000
1,640,776,401,000
MEMBER
null
Related to #3496
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3501", "html_url": "https://github.com/huggingface/datasets/pull/3501", "diff_url": "https://github.com/huggingface/datasets/pull/3501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3501.patch", "merged_at": 1640776401000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3500/comments
https://api.github.com/repos/huggingface/datasets/issues/3500/events
https://github.com/huggingface/datasets/pull/3500
1,090,406,133
PR_kwDODunzps4wXLTB
3,500
Docs: Add VCTK dataset description
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,772,125,000
1,641,293,162,000
1,641,291,909,000
CONTRIBUTOR
null
This PR is a very minor followup to #1837, with only docs changes (single comment string).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3500", "html_url": "https://github.com/huggingface/datasets/pull/3500", "diff_url": "https://github.com/huggingface/datasets/pull/3500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3500.patch", "merged_at": 1641291909000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
https://api.github.com/repos/huggingface/datasets/issues/3499/events
https://github.com/huggingface/datasets/issues/3499
1,090,132,618
I_kwDODunzps5A-hqK
3,499
Adjusting chunk size for streaming datasets
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !" ]
1,640,726,273,000
1,641,399,142,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?), I hit a performance bottleneck because of the frequent decompression. **Describe the solution you'd like** I would appreciate a parameter in the load_dataset function that allows me to set the chunk size myself (to a value like 100,000 in my case). That way, I hope to improve the processing time.
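Following the suggestion in the comment above, here is a hedged sketch of raising fsspec's buffering block size before streaming; the attribute path is the one quoted in the comment, and the 100 MiB value simply mirrors the request in this issue:

```python
# Hedged sketch: enlarge fsspec's download buffer so streaming fetches
# larger, less frequent blocks (the default is reportedly about 5 MiB).
import itertools

import fsspec
from datasets import load_dataset

fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE = 100 * 2**20  # 100 MiB

ds = load_dataset("mc4", "en", split="train", streaming=True)
for example in itertools.islice(ds, 3):
    print(example["timestamp"], len(example["text"]))
```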
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3498/comments
https://api.github.com/repos/huggingface/datasets/issues/3498/events
https://github.com/huggingface/datasets/pull/3498
1,090,096,332
PR_kwDODunzps4wWL5U
3,498
update `pretty_name` for first 200 datasets
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,721,007,000
1,641,408,503,000
1,641,400,701,000
CONTRIBUTOR
null
I made a script some time back to fetch `pretty_name`s from the `papers_with_code` dataset, along with some other rules in case a dataset wasn't available on `papers_with_code`. I am updating them in the `README`s of `datasets`. I took only the first 200 datasets into consideration, and after some eyeballing, most of them look good to me!
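The script itself is not included in the PR description; below is a hedged reconstruction of the approach it describes. The Papers with Code search endpoint and the title-casing fallback rule are assumptions, not the author's actual code:

```python
# Hedged sketch: look a dataset id up on Papers with Code, falling back to a
# simple title-cased name when no match is found.
import requests

def guess_pretty_name(dataset_id: str) -> str:
    resp = requests.get(
        "https://paperswithcode.com/api/v1/datasets/",
        params={"q": dataset_id},
        timeout=10,
    )
    results = resp.json().get("results", []) if resp.ok else []
    if results:
        return results[0]["name"]
    # Fallback rule (assumed): "air_dialogue" -> "Air Dialogue"
    return dataset_id.replace("_", " ").replace("-", " ").title()

print(guess_pretty_name("squad"))
```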
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3498", "html_url": "https://github.com/huggingface/datasets/pull/3498", "diff_url": "https://github.com/huggingface/datasets/pull/3498.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3498.patch", "merged_at": 1641400701000 }
true

annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: Practice
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval

Dataset Card for [Needs More Information]

Dataset Summary

For Practice

Supported Tasks and Leaderboards

Classification

Languages

en

Dataset Structure

Data Instances

[Needs More Information]

Data Fields

[Needs More Information]

Data Splits

train

Dataset Creation

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

[Needs More Information]

Annotations

Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

[Needs More Information]

Citation Information

[Needs More Information]
