| Column | Dtype | Range / Values |
| --- | --- | --- |
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 871M–1.93B |
| node_id | stringlengths | 18–32 |
| number | int64 | 2.28k–6.28k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–4 |
| milestone | dict | |
| comments | int64 | 0–48 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| body | stringlengths | 0–36.2k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| draft | float64 | 0–1 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6284/comments
https://api.github.com/repos/huggingface/datasets/issues/6284/events
https://github.com/huggingface/datasets/issues/6284
1,929,551,712
I_kwDODunzps5zAp9g
6,284
Add Belebele multiple-choice machine reading comprehension (MRC) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rajveer43", "id": 64583161, "login": "rajveer43", "node_id": "MDQ6VXNlcjY0NTgzMTYx", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "repos_url": "https://api.github.com/users/rajveer43/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "type": "User", "url": "https://api.github.com/users/rajveer43" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
1
"2023-10-06T06:58:03"
"2023-10-06T13:26:51"
"2023-10-06T13:26:51"
NONE
null
### Feature request Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems. Please refer to the paper for more details: [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884). ## Composition - 900 questions per language variant - 488 distinct passages, each with 1-2 associated questions. - For each question, there are 4 multiple-choice answers, exactly 1 of which is correct. - 122 languages/language variants (including English). - 900 x 122 = 109,800 total questions. ### Motivation Official repo: https://github.com/facebookresearch/belebele ### Your contribution -
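For illustration, a minimal usage sketch of how such a dataset is typically consumed once on the Hub; the repo id `facebook/belebele`, the config name `eng_Latn`, and the `test` split are assumptions for the example, not details confirmed by this issue:

```python
from datasets import load_dataset

# Assumed Hub id and per-language config name; Belebele ships one
# config per language variant.
belebele = load_dataset("facebook/belebele", "eng_Latn", split="test")

print(belebele)     # expected: 900 rows for the English variant
print(belebele[0])  # one short passage, a question, and 4 answer choices
```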
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6284/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6284/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6283/comments
https://api.github.com/repos/huggingface/datasets/issues/6283/events
https://github.com/huggingface/datasets/pull/6283
1,928,552,257
PR_kwDODunzps5cBlKq
6,283
Fix `array.values` handling in array cast/embed
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
4
"2023-10-05T15:24:05"
"2023-10-06T13:46:13"
null
CONTRIBUTOR
null
Fix #6280
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6283/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6283/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6283.diff", "html_url": "https://github.com/huggingface/datasets/pull/6283", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6283" }
true
https://api.github.com/repos/huggingface/datasets/issues/6282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6282/comments
https://api.github.com/repos/huggingface/datasets/issues/6282/events
https://github.com/huggingface/datasets/pull/6282
1,928,473,630
PR_kwDODunzps5cBT5p
6,282
Drop data_files duplicates
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
2
"2023-10-05T14:43:08"
"2023-10-06T13:02:04"
null
MEMBER
null
I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order. Close https://github.com/huggingface/datasets/issues/6259 Close https://github.com/huggingface/datasets/issues/6272
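The dict-based deduplication mentioned here is a standard Python idiom; a minimal sketch of the idea (not the actual PR code):

```python
# dict keys are unique and, since Python 3.7, preserve insertion order,
# so this drops duplicates while keeping each first occurrence's position.
data_files = ["train/a.parquet", "train/b.parquet", "train/a.parquet"]
deduplicated = list(dict.fromkeys(data_files))
print(deduplicated)  # ['train/a.parquet', 'train/b.parquet']
```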
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6282/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6282.diff", "html_url": "https://github.com/huggingface/datasets/pull/6282", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6282.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6282" }
true
https://api.github.com/repos/huggingface/datasets/issues/6281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6281/comments
https://api.github.com/repos/huggingface/datasets/issues/6281/events
https://github.com/huggingface/datasets/pull/6281
1,928,456,959
PR_kwDODunzps5cBQPd
6,281
Improve documentation of dataset.from_generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4", "events_url": "https://api.github.com/users/hartmans/events{/privacy}", "followers_url": "https://api.github.com/users/hartmans/followers", "following_url": "https://api.github.com/users/hartmans/following{/other_user}", "gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hartmans", "id": 53510, "login": "hartmans", "node_id": "MDQ6VXNlcjUzNTEw", "organizations_url": "https://api.github.com/users/hartmans/orgs", "received_events_url": "https://api.github.com/users/hartmans/received_events", "repos_url": "https://api.github.com/users/hartmans/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hartmans/subscriptions", "type": "User", "url": "https://api.github.com/users/hartmans" }
[]
closed
false
null
[]
null
2
"2023-10-05T14:34:49"
"2023-10-05T19:09:07"
"2023-10-05T18:57:41"
CONTRIBUTOR
null
Improve documentation to clarify sharding behavior (#6270)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6281/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6281.diff", "html_url": "https://github.com/huggingface/datasets/pull/6281", "merged_at": "2023-10-05T18:57:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6281" }
true
https://api.github.com/repos/huggingface/datasets/issues/6280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6280/comments
https://api.github.com/repos/huggingface/datasets/issues/6280/events
https://github.com/huggingface/datasets/issues/6280
1,928,215,278
I_kwDODunzps5y7jru
6,280
Couldn't cast array of type fixed_size_list to Sequence(Value(float64))
{ "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmif", "id": 1000442, "login": "jmif", "node_id": "MDQ6VXNlcjEwMDA0NDI=", "organizations_url": "https://api.github.com/users/jmif/orgs", "received_events_url": "https://api.github.com/users/jmif/received_events", "repos_url": "https://api.github.com/users/jmif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "type": "User", "url": "https://api.github.com/users/jmif" }
[]
open
false
null
[]
null
3
"2023-10-05T12:48:31"
"2023-10-06T13:32:53"
null
NONE
null
### Describe the bug I have a dataset with an embedding column, when I try to map that dataset I get the following exception: ``` Traceback (most recent call last): File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map for rank, done, content in iflatmap_unordered( File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get raise self._value TypeError: Couldn't cast array of type fixed_size_list<item: float>[2] to Sequence(feature=Value(dtype='float32', id=None), length=2, id=None) ``` ### Steps to reproduce the bug Here's a simple repro script: ``` from datasets import Features, Value, Sequence, ClassLabel, Dataset dataset_features = Features({ 'text': Value('string'), 'embedding': Sequence(Value('double'), length=2), 'categories': Sequence(ClassLabel(names=sorted([ 'one', 'two', 'three' ]))), }) dataset = Dataset.from_dict( { 'text': ['A'] * 10000, 'embedding': [[0.0, 0.1]] * 10000, 'categories': [[0]] * 10000, }, features=dataset_features ) def test_mapper(r): r['text'] = list(map(lambda t: t + ' b', r['text'])) return r dataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2) ``` Removing the embedding column fixes the issue! ### Expected behavior The mapping completes successfully. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.17.1 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6280/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6279/comments
https://api.github.com/repos/huggingface/datasets/issues/6279/events
https://github.com/huggingface/datasets/issues/6279
1,928,028,226
I_kwDODunzps5y62BC
6,279
Batched IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4", "events_url": "https://api.github.com/users/lneukom/events{/privacy}", "followers_url": "https://api.github.com/users/lneukom/followers", "following_url": "https://api.github.com/users/lneukom/following{/other_user}", "gists_url": "https://api.github.com/users/lneukom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lneukom", "id": 7010688, "login": "lneukom", "node_id": "MDQ6VXNlcjcwMTA2ODg=", "organizations_url": "https://api.github.com/users/lneukom/orgs", "received_events_url": "https://api.github.com/users/lneukom/received_events", "repos_url": "https://api.github.com/users/lneukom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lneukom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lneukom/subscriptions", "type": "User", "url": "https://api.github.com/users/lneukom" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2023-10-05T11:12:49"
"2023-10-05T11:50:28"
null
NONE
null
### Feature request Hi, could you add an implementation of a batched `IterableDataset`? It already supports an option to do batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator. ### Motivation The current implementation loads each element of a batch individually, which can be very slow for big batch sizes. I did some experiments [here](https://discuss.huggingface.co/t/slow-dataloader-with-big-batch-size/57224) and using batched iteration would speed up data loading significantly. ### Your contribution N/A
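For reference, a small sketch of the existing `.iter(batch_size=...)` API the request builds on; the toy data is illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))}).to_iterable_dataset()

# .iter() yields dict-of-lists batches, but it returns a plain Python
# iterator, so it cannot be handed to a torch DataLoader as a dataset.
for batch in ds.iter(batch_size=4):
    print(batch["x"])  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```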
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6279/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6278/comments
https://api.github.com/repos/huggingface/datasets/issues/6278/events
https://github.com/huggingface/datasets/pull/6278
1,927,957,877
PR_kwDODunzps5b_iKb
6,278
No data files duplicates
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-10-05T10:31:58"
"2023-10-05T14:43:17"
"2023-10-05T14:43:17"
MEMBER
null
I added a new DataFilesSet class to disallow duplicate data files. I also deprecated DataFilesList. EDIT: actually I might just add drop_duplicates=True to `.from_patterns`. Close https://github.com/huggingface/datasets/issues/6259 Close https://github.com/huggingface/datasets/issues/6272 TODO: - [ ] tests - [ ] preserve data files order
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6278/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6278/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6278.diff", "html_url": "https://github.com/huggingface/datasets/pull/6278", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6278" }
true
https://api.github.com/repos/huggingface/datasets/issues/6277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6277/comments
https://api.github.com/repos/huggingface/datasets/issues/6277/events
https://github.com/huggingface/datasets/issues/6277
1,927,044,546
I_kwDODunzps5y3F3C
6,277
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
{ "avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4", "events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}", "followers_url": "https://api.github.com/users/diegogonzalezc/followers", "following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}", "gists_url": "https://api.github.com/users/diegogonzalezc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diegogonzalezc", "id": 66733346, "login": "diegogonzalezc", "node_id": "MDQ6VXNlcjY2NzMzMzQ2", "organizations_url": "https://api.github.com/users/diegogonzalezc/orgs", "received_events_url": "https://api.github.com/users/diegogonzalezc/received_events", "repos_url": "https://api.github.com/users/diegogonzalezc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diegogonzalezc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diegogonzalezc/subscriptions", "type": "User", "url": "https://api.github.com/users/diegogonzalezc" }
[]
open
false
null
[]
null
1
"2023-10-04T22:01:25"
"2023-10-05T14:00:58"
null
NONE
null
### Describe the bug I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows: FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. ### Steps to reproduce the bug https://colab.research.google.com/drive/11xUUFxloClpmqLvDy_Xxfmo3oUzjY5nx#scrollTo=kUn74FigzhHm ### Expected behavior The trained model ### Environment info colab, "paws-x" dataset, DistilRoBERTa-base model
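This error usually means the name was resolved as a local script path rather than a Hub dataset; a hedged sketch of loading by Hub id with an explicit language config ("en" is assumed here as one of paws-x's configs, for illustration):

```python
from datasets import load_dataset

# Loading by Hub id; paws-x requires choosing a language config.
dataset = load_dataset("paws-x", "en")
print(dataset["train"][0])
```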
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6277/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6276/comments
https://api.github.com/repos/huggingface/datasets/issues/6276/events
https://github.com/huggingface/datasets/issues/6276
1,925,961,878
I_kwDODunzps5yy9iW
6,276
I'm trying to fine-tune the openai/whisper model from huggingface using Jupyter Notebook and I keep getting this error
{ "avatar_url": "https://avatars.githubusercontent.com/u/50768065?v=4", "events_url": "https://api.github.com/users/valaofficial/events{/privacy}", "followers_url": "https://api.github.com/users/valaofficial/followers", "following_url": "https://api.github.com/users/valaofficial/following{/other_user}", "gists_url": "https://api.github.com/users/valaofficial/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/valaofficial", "id": 50768065, "login": "valaofficial", "node_id": "MDQ6VXNlcjUwNzY4MDY1", "organizations_url": "https://api.github.com/users/valaofficial/orgs", "received_events_url": "https://api.github.com/users/valaofficial/received_events", "repos_url": "https://api.github.com/users/valaofficial/repos", "site_admin": false, "starred_url": "https://api.github.com/users/valaofficial/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/valaofficial/subscriptions", "type": "User", "url": "https://api.github.com/users/valaofficial" }
[]
open
false
null
[]
null
2
"2023-10-04T11:03:41"
"2023-10-04T22:14:38"
null
NONE
null
### Describe the bug I'm trying to fine-tune the openai/whisper model from huggingface using Jupyter Notebook and I keep getting this error. I'm following the steps in this blog post https://huggingface.co/blog/fine-tune-whisper I tried Google Colab and it works, but because I'm on the free version the training doesn't complete. The error comes in Jupyter Notebook when I run this line `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)` Here is the error message: ``` Map (num_proc=4): 0% 0/2506 [00:52<?, ? examples/s] The above exception was the direct cause of the following exception: NameError Traceback (most recent call last) Cell In[19], line 1 ----> 1 common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4) File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:853, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( --> 853 { 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 ) File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:854, in <dictcomp>(.0) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( 853 { --> 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 ) File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:3189, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3182 logger.info(f"Spawning {num_proc} processes") 3183 with logging.tqdm( 3184 disable=not logging.is_progress_bar_enabled(), 3185 unit=" examples", 3186 total=pbar_total, 3187 desc=(desc or "Map") + f" (num_proc={num_proc})", 3188 ) as pbar: -> 3189 for rank, done, content in iflatmap_unordered( 3190 pool, Dataset._map_single, kwargs_iterable=kwargs_per_job 3191 ): 3192 if done: 3193 shards_done += 1 File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in iflatmap_unordered(pool, func, kwargs_iterable) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results] File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in <listcomp>(.0) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results] File ~\anaconda\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout) 772 return self._value 773 else: --> 774 raise self._value NameError: name 'feature_extractor' is not defined ``` ### Steps to reproduce the bug 1. Follow the steps in this blog post https://huggingface.co/blog/fine-tune-whisper 2. Run this line of code `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)` 3. I'm using Jupyter Notebook from Anaconda ### Expected behavior No error message ### Environment info datasets version: 2.8.0 Python version: 3.11 Windows 10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6276/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6275/comments
https://api.github.com/repos/huggingface/datasets/issues/6275/events
https://github.com/huggingface/datasets/issues/6275
1,921,354,680
I_kwDODunzps5yhYu4
6,275
Would like to Contribute a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4", "events_url": "https://api.github.com/users/vikas70607/events{/privacy}", "followers_url": "https://api.github.com/users/vikas70607/followers", "following_url": "https://api.github.com/users/vikas70607/following{/other_user}", "gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikas70607", "id": 97907750, "login": "vikas70607", "node_id": "U_kgDOBdX0Jg", "organizations_url": "https://api.github.com/users/vikas70607/orgs", "received_events_url": "https://api.github.com/users/vikas70607/received_events", "repos_url": "https://api.github.com/users/vikas70607/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions", "type": "User", "url": "https://api.github.com/users/vikas70607" }
[]
open
false
null
[]
null
1
"2023-10-02T07:00:21"
"2023-10-02T15:56:34"
null
NONE
null
I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no dataset available online, I made this dataset myself and would now like to contribute it to the community.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6275/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6274/comments
https://api.github.com/repos/huggingface/datasets/issues/6274/events
https://github.com/huggingface/datasets/issues/6274
1,921,036,328
I_kwDODunzps5ygLAo
6,274
FileNotFoundError for dataset with multiple builder config
{ "avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4", "events_url": "https://api.github.com/users/LouisChen15/events{/privacy}", "followers_url": "https://api.github.com/users/LouisChen15/followers", "following_url": "https://api.github.com/users/LouisChen15/following{/other_user}", "gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LouisChen15", "id": 97120485, "login": "LouisChen15", "node_id": "U_kgDOBcnw5Q", "organizations_url": "https://api.github.com/users/LouisChen15/orgs", "received_events_url": "https://api.github.com/users/LouisChen15/received_events", "repos_url": "https://api.github.com/users/LouisChen15/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions", "type": "User", "url": "https://api.github.com/users/LouisChen15" }
[]
closed
false
null
[]
null
1
"2023-10-01T23:45:56"
"2023-10-02T20:09:38"
"2023-10-02T20:09:38"
NONE
null
### Describe the bug When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen. FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow' The "XXX.incomplete" folder in the cache folder of my dataset will disappear before "generating test split", which does not happen when no config name is entered and the config name is "default": C:\Users\chenx\.cache\huggingface\datasets\my_dataset\0_shot_multiple_choice\1.0.0 The folder that is supposed to remain under the above directory will disappear, and the data generator will not have a place to generate data into. ### Steps to reproduce the bug test = load_dataset('my_dataset', '0_shot_multiple_choice') ### Expected behavior FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow' ### Environment info datasets 2.14.5 python 3.8.18
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6274/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6273/comments
https://api.github.com/repos/huggingface/datasets/issues/6273/events
https://github.com/huggingface/datasets/issues/6273
1,920,922,260
I_kwDODunzps5yfvKU
6,273
Broken Link to PubMed Abstracts dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4", "events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}", "followers_url": "https://api.github.com/users/sameemqureshi/followers", "following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}", "gists_url": "https://api.github.com/users/sameemqureshi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sameemqureshi", "id": 100606327, "login": "sameemqureshi", "node_id": "U_kgDOBf8hdw", "organizations_url": "https://api.github.com/users/sameemqureshi/orgs", "received_events_url": "https://api.github.com/users/sameemqureshi/received_events", "repos_url": "https://api.github.com/users/sameemqureshi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sameemqureshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sameemqureshi/subscriptions", "type": "User", "url": "https://api.github.com/users/sameemqureshi" }
[]
open
false
null
[]
null
3
"2023-10-01T19:08:48"
"2023-10-02T16:40:18"
null
NONE
null
### Describe the bug The link provided for the dataset is broken: data_files = [https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url) ### Steps to reproduce the bug Steps to reproduce: 1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url) 2) In the section "What is the Pile?", you can see a code snippet that contains the broken link. ### Expected behavior The link should redirect to the "PubMed Abstracts dataset" as expected. ### Environment info .
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6273/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6272/comments
https://api.github.com/repos/huggingface/datasets/issues/6272/events
https://github.com/huggingface/datasets/issues/6272
1,920,831,487
I_kwDODunzps5yfY__
6,272
Duplicate `data_files` when named `<split>/<split>.parquet`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
7
"2023-10-01T15:43:56"
"2023-10-05T10:32:27"
null
MEMBER
null
e.g. with `u23429/stock_1_minute_ticker` ```ipython In [1]: from datasets import * In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker") Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s] In [3]: b.config.data_files Out[3]: {NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'], NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'], NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet', 'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']} ``` This bug is present in the current `datasets` 2.14.5 and also on `main` even after https://github.com/huggingface/datasets/pull/6244 cc @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6272/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6271/comments
https://api.github.com/repos/huggingface/datasets/issues/6271/events
https://github.com/huggingface/datasets/issues/6271
1,920,420,295
I_kwDODunzps5yd0nH
6,271
Overwriting Split overwrites data but not metadata, corrupting dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4", "events_url": "https://api.github.com/users/govindrai/events{/privacy}", "followers_url": "https://api.github.com/users/govindrai/followers", "following_url": "https://api.github.com/users/govindrai/following{/other_user}", "gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/govindrai", "id": 13859249, "login": "govindrai", "node_id": "MDQ6VXNlcjEzODU5MjQ5", "organizations_url": "https://api.github.com/users/govindrai/orgs", "received_events_url": "https://api.github.com/users/govindrai/received_events", "repos_url": "https://api.github.com/users/govindrai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/govindrai/subscriptions", "type": "User", "url": "https://api.github.com/users/govindrai" }
[]
open
false
null
[]
null
0
"2023-09-30T22:37:31"
"2023-09-30T22:37:31"
null
NONE
null
### Describe the bug I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupting the dataset. Read below. **Current Behavior** When I push to an existing split I get this error: `ValueError: Split complexRoofLocation_01Apr2023_to_31May2023test already present` This seems to suggest that the library doesn't support overwriting splits. **Potential Bug** What’s strange is that datasets, despite the operation erroring out with the ValueError above, does, in fact, overwrite the split: `Pushing dataset shards to the dataset hub: 100% [.....................] 1/1 [00:00<00:00, 55.04it/s]` Even though you got an error message and your code fails, your dataset is now changed. That seems like a bug. Either don't change the dataset, or don't throw the error and allow the script to proceed. **Additional Bug** While it overwrites the split, it doesn’t overwrite the split’s information. Because of this when you pull down the dataset you may end up getting a `NonMatchingSplitsSizesError` if the size of the dataset during the overwrite is different. For example, my original split had 5 rows, but on my overwrite, I only had 4. Then when I try to download the dataset, I get a `NonMatchingSplitsSizesError` because the dataset's data.json states there’s 5 but only 4 exist in the split. **Expected Behavior** This corrupts the dataset rendering it unusable (until you take manual intervention). Either the library should let the overwrite happen (which it does but should also update the metadata) or it shouldn’t do anything. ### Steps to reproduce the bug [Colab Notebook](https://colab.research.google.com/drive/1bqVkD06Ngs9MQNdSk_ygCG6y1UqXA4pC?usp=sharing) ### Expected behavior The split should be overwritten and I should be able to use the new version of the dataset without issue. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6271/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6270/comments
https://api.github.com/repos/huggingface/datasets/issues/6270/events
https://github.com/huggingface/datasets/issues/6270
1,920,329,373
I_kwDODunzps5ydead
6,270
Dataset.from_generator raises with sharded gen_args
{ "avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4", "events_url": "https://api.github.com/users/hartmans/events{/privacy}", "followers_url": "https://api.github.com/users/hartmans/followers", "following_url": "https://api.github.com/users/hartmans/following{/other_user}", "gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hartmans", "id": 53510, "login": "hartmans", "node_id": "MDQ6VXNlcjUzNTEw", "organizations_url": "https://api.github.com/users/hartmans/orgs", "received_events_url": "https://api.github.com/users/hartmans/received_events", "repos_url": "https://api.github.com/users/hartmans/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hartmans/subscriptions", "type": "User", "url": "https://api.github.com/users/hartmans" }
[]
open
false
null
[]
null
6
"2023-09-30T16:50:06"
"2023-10-03T01:21:39"
null
CONTRIBUTOR
null
### Describe the bug According to the docs of Dataset.from_generator: ``` gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded dataset by passing the list of shards in `gen_kwargs`. ``` So I'd expect that if gen_kwargs was a list, then my generator would be called once for each element in the list with the dict in the list for that element. It doesn't work that way though. ### Steps to reproduce the bug ```python #!/usr/bin/python from pathlib import Path import datasets def process_yaml(file): yield dict(example=42) if __name__ == '__main__': import sys dir = Path(sys.argv[0]).parent ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')], ) ds.to_json('training.jsonl') ``` ``` Generating train split: 0 examples [00:00, ? examples/s] Traceback (most recent call last): File "/tmp/dataset_bug.py", line 13, in <module> ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1072, in from_generator ).read() ^^^^^^ File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/io/generator.py", line 47, in read self.builder.download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare super()._download_and_prepare( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1656, in _prepare_split_single generator = self._generate_examples(**gen_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: datasets.packaged_modules.generator.generator.Generator._generate_examples() argument after ** must be a mapping, not list ``` ### Expected behavior I would expect that process_yaml would be called once for each yaml file in the directory where the script is run. I also tried with the list being in gen_kwargs, but in that case process_yaml gets called with a list. ### Environment info - `datasets` version: 2.14.6.dev0 (git commit 0cc77d7f45c7369; also tested with 2.14.0) - Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36 - Python version: 3.11.2 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
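One reading of the quoted doc string, consistent with the error message, is that `gen_kwargs` must itself be a dict, with sharding expressed as a list value that `num_proc` splits across jobs. A hedged sketch under that reading (file names are illustrative and never opened):

```python
import datasets

def process_yaml(files):
    # with num_proc > 1, each job receives a sub-list of `files`
    for file in files:
        yield {"example": 42, "source": file}

shards = ["a.yml", "b.yml", "c.yml", "d.yml"]
ds = datasets.Dataset.from_generator(
    process_yaml,
    gen_kwargs={"files": shards},  # a dict whose value is the shard list
    num_proc=2,
)
print(len(ds))  # 4
```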
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6270/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6270/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6269/comments
https://api.github.com/repos/huggingface/datasets/issues/6269/events
https://github.com/huggingface/datasets/pull/6269
1,919,572,790
PR_kwDODunzps5bjbDc
6,269
Test single commit `push_to_hub` API
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
4
"2023-09-29T16:22:31"
"2023-10-02T14:53:06"
null
CONTRIBUTOR
null
Test PR to check the compatibility with https://github.com/huggingface/huggingface_hub/pull/1699 cc @Wauplin
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 1, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/6269/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6269/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6269.diff", "html_url": "https://github.com/huggingface/datasets/pull/6269", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6269.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6269" }
true
https://api.github.com/repos/huggingface/datasets/issues/6268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6268/comments
https://api.github.com/repos/huggingface/datasets/issues/6268/events
https://github.com/huggingface/datasets/pull/6268
1,919,010,645
PR_kwDODunzps5bhgs7
6,268
Add repo_id to DatasetInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
9
"2023-09-29T10:24:55"
"2023-10-01T15:29:45"
null
MEMBER
null
```python from datasets import load_dataset ds = load_dataset("lhoestq/demo1", split="train") ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"]) print(ds.repo_id) # lhoestq/demo1 ``` - repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict - repo_id is set to None when concatenating datasets with different repo ids related to https://github.com/huggingface/datasets/issues/4129 TODO: - [ ] discuss if it's ok for now - [ ] tests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6268/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6268.diff", "html_url": "https://github.com/huggingface/datasets/pull/6268", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6268" }
true
https://api.github.com/repos/huggingface/datasets/issues/6267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6267/comments
https://api.github.com/repos/huggingface/datasets/issues/6267/events
https://github.com/huggingface/datasets/issues/6267
1,916,443,262
I_kwDODunzps5yOpp-
6,267
Multi label class encoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmif", "id": 1000442, "login": "jmif", "node_id": "MDQ6VXNlcjEwMDA0NDI=", "organizations_url": "https://api.github.com/users/jmif/orgs", "received_events_url": "https://api.github.com/users/jmif/received_events", "repos_url": "https://api.github.com/users/jmif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "type": "User", "url": "https://api.github.com/users/jmif" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
5
"2023-09-27T22:48:08"
"2023-10-06T13:40:28"
null
NONE
null
### Feature request I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels. Here's an example of what I'd like to encode: ``` data = { 'text': ['one', 'two', 'three', 'four'], 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']] } dataset = Dataset.from_dict(data) dataset = dataset.class_encode_column('labels') ``` I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base) and from what I noticed the `ClassLabel` feature is still stored as an underlying raw data type of int, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from arrow. I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented out tests, going for speed of POC here and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset is correct and the dataset feature is as expected. After digging more I did notice a few issues: - After loading from disk I noticed the type of the `labels` class is `Sequence` not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this. - I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior. ### Motivation See above - would like to support multi label class encodings. ### Your contribution This would be a big help for us and we're open to contributing but I'll likely need some guidance on how to implement to fit the encode / decode flow. Some suggestions on tests would be great too; I'm guessing in addition to the class encode tests (that I'll need to expand) we'll need encode / decode tests.
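Until a dedicated feature exists, one commonly suggested workaround is casting the column to `Sequence(ClassLabel(...))`, which stores each label list as integer ids; a hedged sketch using this issue's example data (not a replacement for the requested `class_encode_column` support, which would infer the names automatically):

```python
from datasets import ClassLabel, Dataset, Sequence

data = {
    "text": ["one", "two", "three", "four"],
    "labels": [["a", "b"], ["b"], ["b", "c"], ["a", "d"]],
}
dataset = Dataset.from_dict(data)

# cast_column encodes each string label to its ClassLabel integer id,
# and the name<->id mapping lives in the dataset's features.
dataset = dataset.cast_column(
    "labels", Sequence(ClassLabel(names=["a", "b", "c", "d"]))
)
print(dataset[0]["labels"])                            # [0, 1]
print(dataset.features["labels"].feature.num_classes)  # 4
```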
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6267/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6266/comments
https://api.github.com/repos/huggingface/datasets/issues/6266/events
https://github.com/huggingface/datasets/pull/6266
1,916,334,394
PR_kwDODunzps5bYYb8
6,266
Use LibYAML with PyYAML if available
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
[]
open
false
null
[]
null
5
"2023-09-27T21:13:36"
"2023-09-28T14:29:24"
null
CONTRIBUTOR
null
PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the `load` and `dump` methods. To use it, a user first needs to install a PyYAML version built with LibYAML (not available on PyPI; it needs to be installed manually). Then, to actually use them, PyYAML suggests importing the LibYAML versions of `Loader` and `Dumper` and falling back to the default ones when they are unavailable. This PR implements that change. See the [PyYAML docs](https://pyyaml.org/wiki/PyYAMLDocumentation) for more info. This change was motivated by trying to use [the SugarCREPE datasets on the Hub](https://huggingface.co/datasets?search=sugarcrepe) provided by [the org HuggingFaceM4](https://huggingface.co/datasets/HuggingFaceM4). These datasets save a lot of information (~1MB) in the YAML metadata of the `README.md` file, and I noticed this slowed down the data loading process. BTW, I also noticed that generating cache files for them is slow because it tries to hash an instance of `DatasetInfo`, which in turn contains all this metadata. Also, I changed two list comprehensions into generator expressions to avoid allocating extra memory unnecessarily. There's also [an issue in PyYAML suggesting to make this automatic](https://github.com/yaml/pyyaml/issues/437).
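For reference, a minimal sketch of the fallback pattern described above, following what the PyYAML docs recommend:

```python
import yaml

# Prefer the LibYAML (C) classes when PyYAML was built against LibYAML,
# and fall back to the pure-Python implementations otherwise.
try:
    from yaml import CSafeDumper as SafeDumper, CSafeLoader as SafeLoader
except ImportError:
    from yaml import SafeDumper, SafeLoader

metadata = yaml.load("pretty_name: Demo\nconfigs: [a, b]", Loader=SafeLoader)
dumped = yaml.dump(metadata, Dumper=SafeDumper)
```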
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6266/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6266.diff", "html_url": "https://github.com/huggingface/datasets/pull/6266", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6266" }
true
https://api.github.com/repos/huggingface/datasets/issues/6265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6265/comments
https://api.github.com/repos/huggingface/datasets/issues/6265/events
https://github.com/huggingface/datasets/pull/6265
1,915,651,566
PR_kwDODunzps5bWDfc
6,265
Remove `apache_beam` import in `BeamBasedBuilder._save_info`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
"2023-09-27T13:56:34"
"2023-09-28T18:34:02"
"2023-09-28T18:23:35"
CONTRIBUTOR
null
... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS) Fix https://github.com/huggingface/datasets/issues/6260
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6265/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6265/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6265.diff", "html_url": "https://github.com/huggingface/datasets/pull/6265", "merged_at": "2023-09-28T18:23:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6265" }
true
https://api.github.com/repos/huggingface/datasets/issues/6264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6264/comments
https://api.github.com/repos/huggingface/datasets/issues/6264/events
https://github.com/huggingface/datasets/pull/6264
1,914,958,781
PR_kwDODunzps5bTvzh
6,264
Temporarily pin tensorflow < 2.14.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
4
"2023-09-27T08:16:06"
"2023-09-27T08:45:24"
"2023-09-27T08:36:39"
MEMBER
null
Temporarily pin tensorflow < 2.14.0 until a permanent solution is found. Hotfix for #6263.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6264/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6264/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6264.diff", "html_url": "https://github.com/huggingface/datasets/pull/6264", "merged_at": "2023-09-27T08:36:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6264" }
true
https://api.github.com/repos/huggingface/datasets/issues/6263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6263/comments
https://api.github.com/repos/huggingface/datasets/issues/6263/events
https://github.com/huggingface/datasets/issues/6263
1,914,951,043
I_kwDODunzps5yI9WD
6,263
CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python'
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2023-09-27T08:12:05"
"2023-09-27T08:36:40"
"2023-09-27T08:36:40"
MEMBER
null
Python 3.10 CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262 ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py) ``` ``` _________________________ TempSeedTest.test_tensorflow _________________________ [gw1] linux -- Python 3.10.13 /opt/hostedtoolcache/Python/3.10.13/x64/bin/python self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow> @require_tf def test_tensorflow(self): import tensorflow as tf from tensorflow.keras import layers model = layers.Dense(2) def gen_random_output(): x = tf.random.uniform((1, 3)) return model(x).numpy() > with temp_seed(42, set_tensorflow=True): tests/test_py_utils.py:155: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/contextlib.py:135: in __enter__ return next(self.gen) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ seed = 42, set_pytorch = False, set_tensorflow = True @contextmanager def temp_seed(seed: int, set_pytorch=False, set_tensorflow=False): """Temporarily set the random seed. This works for python numpy, pytorch and tensorflow.""" np_state = np.random.get_state() np.random.seed(seed) if set_pytorch and config.TORCH_AVAILABLE: import torch torch_state = torch.random.get_rng_state() torch.random.manual_seed(seed) if torch.cuda.is_available(): torch_cuda_states = torch.cuda.get_rng_state_all() torch.cuda.manual_seed_all(seed) if set_tensorflow and config.TF_AVAILABLE: import tensorflow as tf > from tensorflow.python import context as tfpycontext E ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py) /opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/datasets/utils/py_utils.py:257: ImportError ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6263/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6263/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6262/comments
https://api.github.com/repos/huggingface/datasets/issues/6262/events
https://github.com/huggingface/datasets/pull/6262
1,914,895,459
PR_kwDODunzps5bTh6H
6,262
Fix CI 404 errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
9
"2023-09-27T07:40:18"
"2023-09-28T15:39:16"
"2023-09-28T15:30:40"
MEMBER
null
Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884 ``` FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fb99-4a52c561752ece3d77eb6d57;2b61cae4-613d-4a73-bbb1-2faf9e32b02d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_audio - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbb2-0333dd666d42f0e173c2bb68;dfdc4271-b49b-4008-8c49-f05cf7c1d53d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_custom_splits - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbca-167690694f39770a5b3a444e;baeaa905-0a57-4585-ac97-9aaae12dd47d) Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. ``` I think this can be caused by collisions in temporary repository IDs because we create them in multiprocessing: ```python with temporary_repo(f"{CI_HUB_USER}/test-{int(time.time() * 10e3)}") as ds_name: ``` This can also be caused when there is another issue that does not allow the creation of the repository, thus making it impossible to delete it. This PR tries to fix this issue by increasing the precision of the number on the repository ID: `10e6` instead of `10e3`. Additionally, this PR catches RepositoryNotFoundError.
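A rough sketch of the two mitigations described above (a higher-precision timestamp in the repo ID plus a cleanup that tolerates missing repos); the `CI_HUB_USER` value and the `HfApi` wiring are placeholders, not the repo's actual test fixtures:

```python
import time
from contextlib import contextmanager

from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

CI_HUB_USER = "ci-user"  # placeholder for the real CI account
api = HfApi(endpoint="https://hub-ci.huggingface.co")

@contextmanager
def temporary_repo(repo_id: str):
    try:
        yield repo_id
    finally:
        try:
            api.delete_repo(repo_id, repo_type="dataset")
        except RepositoryNotFoundError:
            # The repo was never created (or was already deleted): nothing to do.
            pass

# Microsecond-level precision makes ID collisions across pytest workers unlikely.
repo_id = f"{CI_HUB_USER}/test-{int(time.time() * 10e6)}"
```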
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6262/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6262/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6262.diff", "html_url": "https://github.com/huggingface/datasets/pull/6262", "merged_at": "2023-09-28T15:30:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6262" }
true
https://api.github.com/repos/huggingface/datasets/issues/6261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6261/comments
https://api.github.com/repos/huggingface/datasets/issues/6261/events
https://github.com/huggingface/datasets/issues/6261
1,913,813,178
I_kwDODunzps5yEni6
6,261
Can't load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/37955817?v=4", "events_url": "https://api.github.com/users/joaopedrosdmm/events{/privacy}", "followers_url": "https://api.github.com/users/joaopedrosdmm/followers", "following_url": "https://api.github.com/users/joaopedrosdmm/following{/other_user}", "gists_url": "https://api.github.com/users/joaopedrosdmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joaopedrosdmm", "id": 37955817, "login": "joaopedrosdmm", "node_id": "MDQ6VXNlcjM3OTU1ODE3", "organizations_url": "https://api.github.com/users/joaopedrosdmm/orgs", "received_events_url": "https://api.github.com/users/joaopedrosdmm/received_events", "repos_url": "https://api.github.com/users/joaopedrosdmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joaopedrosdmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joaopedrosdmm/subscriptions", "type": "User", "url": "https://api.github.com/users/joaopedrosdmm" }
[]
closed
false
null
[]
null
5
"2023-09-26T15:46:25"
"2023-10-05T10:23:23"
"2023-10-05T10:23:22"
NONE
null
### Describe the bug Can't seem to load the JourneyDB dataset. It throws the following error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[15], line 2 1 # If the dataset is gated/private, make sure you have run huggingface-cli login ----> 2 dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1661 ignore_verifications = ignore_verifications or save_infos 1663 # Create a dataset builder -> 1664 builder_instance = load_dataset_builder( 1665 path=path, 1666 name=name, 1667 data_dir=data_dir, 1668 data_files=data_files, 1669 cache_dir=cache_dir, 1670 features=features, 1671 download_config=download_config, 1672 download_mode=download_mode, 1673 revision=revision, 1674 use_auth_token=use_auth_token, 1675 **config_kwargs, 1676 ) 1678 # Return iterable dataset in case of streaming 1679 if streaming: File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1488 download_config = download_config.copy() if download_config else DownloadConfig() 1489 download_config.use_auth_token = use_auth_token -> 1490 dataset_module = dataset_module_factory( 1491 path, 1492 revision=revision, 1493 download_config=download_config, 1494 download_mode=download_mode, 1495 data_dir=data_dir, 1496 data_files=data_files, 1497 ) 1499 # Get dataset builder class from the processing script 1500 builder_cls = import_main_class(dataset_module.module_path) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1238, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1237 if isinstance(e1, FileNotFoundError): -> 1238 raise FileNotFoundError( 1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1241 ) from None 1242 raise e1 from None 1243 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/JourneyDB/JourneyDB/JourneyDB.py or any data file in the same directory. Couldn't find 'JourneyDB/JourneyDB' on the Hugging Face Hub either: FileNotFoundError: Unable to find data in dataset repository JourneyDB/JourneyDB with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` ### Steps to reproduce the bug 1) ``` from huggingface_hub import notebook_login notebook_login() ``` 2) ``` !pip install -q datasets from datasets import load_dataset ``` 3) `dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True)` ### Expected behavior Load the dataset ### Environment info Notebook
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6261/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6261/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6260/comments
https://api.github.com/repos/huggingface/datasets/issues/6260/events
https://github.com/huggingface/datasets/issues/6260
1,912,593,466
I_kwDODunzps5x_9w6
6,260
REUSE_DATASET_IF_EXISTS don't work
{ "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rangehow", "id": 88258534, "login": "rangehow", "node_id": "MDQ6VXNlcjg4MjU4NTM0", "organizations_url": "https://api.github.com/users/rangehow/orgs", "received_events_url": "https://api.github.com/users/rangehow/received_events", "repos_url": "https://api.github.com/users/rangehow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "type": "User", "url": "https://api.github.com/users/rangehow" }
[]
closed
false
null
[]
null
3
"2023-09-26T03:02:16"
"2023-09-28T18:23:36"
"2023-09-28T18:23:36"
NONE
null
### Describe the bug I use the following code to download the natural_questions dataset. Even though I have already downloaded it completely, the next time I run this code a new download procedure starts and overwrites the original /data/lxy/NQ: config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/data/lxy/NQ',download_desc='NQ') data=datasets.load_dataset('natural_questions',cache_dir=r'/data/lxy/NQ',download_config=config,download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS) --- Since I don't have apache_beam installed, it throws an exception. After I pip install apache_beam, the download restarts. ![image](https://github.com/huggingface/datasets/assets/88258534/f28ce7fe-29ea-4348-b87f-e69182a8bd41) ### Steps to reproduce the bug Run these two lines of code: config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/data/lxy/NQ',download_desc='NQ') data=datasets.load_dataset('natural_questions',cache_dir=r'/data/lxy/NQ',download_config=config,download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS) ### Expected behavior The download behavior should correctly follow DownloadMode. ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - PyArrow version: 11.0.0 - Pandas version: 2.0.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6260/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6260/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6259/comments
https://api.github.com/repos/huggingface/datasets/issues/6259/events
https://github.com/huggingface/datasets/issues/6259
1,911,965,758
I_kwDODunzps5x9kg-
6,259
Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
{ "avatar_url": "https://avatars.githubusercontent.com/u/141304309?v=4", "events_url": "https://api.github.com/users/MF-FOOM/events{/privacy}", "followers_url": "https://api.github.com/users/MF-FOOM/followers", "following_url": "https://api.github.com/users/MF-FOOM/following{/other_user}", "gists_url": "https://api.github.com/users/MF-FOOM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MF-FOOM", "id": 141304309, "login": "MF-FOOM", "node_id": "U_kgDOCGwh9Q", "organizations_url": "https://api.github.com/users/MF-FOOM/orgs", "received_events_url": "https://api.github.com/users/MF-FOOM/received_events", "repos_url": "https://api.github.com/users/MF-FOOM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MF-FOOM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MF-FOOM/subscriptions", "type": "User", "url": "https://api.github.com/users/MF-FOOM" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
1
"2023-09-25T17:20:54"
"2023-09-26T17:54:08"
null
NONE
null
### Describe the bug When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets. ### Steps to reproduce the bug 1. Create a root directory, e.g., "testing123". 2. Under "testing123", create two subdirectories: "train" and "val". 3. Create and save a parquet file with 3 unique rows in the "train" subdirectory. 4. Create and save a parquet file with 4 unique rows in the "val" subdirectory. 5. Load the datasets from the root directory using `load_dataset("parquet", data_dir="testing123")`. 6. Iterate through the datasets and print the rows. Here's a Colab reproducing these steps: https://colab.research.google.com/drive/11NEdImnQ3OqJlwKSHRMhr7jCBesNdLY4?usp=sharing ### Expected behavior - Training set should contain 3 unique rows. - Validation set should contain 4 unique rows. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.2 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
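Until the inference bug is fixed, a sketch of a workaround that sidesteps directory-name inference by mapping each split to its files explicitly (paths follow the repro above):

```python
from datasets import load_dataset

# Explicit data_files avoids the root-directory split inference
# that produces the duplicated rows described in the issue.
dataset = load_dataset(
    "parquet",
    data_files={
        "train": "testing123/train/*.parquet",
        "validation": "testing123/val/*.parquet",
    },
)
print(dataset["train"].num_rows, dataset["validation"].num_rows)  # expected: 3 4
```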
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6259/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6259/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6258/comments
https://api.github.com/repos/huggingface/datasets/issues/6258/events
https://github.com/huggingface/datasets/pull/6258
1,911,445,373
PR_kwDODunzps5bHxHl
6,258
[DOCS] Fix typo: Elasticsearch
{ "avatar_url": "https://avatars.githubusercontent.com/u/32779855?v=4", "events_url": "https://api.github.com/users/leemthompo/events{/privacy}", "followers_url": "https://api.github.com/users/leemthompo/followers", "following_url": "https://api.github.com/users/leemthompo/following{/other_user}", "gists_url": "https://api.github.com/users/leemthompo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leemthompo", "id": 32779855, "login": "leemthompo", "node_id": "MDQ6VXNlcjMyNzc5ODU1", "organizations_url": "https://api.github.com/users/leemthompo/orgs", "received_events_url": "https://api.github.com/users/leemthompo/received_events", "repos_url": "https://api.github.com/users/leemthompo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leemthompo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leemthompo/subscriptions", "type": "User", "url": "https://api.github.com/users/leemthompo" }
[]
closed
false
null
[]
null
2
"2023-09-25T12:50:59"
"2023-09-26T14:55:35"
"2023-09-26T13:36:40"
CONTRIBUTOR
null
Not ElasticSearch :)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6258/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6258.diff", "html_url": "https://github.com/huggingface/datasets/pull/6258", "merged_at": "2023-09-26T13:36:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6258.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6258" }
true
https://api.github.com/repos/huggingface/datasets/issues/6257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6257/comments
https://api.github.com/repos/huggingface/datasets/issues/6257/events
https://github.com/huggingface/datasets/issues/6257
1,910,741,044
I_kwDODunzps5x45g0
6,257
HfHubHTTPError - exceeded our hourly quotas for action: commit
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
[]
open
false
null
[]
null
4
"2023-09-25T06:11:43"
"2023-09-27T07:04:59"
null
NONE
null
### Describe the bug I try to upload a very large dataset of images, and get the following error: ``` File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit, run_as_future) 2710 try: 2711 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params) -> 2712 hf_raise_for_status(commit_resp, endpoint_name="commit") 2713 except RepositoryNotFoundError as e: 2714 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE) File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name) 297 raise BadRequestError(message, response=response) from e 299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information 300 # as well (request id and/or server error message) --> 301 raise HfHubHTTPError(str(e), response=response) from e HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/yuvalkirstain/pickapic_v2/commit/main (Request ID: Root=1-65112399-12d63f7d7f28bfa40a36a0fd) You have exceeded our hourly quotas for action: commit. We invite you to retry later. ``` this makes it much less convenient to host large datasets on HF hub. ### Steps to reproduce the bug Upload a very large dataset of images ### Expected behavior the upload to work well ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
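One way to reduce the number of commits (and thus stay under the hourly quota) is to upload the prepared files in a single commit with `huggingface_hub`; a sketch, where the local folder path is a placeholder:

```python
from huggingface_hub import HfApi

api = HfApi()
# upload_folder produces one commit for the whole folder, instead of the
# many per-shard commits that tend to trip the hourly commit quota.
api.upload_folder(
    repo_id="yuvalkirstain/pickapic_v2",
    repo_type="dataset",
    folder_path="path/to/prepared_dataset",  # placeholder
)
```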
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6257/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6257/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6256/comments
https://api.github.com/repos/huggingface/datasets/issues/6256/events
https://github.com/huggingface/datasets/issues/6256
1,910,275,199
I_kwDODunzps5x3Hx_
6,256
load_dataset() function's cache_dir does not seems to work
{ "avatar_url": "https://avatars.githubusercontent.com/u/171831?v=4", "events_url": "https://api.github.com/users/andyzhu/events{/privacy}", "followers_url": "https://api.github.com/users/andyzhu/followers", "following_url": "https://api.github.com/users/andyzhu/following{/other_user}", "gists_url": "https://api.github.com/users/andyzhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andyzhu", "id": 171831, "login": "andyzhu", "node_id": "MDQ6VXNlcjE3MTgzMQ==", "organizations_url": "https://api.github.com/users/andyzhu/orgs", "received_events_url": "https://api.github.com/users/andyzhu/received_events", "repos_url": "https://api.github.com/users/andyzhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andyzhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyzhu/subscriptions", "type": "User", "url": "https://api.github.com/users/andyzhu" }
[]
open
false
null
[]
null
1
"2023-09-24T15:34:06"
"2023-09-27T13:40:45"
null
NONE
null
### Describe the bug datasets version: 2.14.5 When trying to run the following command: trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir') I keep getting an error saying the command does not have permission to the default cache directory on my MacBook Pro machine. It seems the cache_dir parameter cannot change the dataset saving directory from the default; whatever is explained in https://huggingface.co/docs/datasets/cache does not seem to work. ### Steps to reproduce the bug Run the command above with datasets version 2.14.5 and observe the permission error on the default cache directory instead of the directory passed as cache_dir. ### Expected behavior The dataset should be saved to the directory that cache_dir points to. ### Environment info datasets version: 2.14.5 macOS: Ventura 13.4.1 (c)
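One thing worth ruling out: `cache_dir` only relocates the datasets cache, while the dataset-script/modules cache still defaults to `~/.cache/huggingface`, which could explain a permission error on the default location even when `cache_dir` is set. A sketch that relocates everything via `HF_HOME` instead (the path is a placeholder):

```python
import os

# Must be set before importing `datasets`, which reads it at import time.
os.environ["HF_HOME"] = "/path/to/my/dir"

from datasets import load_dataset

trec = load_dataset("trec", split="train[:1000]")
print(trec.cache_files)  # the arrow files should now live under /path/to/my/dir
```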
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6256/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6255/comments
https://api.github.com/repos/huggingface/datasets/issues/6255/events
https://github.com/huggingface/datasets/pull/6255
1,909,842,977
PR_kwDODunzps5bCioS
6,255
Parallelize builder configs creation
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2023-09-23T11:56:20"
"2023-09-26T15:44:47"
"2023-09-26T15:44:19"
MEMBER
null
For datasets with lots of configs defined in YAML. E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` goes from >1 min to 15 sec.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6255/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6255.diff", "html_url": "https://github.com/huggingface/datasets/pull/6255", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6255" }
true
https://api.github.com/repos/huggingface/datasets/issues/6254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6254/comments
https://api.github.com/repos/huggingface/datasets/issues/6254/events
https://github.com/huggingface/datasets/issues/6254
1,909,672,104
I_kwDODunzps5x00io
6,254
Dataset.from_generator() cost much more time in vscode debugging mode then running mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/56437469?v=4", "events_url": "https://api.github.com/users/dontnet-wuenze/events{/privacy}", "followers_url": "https://api.github.com/users/dontnet-wuenze/followers", "following_url": "https://api.github.com/users/dontnet-wuenze/following{/other_user}", "gists_url": "https://api.github.com/users/dontnet-wuenze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dontnet-wuenze", "id": 56437469, "login": "dontnet-wuenze", "node_id": "MDQ6VXNlcjU2NDM3NDY5", "organizations_url": "https://api.github.com/users/dontnet-wuenze/orgs", "received_events_url": "https://api.github.com/users/dontnet-wuenze/received_events", "repos_url": "https://api.github.com/users/dontnet-wuenze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dontnet-wuenze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dontnet-wuenze/subscriptions", "type": "User", "url": "https://api.github.com/users/dontnet-wuenze" }
[]
closed
false
null
[]
null
1
"2023-09-23T02:07:26"
"2023-10-03T14:42:53"
"2023-10-03T14:42:53"
NONE
null
### Describe the bug Hey there, I'm using Dataset.from_generator() to convert a torch_dataset to a Hugging Face Dataset. However, when I debug my code in VS Code, I find that Dataset.from_generator() runs really slowly, taking up to 20 times longer than running the script in the terminal. ### Steps to reproduce the bug I wrote a simple test script: ```python import os from functools import partial from typing import Callable import torch import time from torch.utils.data import Dataset as TorchDataset from datasets import load_from_disk, Dataset as HFDataset import torch from torch.utils.data import Dataset class SimpleDataset(Dataset): def __init__(self, data): self.data = data self.keys = list(data[0].keys()) def __len__(self): return len(self.data) def __getitem__(self, index): sample = self.data[index] return {key: sample[key] for key in self.keys} def TorchDataset2HuggingfaceDataset(torch_dataset: TorchDataset, cache_dir: str = None ) -> HFDataset: """ convert torch dataset to huggingface dataset """ generator : Callable[[], TorchDataset] = lambda: (sample for sample in torch_dataset) return HFDataset.from_generator(generator, cache_dir=cache_dir) if __name__ == '__main__': data = [ {'id': 1, 'name': 'Alice'}, {'id': 2, 'name': 'Bob'}, {'id': 3, 'name': 'Charlie'} ] torch_dataset = SimpleDataset(data) start_time = time.time() huggingface_dataset = TorchDataset2HuggingfaceDataset(torch_dataset) end_time = time.time() print("time: ", end_time - start_time) print(huggingface_dataset) ``` ### Expected behavior On my machine this test reports a running time of 0.086 s in the terminal, but 0.25 s in debugging mode in VS Code, which is much longer than expected. I'd like to know whether there is anything wrong in the code or whether this is just because of debugging. I traced the code and found the function where it gets stuck: ```python def create_config_id( self, config_kwargs: dict, custom_features: Optional[Features] = None, ) -> str: ... # stuck in this line suffix = Hasher.hash(config_kwargs_to_add_to_suffix) ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.17.2 - PyArrow version: 11.0.0 - Pandas version: 2.0.1
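For small in-memory data like the example above, a sketch of an alternative that skips the generator path (and the `create_config_id` hashing it triggers) entirely:

```python
from datasets import Dataset

data = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
    {"id": 3, "name": "Charlie"},
]
# from_list builds the Arrow table directly from the records, avoiding
# the fingerprinting step that appears to be what the debugger slows down.
hf_dataset = Dataset.from_list(data)
print(hf_dataset)
```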
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6254/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6254/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6253/comments
https://api.github.com/repos/huggingface/datasets/issues/6253/events
https://github.com/huggingface/datasets/pull/6253
1,906,618,910
PR_kwDODunzps5a3s__
6,253
Check builder cls default config name in inspect
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-09-21T10:15:32"
"2023-09-21T14:16:44"
"2023-09-21T14:08:00"
MEMBER
null
Fix https://github.com/huggingface/datasets-server/issues/1812. This was causing the following issue: ```ipython In [1]: from datasets import * In [2]: inspect.get_dataset_config_names("aakanksha/udpos") Out[2]: ['default'] In [3]: load_dataset_builder("aakanksha/udpos").config.name Out[3]: 'en' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6253/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6253/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6253.diff", "html_url": "https://github.com/huggingface/datasets/pull/6253", "merged_at": "2023-09-21T14:08:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6253.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6253" }
true
https://api.github.com/repos/huggingface/datasets/issues/6252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6252/comments
https://api.github.com/repos/huggingface/datasets/issues/6252/events
https://github.com/huggingface/datasets/issues/6252
1,906,375,378
I_kwDODunzps5xoPrS
6,252
exif_transpose not done to Image (PIL problem)
{ "avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4", "events_url": "https://api.github.com/users/rhajou/events{/privacy}", "followers_url": "https://api.github.com/users/rhajou/followers", "following_url": "https://api.github.com/users/rhajou/following{/other_user}", "gists_url": "https://api.github.com/users/rhajou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rhajou", "id": 108274349, "login": "rhajou", "node_id": "U_kgDOBnQirQ", "organizations_url": "https://api.github.com/users/rhajou/orgs", "received_events_url": "https://api.github.com/users/rhajou/received_events", "repos_url": "https://api.github.com/users/rhajou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rhajou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rhajou/subscriptions", "type": "User", "url": "https://api.github.com/users/rhajou" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 4, "state": "open", "title": "3.0", "updated_at": "2023-09-22T14:07:52Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
2
"2023-09-21T08:11:46"
"2023-09-22T14:07:52"
null
NONE
null
### Feature request I noticed that some of my images loaded using PIL have EXIF metadata that can rotate them when loading. Since datasets.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted), so for tasks such as object detection and LayoutLM this can create inconsistencies (between input bboxes and input images). For now there is no option in datasets.features.Image to handle that. We need to do the following when preparing examples (when preparing images for training, testing or inference): ``` from PIL import Image, ImageOps pil = ImageOps.exif_transpose(pil) ``` reference: https://stackoverflow.com/a/63950647/5720150 Is it possible to apply this by default in datasets.features.Image, or to add an option to do the ImageOps.exif_transpose? Thank you ### Motivation Prevent having inverted data related to EXIF metadata that may affect object detection tasks. ### Your contribution I can help with changing datasets.features.Image.
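Until something like this lands in `datasets.features.Image` itself, a sketch of a per-example workaround with `map`; the `imagefolder` loader and the `image` column name are assumptions about the user's setup:

```python
from datasets import load_dataset
from PIL import ImageOps

def fix_orientation(example):
    # Apply the EXIF orientation tag so width/height match the pixel data.
    example["image"] = ImageOps.exif_transpose(example["image"])
    return example

dataset = load_dataset("imagefolder", data_dir="path/to/images", split="train")
dataset = dataset.map(fix_orientation)
```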
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6252/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6251/comments
https://api.github.com/repos/huggingface/datasets/issues/6251/events
https://github.com/huggingface/datasets/pull/6251
1,904,418,426
PR_kwDODunzps5awQsy
6,251
Support streaming datasets with pyarrow.parquet.read_table
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
10
"2023-09-20T08:07:02"
"2023-09-27T06:37:03"
"2023-09-27T06:26:24"
MEMBER
null
Support streaming datasets with `pyarrow.parquet.read_table`. See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2 CC: @AndreaFrancis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6251/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6251.diff", "html_url": "https://github.com/huggingface/datasets/pull/6251", "merged_at": "2023-09-27T06:26:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/6251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6251" }
true
https://api.github.com/repos/huggingface/datasets/issues/6247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6247/comments
https://api.github.com/repos/huggingface/datasets/issues/6247/events
https://github.com/huggingface/datasets/pull/6247
1,901,390,945
PR_kwDODunzps5amAQ1
6,247
Update create_dataset.mdx
{ "avatar_url": "https://avatars.githubusercontent.com/u/76403422?v=4", "events_url": "https://api.github.com/users/EswarDivi/events{/privacy}", "followers_url": "https://api.github.com/users/EswarDivi/followers", "following_url": "https://api.github.com/users/EswarDivi/following{/other_user}", "gists_url": "https://api.github.com/users/EswarDivi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EswarDivi", "id": 76403422, "login": "EswarDivi", "node_id": "MDQ6VXNlcjc2NDAzNDIy", "organizations_url": "https://api.github.com/users/EswarDivi/orgs", "received_events_url": "https://api.github.com/users/EswarDivi/received_events", "repos_url": "https://api.github.com/users/EswarDivi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EswarDivi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EswarDivi/subscriptions", "type": "User", "url": "https://api.github.com/users/EswarDivi" }
[]
closed
false
null
[]
null
2
"2023-09-18T17:06:29"
"2023-09-19T18:51:49"
"2023-09-19T18:40:10"
CONTRIBUTOR
null
Modified the docs, as AudioFolder and ImageFolder are not importable from the datasets library: changed ``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset```, since the former fail with ``` cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site-packages/datasets/__init__.py) ```
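For reference, AudioFolder and ImageFolder are builder names passed to `load_dataset`, not classes; a minimal sketch (paths are placeholders):

```python
from datasets import load_dataset

# "audiofolder" and "imagefolder" are loader names, not importable classes:
audio_ds = load_dataset("audiofolder", data_dir="/path/to/audio_folder")
image_ds = load_dataset("imagefolder", data_dir="/path/to/image_folder")
```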
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6247/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6247/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6247.diff", "html_url": "https://github.com/huggingface/datasets/pull/6247", "merged_at": "2023-09-19T18:40:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/6247.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6247" }
true
https://api.github.com/repos/huggingface/datasets/issues/6246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6246/comments
https://api.github.com/repos/huggingface/datasets/issues/6246/events
https://github.com/huggingface/datasets/issues/6246
1,899,848,414
I_kwDODunzps5xPWLe
6,246
Add new column to dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andysingal", "id": 20493493, "login": "andysingal", "node_id": "MDQ6VXNlcjIwNDkzNDkz", "organizations_url": "https://api.github.com/users/andysingal/orgs", "received_events_url": "https://api.github.com/users/andysingal/received_events", "repos_url": "https://api.github.com/users/andysingal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "type": "User", "url": "https://api.github.com/users/andysingal" }
[]
closed
false
null
[]
null
4
"2023-09-17T16:59:48"
"2023-09-18T16:20:09"
"2023-09-18T16:20:09"
NONE
null
### Describe the bug ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dataset['train']['/workspace/data'] 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_column_key(key, columns) 518 def _check_valid_column_key(key: str, columns: List[str]) -> None: 519 if key not in columns: --> 520 raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}") 521 522 KeyError: "Column train not in the dataset. Current columns in the dataset: ['image', '/workspace/data']" ``` ### Steps to reproduce the bug please find the notebook for reference: https://colab.research.google.com/drive/10lZ_zLtU4itYVmIVTvIEVbjfOtCZaAZy?usp=sharing ### Expected behavior add column to the dataset ### Environment info colab pro
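The error message itself hints at the likely cause: `dataset` here is already a split-level `Dataset` (its columns are `['image', '/workspace/data']`), so `dataset['train']` is parsed as a column lookup. A minimal sketch of adding and reading a column (column names and values are hypothetical):

```python
from datasets import Dataset

ds = Dataset.from_dict({"image": ["a.png", "b.png"]})

# Add a new column with one value per existing row:
ds = ds.add_column("caption", ["a cat", "a dog"])

print(ds.column_names)  # ['image', 'caption']
print(ds["caption"])    # column access by name
print(ds[0])            # row access by integer index
```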
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6246/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6246/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6244/comments
https://api.github.com/repos/huggingface/datasets/issues/6244/events
https://github.com/huggingface/datasets/pull/6244
1,898,861,422
PR_kwDODunzps5adtD3
6,244
Add support for `fsspec>=2023.9.0`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
19
"2023-09-15T17:58:25"
"2023-09-26T15:41:38"
"2023-09-26T15:32:51"
CONTRIBUTOR
null
Fix #6214
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6244/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6244.diff", "html_url": "https://github.com/huggingface/datasets/pull/6244", "merged_at": "2023-09-26T15:32:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/6244.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6244" }
true
https://api.github.com/repos/huggingface/datasets/issues/6243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
https://api.github.com/repos/huggingface/datasets/issues/6243/events
https://github.com/huggingface/datasets/pull/6243
1,898,532,784
PR_kwDODunzps5aclIy
6,243
Fix cast from fixed size list to variable size list
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
6
"2023-09-15T14:23:33"
"2023-09-19T18:02:21"
"2023-09-19T17:53:17"
CONTRIBUTOR
null
Fix #6242
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6243.diff", "html_url": "https://github.com/huggingface/datasets/pull/6243", "merged_at": "2023-09-19T17:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6243" }
true
https://api.github.com/repos/huggingface/datasets/issues/6242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6242/comments
https://api.github.com/repos/huggingface/datasets/issues/6242/events
https://github.com/huggingface/datasets/issues/6242
1,896,899,123
I_kwDODunzps5xEGIz
6,242
Data alteration when loading dataset with unspecified inner sequence length
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
closed
false
null
[]
null
2
"2023-09-14T16:12:45"
"2023-09-19T17:53:18"
"2023-09-19T17:53:18"
CONTRIBUTOR
null
### Describe the bug When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent. ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Value, Sequence, load_dataset # Repository ID repo_id = "my_repo_id" # Define features with a specific length of 3 for each inner sequence specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))}) # Create a dataset with the specified features data = [ [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]], ] dataset = Dataset.from_dict({"key": data}, features=specified_features) # Push the dataset to the hub dataset.push_to_hub(repo_id) # Define features without specifying the length unspecified_features = Features({"key": Sequence(Sequence(Value("float32")))}) # Load the dataset from the hub with this new feature definition dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=unspecified_features) # The obtained data is altered print(dataset.to_dict()) # {'key': [[[1.0], [2.0]], [[3.0], [4.0]]]} ``` ### Expected behavior ```python print(dataset.to_dict()) # {'key': [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]} ``` ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
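Until the fix landed (this issue was closed by the list-cast fix in #6243), a workaround consistent with the snippet above is to keep the inner length specified when reloading:

```python
# Workaround sketch: specify the inner length when loading, matching how
# the data was saved, so no silent reshaping happens:
specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=specified_features)
```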
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6242/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6241/comments
https://api.github.com/repos/huggingface/datasets/issues/6241/events
https://github.com/huggingface/datasets/pull/6241
1,896,429,694
PR_kwDODunzps5aVfl-
6,241
Remove unused global variables in `audio.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
"2023-09-14T12:06:32"
"2023-09-15T15:57:10"
"2023-09-15T15:46:07"
CONTRIBUTOR
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6241/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6241.diff", "html_url": "https://github.com/huggingface/datasets/pull/6241", "merged_at": "2023-09-15T15:46:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/6241.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6241" }
true
https://api.github.com/repos/huggingface/datasets/issues/6240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6240/comments
https://api.github.com/repos/huggingface/datasets/issues/6240/events
https://github.com/huggingface/datasets/issues/6240
1,895,723,888
I_kwDODunzps5w_nNw
6,240
Dataloader stuck on multiple GPUs
{ "avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4", "events_url": "https://api.github.com/users/kuri54/events{/privacy}", "followers_url": "https://api.github.com/users/kuri54/followers", "following_url": "https://api.github.com/users/kuri54/following{/other_user}", "gists_url": "https://api.github.com/users/kuri54/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kuri54", "id": 40049003, "login": "kuri54", "node_id": "MDQ6VXNlcjQwMDQ5MDAz", "organizations_url": "https://api.github.com/users/kuri54/orgs", "received_events_url": "https://api.github.com/users/kuri54/received_events", "repos_url": "https://api.github.com/users/kuri54/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kuri54/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kuri54/subscriptions", "type": "User", "url": "https://api.github.com/users/kuri54" }
[]
closed
false
null
[]
null
2
"2023-09-14T05:30:30"
"2023-09-14T23:54:42"
"2023-09-14T23:54:42"
NONE
null
### Describe the bug I am trying to fine-tune CLIP with my code. When I run it on multiple GPUs using accelerate, I encounter the following phenomenon. - The validation dataloader gets stuck in the 2nd epoch, but only on multi-GPU. Specifically, once the "for inputs in valid_loader:" loop finishes, execution does not proceed to the next step. The train_loader loop completes, and both train and valid work correctly in the first epoch. The accelerate command used is: `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` - This does not happen when a single GPU is used: `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...` - Setting num_workers=0 in the dataloader did not change the result. ### Steps to reproduce the bug 1. Update the code for regular CLIP fine-tuning to use accelerate. 2. Run the code with `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` and the above problem occurs. 3. With `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...`, it works fine. ### Expected behavior It should end normally, as when run on a single GPU. ### Environment info Since `datasets-cli env` did not work, the environment is described below. - OS: Ubuntu 22.04 with Docker - Docker: 24.0.5, build ced0996 - Python: 3.10.12 - torch==2.0.1 - accelerate==0.21.0 - transformers==4.33.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6240/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6239/comments
https://api.github.com/repos/huggingface/datasets/issues/6239/events
https://github.com/huggingface/datasets/issues/6239
1,895,349,382
I_kwDODunzps5w-LyG
6,239
Load local audio data doesn't work
{ "avatar_url": "https://avatars.githubusercontent.com/u/554032?v=4", "events_url": "https://api.github.com/users/abodacs/events{/privacy}", "followers_url": "https://api.github.com/users/abodacs/followers", "following_url": "https://api.github.com/users/abodacs/following{/other_user}", "gists_url": "https://api.github.com/users/abodacs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abodacs", "id": 554032, "login": "abodacs", "node_id": "MDQ6VXNlcjU1NDAzMg==", "organizations_url": "https://api.github.com/users/abodacs/orgs", "received_events_url": "https://api.github.com/users/abodacs/received_events", "repos_url": "https://api.github.com/users/abodacs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abodacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abodacs/subscriptions", "type": "User", "url": "https://api.github.com/users/abodacs" }
[]
closed
false
null
[]
null
2
"2023-09-13T22:30:01"
"2023-09-15T14:32:10"
"2023-09-15T14:32:10"
NONE
null
### Describe the bug I get a RuntimeError from the following code: ```python audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio()) audio_dataset[0] ``` ### Traceback <details> ```python RuntimeError Traceback (most recent call last) Cell In[33], line 1 ----> 1 train_dataset[0] File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key) 1762 def __getitem__(self, key): # noqa: F811 1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 1764 return self._getitem( 1765 key, 1766 ) File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs) 1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 1749 formatted_output = format_table( 1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1751 ) 1752 return formatted_output File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1386, in Features.decode_example(self, example) 1376 def decode_example(self, example: dict): 1377 """Decode example with custom feature decoding. 1378 1379 Args: (...) 1383 :obj:`dict[str, Any]` 1384 """ -> 1386 return { 1387 column_name: decode_nested_example(feature, value) 1388 if self._column_requires_decoding[column_name] 1389 else value 1390 for column_name, (feature, value) in zip_dict( 1391 {key: value for key, value in self.items() if key in example}, example 1392 ) 1393 } File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1387, in <dictcomp>(.0) 1376 def decode_example(self, example: dict): 1377 """Decode example with custom feature decoding. 1378 1379 Args: (...) 
1383 :obj:`dict[str, Any]` 1384 """ 1386 return { -> 1387 column_name: decode_nested_example(feature, value) 1388 if self._column_requires_decoding[column_name] 1389 else value 1390 for column_name, (feature, value) in zip_dict( 1391 {key: value for key, value in self.items() if key in example}, example 1392 ) 1393 } File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1087, in decode_nested_example(schema, obj) 1085 # Object with special decoding: 1086 elif isinstance(schema, (Audio, Image)): -> 1087 return schema.decode_example(obj) if obj is not None else None 1088 return obj File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:103, in Audio.decode_example(self, value) 101 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") 102 elif path is not None and path.endswith("mp3"): --> 103 array, sampling_rate = self._decode_mp3(file if file else path) 104 elif path is not None and path.endswith("opus"): 105 if file: File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:241, in Audio._decode_mp3(self, path_or_file) 238 except RuntimeError as err: 239 raise ImportError("To support decoding 'mp3' audio files, please install 'sox'.") from err --> 241 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") 242 if self.sampling_rate and self.sampling_rate != sampling_rate: 243 if not hasattr(self, "_resampler") or self._resampler.orig_freq != sampling_rate: File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:256, in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 254 if ret is not None: 255 return ret --> 256 return _fallback_load(filepath, frame_offset, num_frames, normalize, channels_first, format) File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:30, in _fail_load(filepath, frame_offset, num_frames, normalize, channels_first, format) 22 def _fail_load( 23 filepath: str, 24 frame_offset: int = 0, (...) 28 format: Optional[str] = None, 29 ) -> Tuple[torch.Tensor, int]: ---> 30 raise RuntimeError("Failed to load audio from {}".format(filepath)) RuntimeError: Failed to load audio from /kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3 ``` </details> ### Steps to reproduce the bug 1. Create a custom dataset from local mp3 files. 2. Try to read the first audio item. ### Expected behavior Expected output ```python audio_dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': 'path/to/audio_1', 'sampling_rate': 16000} ``` ### Environment info N/A
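A hedged workaround sketch: decode the mp3 directly with librosa instead of torchaudio's failing sox backend. This assumes librosa and an mp3-capable backend (e.g. ffmpeg via audioread or soundfile) are available in the environment:

```python
import librosa

# sr=None keeps the file's native sampling rate instead of resampling.
array, sampling_rate = librosa.load(
    "/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3", sr=None
)
print(array.shape, sampling_rate)
```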
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6239/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6239/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6238/comments
https://api.github.com/repos/huggingface/datasets/issues/6238/events
https://github.com/huggingface/datasets/issues/6238
1,895,207,828
I_kwDODunzps5w9pOU
6,238
`dataset.filter` ALWAYS removes the first item from the dataset when using batched=True
{ "avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4", "events_url": "https://api.github.com/users/Taytay/events{/privacy}", "followers_url": "https://api.github.com/users/Taytay/followers", "following_url": "https://api.github.com/users/Taytay/following{/other_user}", "gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Taytay", "id": 1330693, "login": "Taytay", "node_id": "MDQ6VXNlcjEzMzA2OTM=", "organizations_url": "https://api.github.com/users/Taytay/orgs", "received_events_url": "https://api.github.com/users/Taytay/received_events", "repos_url": "https://api.github.com/users/Taytay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Taytay/subscriptions", "type": "User", "url": "https://api.github.com/users/Taytay" }
[]
closed
false
null
[]
null
2
"2023-09-13T20:20:37"
"2023-09-17T07:05:07"
"2023-09-17T07:05:07"
NONE
null
### Describe the bug If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition. ### Steps to reproduce the bug Here's a minimal example: ```python def filter_batch_always_true(batch, indices): print("First index being passed into this filter function: ", indices[0]) return indices # Keep all indices data = {"value": list(range(10))} dataset = Dataset.from_dict(data) filtered_dataset = dataset.filter(filter_batch_always_true, with_indices=True, batched=True) print("Length of original dataset: ", len(dataset)) print("Length of filtered_dataset: ", len(filtered_dataset)) print("Is equal to original? ", len(filtered_dataset) == len(dataset)) print("First item of filtered dataset: ", filtered_dataset[0]) print("Last item of filtered dataset: ", filtered_dataset[-1]) ``` prints: ``` First index being passed into this filter function: 0 Length of original dataset: 10 Length of filtered_dataset: 9 Is equal to original? False First item of filtered dataset: {'value': 1} Last item of filtered dataset: {'value': 9} ``` ### Expected behavior Filter should respect the filter condition. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-13.5-arm64-arm-64bit - Python version: 3.9.18 - Huggingface_hub version: 0.17.1 - PyArrow version: 10.0.1 - Pandas version: 2.0.2
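A likely explanation (hedged, not confirmed here by the maintainers): the values returned by the predicate are interpreted as per-example booleans, and the integer index 0 is falsy, so returning raw indices drops the first example of each batch rather than keeping all rows. A sketch of a predicate that keeps everything by returning explicit booleans:

```python
def keep_all(batch, indices):
    # One boolean per example; avoids the falsy-0 pitfall of returning indices.
    return [True] * len(indices)

filtered_dataset = dataset.filter(keep_all, with_indices=True, batched=True)
assert len(filtered_dataset) == len(dataset)
```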
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6238/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6238/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6237/comments
https://api.github.com/repos/huggingface/datasets/issues/6237/events
https://github.com/huggingface/datasets/issues/6237
1,893,822,321
I_kwDODunzps5w4W9x
6,237
Tokenization with multiple workers is too slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4", "events_url": "https://api.github.com/users/macabdul9/events{/privacy}", "followers_url": "https://api.github.com/users/macabdul9/followers", "following_url": "https://api.github.com/users/macabdul9/following{/other_user}", "gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/macabdul9", "id": 25720695, "login": "macabdul9", "node_id": "MDQ6VXNlcjI1NzIwNjk1", "organizations_url": "https://api.github.com/users/macabdul9/orgs", "received_events_url": "https://api.github.com/users/macabdul9/received_events", "repos_url": "https://api.github.com/users/macabdul9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions", "type": "User", "url": "https://api.github.com/users/macabdul9" }
[]
closed
false
null
[]
null
1
"2023-09-13T06:18:34"
"2023-09-19T21:54:58"
"2023-09-19T21:54:58"
NONE
null
I am trying to tokenize a few million documents with multiple workers, but the tokenization process is taking forever. Code snippet: ``` raw_datasets = raw_datasets.map( encode_function, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.overwrite_cache, remove_columns=[name for name in raw_datasets["train"].column_names if name not in ["input_ids", "labels", "attention_mask"]], desc="Tokenizing data", ) ``` Details: ``` transformers==4.28.0.dev0 datasets==4.28.0.dev0 preprocessing_num_workers==48 ``` tokenizer == decapoda-research/llama-7b-hf
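The usual first lever here is `batched=True`, so a fast tokenizer processes whole batches in Rust instead of one document per Python call. A sketch under the assumption that the raw examples have a `text` column and `tokenizer` is a fast tokenizer:

```python
def encode_batch(examples):
    # Fast tokenizers batch internally and release the GIL, which is
    # typically far faster than per-example calls across many processes.
    return tokenizer(examples["text"], truncation=True)

tokenized = raw_datasets.map(
    encode_batch,
    batched=True,
    batch_size=1000,
    num_proc=8,  # with batching enabled, fewer workers often outperform 48
    remove_columns=raw_datasets["train"].column_names,
    desc="Tokenizing data",
)
```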
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6237/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6236/comments
https://api.github.com/repos/huggingface/datasets/issues/6236/events
https://github.com/huggingface/datasets/issues/6236
1,893,648,480
I_kwDODunzps5w3shg
6,236
Support buffer shuffle for to_tf_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4", "events_url": "https://api.github.com/users/EthanRock/events{/privacy}", "followers_url": "https://api.github.com/users/EthanRock/followers", "following_url": "https://api.github.com/users/EthanRock/following{/other_user}", "gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EthanRock", "id": 7635551, "login": "EthanRock", "node_id": "MDQ6VXNlcjc2MzU1NTE=", "organizations_url": "https://api.github.com/users/EthanRock/orgs", "received_events_url": "https://api.github.com/users/EthanRock/received_events", "repos_url": "https://api.github.com/users/EthanRock/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions", "type": "User", "url": "https://api.github.com/users/EthanRock" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
3
"2023-09-13T03:19:44"
"2023-09-18T01:11:21"
null
NONE
null
### Feature request I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and Keras fit to train a model. Currently, to_tf_dataset only supports a full-size shuffle, which can be very slow on large datasets. tf.data.Dataset supports buffer shuffle natively: shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ### Motivation Loading with shuffling is very slow on large datasets; it seems impractical to shuffle a big dataset before training with Keras. ### Your contribution NA
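Until such an option exists, a possible workaround sketch: leave shuffling (and batching) to tf.data after conversion. This assumes a datasets version where `batch_size=None` yields unbatched examples; column names and `model` are placeholders:

```python
import tensorflow as tf

tf_ds = dataset.to_tf_dataset(
    columns=["input_ids"],
    label_cols=["labels"],
    shuffle=False,    # skip the full-size shuffle in `datasets`
    batch_size=None,  # yield single examples; batch in tf.data instead
)
# Buffer shuffle, batch, and prefetch on the tf.data side:
tf_ds = tf_ds.shuffle(buffer_size=10_000).batch(32).prefetch(tf.data.AUTOTUNE)
model.fit(tf_ds, epochs=3)
```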
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6236/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6235/comments
https://api.github.com/repos/huggingface/datasets/issues/6235/events
https://github.com/huggingface/datasets/issues/6235
1,893,337,083
I_kwDODunzps5w2gf7
6,235
Support multiprocessing for download/extract nestedly
{ "avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4", "events_url": "https://api.github.com/users/hgt312/events{/privacy}", "followers_url": "https://api.github.com/users/hgt312/followers", "following_url": "https://api.github.com/users/hgt312/following{/other_user}", "gists_url": "https://api.github.com/users/hgt312/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hgt312", "id": 22725729, "login": "hgt312", "node_id": "MDQ6VXNlcjIyNzI1NzI5", "organizations_url": "https://api.github.com/users/hgt312/orgs", "received_events_url": "https://api.github.com/users/hgt312/received_events", "repos_url": "https://api.github.com/users/hgt312/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hgt312/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hgt312/subscriptions", "type": "User", "url": "https://api.github.com/users/hgt312" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2023-09-12T21:51:08"
"2023-09-12T21:51:08"
null
NONE
null
### Feature request Currently, multiprocessing for download/extract is not applied nestedly. For example, when processing SlimPajama, there are only 3 processes (for train/test/val), even though there are many files inside these 3 folders ``` Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #2: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #2: 0%| | 0/1 [00:00<?, ?obj/s] ``` ### Motivation Speed up dataset loading. ### Your contribution I can help test the feature
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6235/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6233/comments
https://api.github.com/repos/huggingface/datasets/issues/6233/events
https://github.com/huggingface/datasets/pull/6233
1,891,804,286
PR_kwDODunzps5aF3kd
6,233
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4", "events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}", "followers_url": "https://api.github.com/users/NinoRisteski/followers", "following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}", "gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NinoRisteski", "id": 95188570, "login": "NinoRisteski", "node_id": "U_kgDOBax2Wg", "organizations_url": "https://api.github.com/users/NinoRisteski/orgs", "received_events_url": "https://api.github.com/users/NinoRisteski/received_events", "repos_url": "https://api.github.com/users/NinoRisteski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions", "type": "User", "url": "https://api.github.com/users/NinoRisteski" }
[]
closed
false
null
[]
null
2
"2023-09-12T06:53:06"
"2023-09-13T18:20:50"
"2023-09-13T18:10:04"
CONTRIBUTOR
null
Fixed a typo.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6233/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6233.diff", "html_url": "https://github.com/huggingface/datasets/pull/6233", "merged_at": "2023-09-13T18:10:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6233.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6233" }
true
https://api.github.com/repos/huggingface/datasets/issues/6232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6232/comments
https://api.github.com/repos/huggingface/datasets/issues/6232/events
https://github.com/huggingface/datasets/pull/6232
1,891,109,762
PR_kwDODunzps5aDhhK
6,232
Improve error message for missing function parameters
{ "avatar_url": "https://avatars.githubusercontent.com/u/4016832?v=4", "events_url": "https://api.github.com/users/suavemint/events{/privacy}", "followers_url": "https://api.github.com/users/suavemint/followers", "following_url": "https://api.github.com/users/suavemint/following{/other_user}", "gists_url": "https://api.github.com/users/suavemint/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/suavemint", "id": 4016832, "login": "suavemint", "node_id": "MDQ6VXNlcjQwMTY4MzI=", "organizations_url": "https://api.github.com/users/suavemint/orgs", "received_events_url": "https://api.github.com/users/suavemint/received_events", "repos_url": "https://api.github.com/users/suavemint/repos", "site_admin": false, "starred_url": "https://api.github.com/users/suavemint/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suavemint/subscriptions", "type": "User", "url": "https://api.github.com/users/suavemint" }
[]
closed
false
null
[]
null
3
"2023-09-11T19:11:58"
"2023-09-15T18:07:56"
"2023-09-15T17:59:02"
CONTRIBUTOR
null
The error message in the fingerprint module was missing the f-string `f` prefix, so the error message raised by fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature." This has been fixed.
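For illustration, a minimal sketch of the one-character fix (the surrounding code is paraphrased, not copied from fingerprint.py):

```python
# Before: the braces are printed literally because the f-prefix is missing.
raise ValueError("function {func} is missing parameters {fingerprint_names} in signature.")

# After: the variable names are interpolated into the message.
raise ValueError(f"function {func} is missing parameters {fingerprint_names} in signature.")
```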
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6232/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6232/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6232.diff", "html_url": "https://github.com/huggingface/datasets/pull/6232", "merged_at": "2023-09-15T17:59:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6232" }
true
https://api.github.com/repos/huggingface/datasets/issues/6231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6231/comments
https://api.github.com/repos/huggingface/datasets/issues/6231/events
https://github.com/huggingface/datasets/pull/6231
1,890,863,249
PR_kwDODunzps5aCr8_
6,231
Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
open
false
null
[]
null
9
"2023-09-11T16:27:09"
"2023-09-26T11:19:36"
null
CONTRIBUTOR
null
Currently, if we push data as the default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, a new key `"default"` is added to `dataset_infos.json` alongside the legacy one. I think the legacy one should be dropped in this case. Also, in `load.py` I suggest checking whether a legacy config name is indeed a legacy config name, because after this fix it might not be the case (this check was first introduced in https://github.com/huggingface/datasets/pull/6218)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6231/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6231.diff", "html_url": "https://github.com/huggingface/datasets/pull/6231", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6231" }
true
https://api.github.com/repos/huggingface/datasets/issues/6230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6230/comments
https://api.github.com/repos/huggingface/datasets/issues/6230/events
https://github.com/huggingface/datasets/pull/6230
1,890,521,006
PR_kwDODunzps5aBh6L
6,230
Don't skip hidden files in `dl_manager.iter_files` when they are given as input
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
"2023-09-11T13:29:19"
"2023-09-13T18:21:28"
"2023-09-13T18:12:09"
CONTRIBUTOR
null
Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6230/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6230.diff", "html_url": "https://github.com/huggingface/datasets/pull/6230", "merged_at": "2023-09-13T18:12:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/6230.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6230" }
true

Dataset Card for "hf-github-issues"

More Information needed
