url
stringlengths
61
61
repository_url
stringclasses
1 value
labels_url
stringlengths
75
75
comments_url
stringlengths
70
70
events_url
stringlengths
68
68
html_url
stringlengths
49
51
id
int64
1.78B
2.32B
node_id
stringlengths
18
19
number
int64
6k
6.92k
title
stringlengths
3
280
user
dict
labels
listlengths
0
2
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
1
milestone
dict
comments
sequencelengths
0
30
created_at
unknown
updated_at
unknown
closed_at
unknown
author_association
stringclasses
4 values
active_lock_reason
null
body
stringlengths
3
19.4k
reactions
dict
timeline_url
stringlengths
70
70
performed_via_github_app
null
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
https://api.github.com/repos/huggingface/datasets/issues/6924/events
https://github.com/huggingface/datasets/issues/6924
2,320,531,015
I_kwDODunzps6KUH5H
6,924
Caching map result of DatasetDict.
{ "login": "MostHumble", "id": 56939432, "node_id": "MDQ6VXNlcjU2OTM5NDMy", "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MostHumble", "html_url": "https://github.com/MostHumble", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "repos_url": "https://api.github.com/users/MostHumble/repos", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I think you flipped model and tokenizer at the beginning. It should be\r\n```python\r\n\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\n\r\ntokenizer = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\nmodel = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\n\r\n```", "Pls reopen if there is another issue!", "Damn, this was embarrassing bug on my end. Thank you! 🍻" ]
"2024-05-28T09:07:41"
"2024-05-28T09:07:41"
null
NONE
null
Hi! I'm currenty using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc incduces the recomputation of the map, I'm not sure why and if this is excepted behavior? here it says, that cached files are loaded sequentially: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006 it seems like I can pass in a fingerprint, and load it directly: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125 **Environment Setup:** - Python 3.11.9 - datasets 2.19.1 conda-forge - Linux 6.1.83-1.el9.elrepo.x86_64 **MRE** ```python fixed raw_datasets fixed tokenize_function tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=9, remove_columns=['text'], load_from_cache_file= True, desc="Running tokenizer on dataset line_by_line", ) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=5, remove_columns=['text'], load_from_cache_file= True, desc="Running tokenizer on dataset line_by_line", ) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Export Parquet Tablet Audio-Set is null bytes in Arrow
{ "login": "anioji", "id": 140120605, "node_id": "U_kgDOCFoSHQ", "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anioji", "html_url": "https://github.com/anioji", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "organizations_url": "https://api.github.com/users/anioji/orgs", "repos_url": "https://api.github.com/users/anioji/repos", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "received_events_url": "https://api.github.com/users/anioji/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=h1) Report\n> Merging [#6923](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.90%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6923/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6923 +/- ##\n==========================================\n+ Coverage 77.81% 79.72% +1.90% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23002 +550 \n+ Misses 6401 5851 -550 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-58.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.72% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=footer). Last update [4ebb52a...eaef0cb](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-27T14:27:57"
"2024-05-27T14:27:57"
null
NONE
null
### Describe the bug Exporting the processed audio inside the table with the dataset.to_parquet function, the object pyarrow {bytes: null, path: "Some/Path"} At the same time, the same dataset uploaded to the hub has bit arrays ![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e) ![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021) ### Steps to reproduce the bug 1.Get dataset from audio and cast it 2.Export and push dataset 3.It’s scary to be indignant at the difference in the uploaded dataset and the fact that it was saved locally ```py from datasets import Dataset, Audio df = Dataset.from_csv("./datasets.csv") df = df.cast_column("audio", Audio(16000)) df.to_parquet("./datasets.parquet") df.push_to_hub(repo_id="************", token="**********************") ``` You can use "try replicate case" for this [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip) ### Expected behavior Two parquet tables identical in content. It is obvious? ### Environment info Python 3.11+ (I try did it in 3.12 and got same result )
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6922/comments
https://api.github.com/repos/huggingface/datasets/issues/6922/events
https://github.com/huggingface/datasets/pull/6922
2,318,602,059
PR_kwDODunzps5wolTm
6,922
Remove torchaudio remnants from code
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same issue too! Please some guidelines ?", "Same" ]
"2024-05-27T08:45:07"
"2024-05-27T09:08:19"
"2024-05-27T08:59:21"
MEMBER
null
Remove torchaudio remnants from code. Follow-up on: - #5573
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6922/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6922", "html_url": "https://github.com/huggingface/datasets/pull/6922", "diff_url": "https://github.com/huggingface/datasets/pull/6922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6922.patch", "merged_at": "2024-05-27T08:59:21" }
https://api.github.com/repos/huggingface/datasets/issues/6921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
https://api.github.com/repos/huggingface/datasets/issues/6921/events
https://github.com/huggingface/datasets/pull/6921
2,318,394,398
PR_kwDODunzps5wn4Dz
6,921
Support fsspec 2024.5.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-27T07:00:59"
"2024-05-27T08:07:16"
"2024-05-27T08:01:08"
MEMBER
null
Support fsspec 2024.5.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6921", "html_url": "https://github.com/huggingface/datasets/pull/6921", "diff_url": "https://github.com/huggingface/datasets/pull/6921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6921.patch", "merged_at": "2024-05-27T08:01:08" }
https://api.github.com/repos/huggingface/datasets/issues/6920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6920/comments
https://api.github.com/repos/huggingface/datasets/issues/6920/events
https://github.com/huggingface/datasets/pull/6920
2,317,648,021
PR_kwDODunzps5wlchX
6,920
[WebDataset] Add `.pth` support for torch tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Ok, I know it was my fault. I didn't add the argument `--use-external-format` (gpt2-xl is more than 2GB)\r\nActually I had to open the convert_graph_to_onnx.py file and read each argument's description\r\nThanks again, I'm closing the issue now." ]
"2024-05-26T11:12:07"
"2024-05-27T09:11:17"
"2024-05-27T09:04:54"
MEMBER
null
In this PR I add support for `.pth` but with `weights_only=True` to disallow the use of pickle
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6920/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6920", "html_url": "https://github.com/huggingface/datasets/pull/6920", "diff_url": "https://github.com/huggingface/datasets/pull/6920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6920.patch", "merged_at": "2024-05-27T09:04:54" }
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
https://api.github.com/repos/huggingface/datasets/issues/6919/events
https://github.com/huggingface/datasets/issues/6919
2,315,618,993
I_kwDODunzps6KBYqx
6,919
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "login": "juanqui", "id": 67964, "node_id": "MDQ6VXNlcjY3OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juanqui", "html_url": "https://github.com/juanqui", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "organizations_url": "https://api.github.com/users/juanqui/orgs", "repos_url": "https://api.github.com/users/juanqui/repos", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "received_events_url": "https://api.github.com/users/juanqui/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=h1) Report\n> Merging [#6919](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6919/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6919 +/- ##\n==========================================\n- Coverage 80.02% 79.82% -0.21% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22876 22818 -58 \n- Misses 5710 5768 +58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-60.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=footer). Last update [dfa10a4...252c784](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-24T14:59:45"
"2024-05-24T14:59:45"
null
NONE
null
### Describe the bug I wrote a notebook to load an existing dataset, process it, and upload as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with: ``` ValueError: Invalid metadata in README.md. - Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](http://192.168.1.128:8888/tuple)> (50:11) 47 | - 4 48 | - 4 49 | - 8 50 | - !!binary | ----------------^ 51 | TwAAAA== 52 | '1': !!python[/object/apply](http://192.168.1.128:8888/object/apply):nump ... ``` My dataset has a `train` and `validation` dataset. These are the features: ``` {'c1': Value(dtype='string', id=None), 'c2': Value(dtype='string', id=None), 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}], 'c4': Value(dtype='string', id=None), 'c5': Value(dtype='string', id=None), 'c6': Value(dtype='string', id=None), 'c7': Value(dtype='string', id=None), 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None), 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} ``` This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with: ``` ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels)))) ``` ### Steps to reproduce the bug 1. Start with any token classification dataset. 2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`. 3. Cast the label column from `Sequence` to `Sequence(ClassLabel))` with: ``` labels = ['O', 'B-TEST', 'I-TEST'] ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels))) ``` 4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")` ### Expected behavior I expected `push_to_hub` to successfully push my dataset to the hub without error. ### Environment info Python 3.11.9 datasets==2.19.1 transformers==4.41.1 PyYAML==6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
https://api.github.com/repos/huggingface/datasets/issues/6918/events
https://github.com/huggingface/datasets/issues/6918
2,315,322,738
I_kwDODunzps6KAQVy
6,918
NonMatchingSplitsSizesError when using data_dir
{ "login": "srehaag", "id": 86664538, "node_id": "MDQ6VXNlcjg2NjY0NTM4", "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srehaag", "html_url": "https://github.com/srehaag", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "organizations_url": "https://api.github.com/users/srehaag/orgs", "repos_url": "https://api.github.com/users/srehaag/repos", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "received_events_url": "https://api.github.com/users/srehaag/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The `AlbertTokenizer` in `transformers` is a SentencePiece based tokenizer, so it cannot load `vocab.txt`. You could try loading it in `BertTokenizer`, as it seems to be a wordpiece tokenizer vocabulary.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-24T12:43:39"
"2024-05-28T12:41:22"
null
NONE
null
### Describe the bug Loading a dataset from with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on the data in the directory specified using the data_dir argument. This is recent behavior. Until the past few weeks loading using the data_dir argument worked without any issue. ### Steps to reproduce the bug Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp The dataset contains two directories "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table. from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") Generates: --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) Cell In[3], <a href='vscode-notebook-cell:?execution_count=3&line=2'>line 2</a> <a href='vscode-notebook-cell:?execution_count=3&line=1'>1</a> from datasets import load_dataset ----> <a href='vscode-notebook-cell:?execution_count=3&line=2'>2</a> dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1") File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2606'>2606</a> return builder_instance.as_streaming_dataset(split=split) <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2608'>2608</a> # Download and prepare data -> <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609'>2609</a> builder_instance.download_and_prepare( <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2610'>2610</a> download_config=download_config, <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2611'>2611</a> download_mode=download_mode, <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2612'>2612</a> verification_mode=verification_mode, <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2613'>2613</a> num_proc=num_proc, <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2614'>2614</a> storage_options=storage_options, <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2615'>2615</a> ) <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2617'>2617</a> # Build dataset for splits <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2618'>2618</a> keep_in_memory = ( <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2619'>2619</a> keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) <a href='~/.python/current/lib/python3.10/site-packages/datasets/load.py:2620'>2620</a> ) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1025'>1025</a> if num_proc is not None: <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1026'>1026</a> prepare_split_kwargs["num_proc"] = num_proc -> <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027'>1027</a> self._download_and_prepare( <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1028'>1028</a> dl_manager=dl_manager, <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1029'>1029</a> verification_mode=verification_mode, <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1030'>1030</a> **prepare_split_kwargs, <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1031'>1031</a> **download_and_prepare_kwargs, <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1032'>1032</a> ) <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1033'>1033</a> # Sync info <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1034'>1034</a> self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1137'>1137</a> dl_manager.manage_extracted_files() <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1139'>1139</a> if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140'>1140</a> verify_splits(self.info.splits, split_dict) <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1142'>1142</a> # Update the info object with the splits. <a href='~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1143'>1143</a> self.info.splits = split_dict File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits) <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:95'>95</a> bad_splits = [ <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:96'>96</a> {"expected": expected_splits[name], "recorded": recorded_splits[name]} <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:97'>97</a> for name in expected_splits <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:98'>98</a> if expected_splits[name].num_examples != recorded_splits[name].num_examples <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:99'>99</a> ] <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:100'>100</a> if len(bad_splits) > 0: --> <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101'>101</a> raise NonMatchingSplitsSizesError(str(bad_splits)) <a href='~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:102'>102</a> logger.info("All the splits matched successfully.") NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}] __________ By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message: from datasets import load_dataset dataset = load_dataset("srehaag/hf-bug-temp") ### Expected behavior Should load the 5 x 2 table from data1/train.parquet without error message. ### Environment info Used Codespaces to simplify environment (see details below), but bug is present across various configurations. - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
https://api.github.com/repos/huggingface/datasets/issues/6917/events
https://github.com/huggingface/datasets/issues/6917
2,314,683,663
I_kwDODunzps6J90UP
6,917
WinError 32 The process cannot access the file during load_dataset
{ "login": "elwe-2808", "id": 56682168, "node_id": "MDQ6VXNlcjU2NjgyMTY4", "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elwe-2808", "html_url": "https://github.com/elwe-2808", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "repos_url": "https://api.github.com/users/elwe-2808/repos", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@mfuntowicz - Since T5 relies on google's sentencepiece tokenizer for now, can we do anything against it before our own sentencepiece tokenizer is implemented? ", "Verified that this is a problem with the original T5 sentencepience tokenizer. Opened an issue with the Google's T5 repository. https://github.com/google-research/text-to-text-transfer-transformer/issues/390", "Closing this issue , quoting from T5 github issue\r\n> > { is OOV because we intentionally removed any pages with { or } from C4 to avoid pre-training on anything other than natural language. So, it gets encoded to ??. SentencePiece has a byte fallback feature but it was not available when we trained our sentencepiece model." ]
"2024-05-24T07:54:51"
"2024-05-24T07:54:51"
null
NONE
null
### Describe the bug When I try to load the opus_book from hugging face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation)) ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` I get an error: `PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ` <details><summary>Full stacktrace</summary> <p> ```python AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) [1857](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1857) _time = time.time() -> [1858](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1858) for _, table in generator: [1859](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1859) if max_shard_size is not None and writer._num_bytes > max_shard_size: File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files) [58](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:58) def _generate_tables(self, files): ---> [59](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:59) schema = self.config.features.arrow_schema if self.config.features is not None else None [60](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:60) if self.config.features is not None and self.config.columns is not None: AttributeError: 'list' object has no attribute 'arrow_schema' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) [1881](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1881) num_shards = shard_id + 1 -> [1882](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1882) num_examples, num_bytes = writer.finalize() [1883](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1883) writer.close() File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream) [583](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:583) # If schema is known, infer features even if no examples were written --> [584](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:584) if self.pa_writer is None and self.schema: ... --> [627](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:627) os.unlink(fullname) [628](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:628) except OSError: [629](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:629) onerror(os.unlink, fullname, sys.exc_info()) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow' ``` </p> </details> ### Steps to reproduce the bug Steps to reproduce: Just execute these lines ```python from datasets import load_dataset, Dataset dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"]) ``` ### Expected behavior I expect the dataset to be loaded without any errors. ### Environment info | Package| Version| |--------|--------| | transformers| 4.37.2| | python| 3.9.19| | pytorch| 2.3.0| | datasets|2.12.0 | | arrow | 1.2.3| I am using Conda on Windows 11.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
https://api.github.com/repos/huggingface/datasets/issues/6916/events
https://github.com/huggingface/datasets/issues/6916
2,311,675,564
I_kwDODunzps6JyV6s
6,916
```push_to_hub()``` - Prevent Automatic Generation of Splits
{ "login": "jetlime", "id": 29337128, "node_id": "MDQ6VXNlcjI5MzM3MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jetlime", "html_url": "https://github.com/jetlime", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "organizations_url": "https://api.github.com/users/jetlime/orgs", "repos_url": "https://api.github.com/users/jetlime/repos", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "received_events_url": "https://api.github.com/users/jetlime/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I can confirm it was previously checking the model weights and re-downloading if the weights had been changed. Investigating.", "This is due to the CDN caching files, with a 24 hour delay. After 24 hours it should download your file, but if you want it now you can use the `use_cdn` flag and set it to `False`. You can see the documentation for this [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L573-L585).", "Thank you for the hint, @LysandreJik. So `from_pretrained(mname, use_cdn=False)`\r\n\r\nBut that might be tricky for end users who won't know that the code base has changed yet the model weights they get are out sync.\r\n\r\nIs there a way to signal CDN to invalidate the cache for some files? It could then be done from the upload util.\r\n\r\n\r\n\r\n", "FWIW, I wrote a one liner to force cache update for the 4 models I'm working at the moment.\r\n```\r\nPYTHONPATH=\"src\" python -c 'from transformers import AutoModel; [AutoModel.from_pretrained(\"stas/fsmt-wmt19-\"+p, use_cdn=False) for p in [\"en-ru\",\"ru-en\",\"en-de\",\"de-en\"]]'\r\n```\r\nI now have that in my script, so I don't need to think about it.", "@LysandreJik, unfortunately this doesn't solve the issue\r\n\r\n`AutoModel.from_pretrained(mname, use_cdn=False)`\r\n\r\nIndeed forces a download of the recently updated model - but then if this flag is no longer used in the application - it still downloads the CDN cached version and ends up using the wrong version.\r\n\r\nSo, basically, this results in 2 copies (different hashes) sitting in the cache dir. \r\n\r\nAnd normal usage w/o using `use_cdn=False` looks up the old version and not the new one. (so things like `run_eval.py` still use the old one)\r\n\r\nThanks.\r\n", "can you run `AutoModel.from_pretrained(mname, use_cdn=False)` in a debugger and check whether the downloaded url is a `https://cdn.huggingface.co` or a `https://s3.amazonaws.com/models.huggingface.co` url?", "I can do that, but I already checked that it downloads the updated model w/ `use_cdn=False`. But then if you run it again w/o `use_cdn=False` it ignores the new download and uses the old model again (if I delete the cached version, it redownloads the old cached version w/o `use_cdn=False` ).", "Oh yeah ok, I see. Can you `run_eval.py` on a local folder path then?", "> Can you `run_eval.py` on a local folder path then?\r\n\r\nYes. Except others can't as they don't have my local copy.\r\n\r\ne.g. @sshleifer wants to eval my PR https://github.com/huggingface/transformers/pull/6940, but now has to wait till tomorrow for CDN to expire (or hack around it).\r\n\r\nLast night I uploaded an experimental model, which proved to be invalid, thought I re-downloaded it OK as it was working OK and made a PR, except I was testing against the non-current cached version, which was a good one.", "Can we please re-open this ticket? It hasn't been resolved", "Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIn our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)", "> Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIt could be done. I have a feeling then there will be others.\r\n\r\nPerhaps an alternative solution would be to introduce an env var, that would transparently override cdn cache in any situation w/o needing to change every script? `TRANSFORMERS_USE_CDN=False`?\r\n\r\n> In our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)\r\n\r\nUnderstood!\r\n\r\nHow do you let others onto testing the model files? Putting them on dropbox or something and sharing the link?\r\n", "No, just S3 links!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "https://github.com/huggingface/transformers/pull/8324 should resolve this." ]
"2024-05-22T23:52:15"
"2024-05-23T00:07:53"
"2024-05-23T00:07:53"
NONE
null
### Describe the bug I currently have a dataset which has not been splited. When pushing the dataset to my hugging face dataset repository, it is split into a testing and training set. How can I prevent the split from happening? ### Steps to reproduce the bug 1. Have a unsplit dataset ```python Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 }) ``` 2. Push it to huggingface ```python dataset.push_to_hub(dataset_name) ``` 3. On the hugging face dataset repo, the dataset then appears to be splited: ![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09) 4. Indeed, when loading the dataset from this repo, the dataset is split in two testing and training set. ```python from datasets import load_dataset, Dataset dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True) dataset ``` output: ``` IterableDatasetDict({ train: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 2 }) test: IterableDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], n_shards: 1 }) ``` ### Expected behavior The dataset shall not be splited, as not requested. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6915/comments
https://api.github.com/repos/huggingface/datasets/issues/6915/events
https://github.com/huggingface/datasets/pull/6915
2,310,564,961
PR_kwDODunzps5wNIUh
6,915
Validate config name and data_files in packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=h1) Report\n> Merging [#6915](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.01%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6915/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6915 +/- ##\n==========================================\n+ Coverage 77.81% 79.83% +2.01% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23034 +582 \n+ Misses 6401 5819 -582 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=footer). Last update [4ebb52a...481baa3](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I run a test with this change on my ubuntu 18.04 with a 2080Ti GPU, tensorflow-gpu 2.2.0:\r\n```\r\nfrom tensorflow.keras.layers import Input, Embedding, Bidirectional, GRU, Dense\r\nfrom tensorflow.keras.models import Model\r\nfrom transformers import TFDistilBertModel\r\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\r\npolicy = mixed_precision.Policy('mixed_float16')\r\nmixed_precision.set_policy(policy)\r\n\r\nbert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\ninputs = Input(shape=(None,), dtype='int32')\r\nbert_out = bert(inputs)[0]\r\noutput = Dense(9, activation='softmax', dtype='float32')(bert_out)\r\nmodel = Model(inputs, output)\r\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\r\nmodel.summary()\r\nx = [[5, 2, 3] * 3] * 100\r\ny = [[1, 2, 3] * 3] * 100\r\nmodel.fit(x=x, y=y, epochs=20, batch_size=16)\r\n```\r\nAnd get error info:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 8, in <module>\r\n bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_utils.py\", line 602, in from_pretrained\r\n model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 615, in call\r\n outputs = self.distilbert(inputs, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 508, in call\r\n tfmr_output = self.transformer(\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 401, in call\r\n layer_outputs = layer_module(hidden_state, attn_mask, head_mask[i], output_attentions, training=training)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 355, in call\r\n ffn_output = self.ffn(sa_output, training=training) # (bs, seq_length, dim)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 304, in call\r\n x = self.activation(x)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py\", line 420, in call\r\n return self.activation(inputs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 79, in gelu\r\n cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 984, in binary_op_wrapper\r\n return func(x, y, name=name)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 1081, in _truediv_python3\r\n raise TypeError(\"x and y must have the same dtype, got %r != %r\" %\r\nTypeError: x and y must have the same dtype, got tf.float16 != tf.float32\r\n```\r\nI made a modification to L299:\r\n`self.activation = (\r\n tf.keras.layers.Activation(gelu, dtype='float32') if config.activation == \"gelu\" else tf.keras.activations.relu\r\n )`\r\nAnd then the model began to train, however the loss don't decrease and the accuracy is always 0:\r\n```\r\n7/7 [==============================] - 0s 28ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 29ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 30ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\n```\r\n\r\nI have trid this code in float32 precision, and it works. \r\n```\r\nEpoch 1/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.5418 - accuracy: 0.2800\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.2452 - accuracy: 0.3356\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 1.1438 - accuracy: 0.3267\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.1219 - accuracy: 0.3400\r\n```", "@xuxingya , the accuracy not improved during training is due to a line \r\n\r\n > scores = scores - 1e30 * (1.0 - mask)\r\n\r\nwhile `1e30` with `half precision` will cause `nan` values. I am still trying to figure out a way to deal with it.", "@xuxingya Would you mind to run the test on your side again, please? I tested it with your example, and it is fine now.", "@chiapas Yes, I run the test and now it's fine." ]
"2024-05-22T13:36:33"
"2024-05-22T15:02:04"
null
MEMBER
null
Validate the config attributes `name` and `data_files` in packaged modules by making the derived classes call their parent `__post_init__` method. Note that their parent `BuilderConfig` validates its attributes `name` and `data_files` in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/builder.py#L128-L137 This PR makes the derived config classes call their parent `__post_init__` method to validate their `name` and `data_files` attributes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6915/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6915", "html_url": "https://github.com/huggingface/datasets/pull/6915", "diff_url": "https://github.com/huggingface/datasets/pull/6915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6915.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
https://api.github.com/repos/huggingface/datasets/issues/6914/events
https://github.com/huggingface/datasets/pull/6914
2,310,107,326
PR_kwDODunzps5wLi3e
6,914
Preserve JSON column order and support list of strings field
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=h1) Report\n> Merging [#6914](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6914/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6914 +/- ##\n==========================================\n+ Coverage 77.81% 79.03% +1.21% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 22804 +352 \n+ Misses 6401 6049 -352 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `30.15% <0.00%> (-65.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <0.00%> (+1.61%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=footer). Last update [4ebb52a...408286d](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-22T09:58:54"
"2024-05-22T12:50:31"
null
MEMBER
null
Preserve column order when loading from a JSON file with a list of dict (or with a field containing a list of dicts). Additionally, support JSON file with a list of strings field. Fix #6913.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914", "html_url": "https://github.com/huggingface/datasets/pull/6914", "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
https://api.github.com/repos/huggingface/datasets/issues/6913/events
https://github.com/huggingface/datasets/issues/6913
2,309,605,889
I_kwDODunzps6JqcoB
6,913
Column order is nondeterministic when loading from JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Yes, this isn't an issue, this is the intended behavior. It's the standard behavior with Sphinx/ReadTheDocs. You can see a similar example with the [PyTorch docs](https://pytorch.org/docs/stable/tensors.html)." ]
"2024-05-22T05:30:14"
"2024-05-22T05:31:10"
null
MEMBER
null
As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects. For example, when loading a JSON files with a list of objects, each with the following ordered keys: - [ID, Language, Topic], the resulting dataset may have columns: - [ID, Topic, Language], or - [Topic, Language, ID], or - [Topic, ID, Language],... This issue is caused by the use of a Python set (which does not preserve the order): https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168 introduced in - #5772
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
https://api.github.com/repos/huggingface/datasets/issues/6912/events
https://github.com/huggingface/datasets/issues/6912
2,309,365,961
I_kwDODunzps6JpiDJ
6,912
Add MedImg for streaming
{ "login": "lhallee", "id": 72926928, "node_id": "MDQ6VXNlcjcyOTI2OTI4", "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhallee", "html_url": "https://github.com/lhallee", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "organizations_url": "https://api.github.com/users/lhallee/orgs", "repos_url": "https://api.github.com/users/lhallee/repos", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "received_events_url": "https://api.github.com/users/lhallee/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi, are you sure your issue comes from the tokenizer? If you encode your text using `encode_plus` and `batch_encode_plus`, do you see a difference in the tokens generated?", "I only use encode_plus and batch_encode_plus and call model inference. I do not think the model inference is the problem as you see in the function calls. so I think it is coming from encode_plus and batch_encode_plus. Regarding your question, I see that that batch_encode_plus add ones at the end of the list \" 1, 1, 1, 1, 1, 1]\". and I thought this is this difference may be a reason for the problem.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-22T00:55:30"
"2024-05-22T19:19:58"
null
NONE
null
### Feature request Host the MedImg dataset (similar to Imagenet but for biomedical images). ### Motivation There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community. ### Your contribution MedImg can be found [here](https://www.cuilab.cn/medimg/#).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6911/comments
https://api.github.com/repos/huggingface/datasets/issues/6911/events
https://github.com/huggingface/datasets/pull/6911
2,308,152,711
PR_kwDODunzps5wE2ah
6,911
Remove dead code for non-dict data_files from packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=h1) Report\n> Merging [#6911](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6911/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6911 +/- ##\n==========================================\n+ Coverage 77.81% 80.06% +2.25% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23102 +650 \n+ Misses 6401 5751 -650 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=footer). Last update [4ebb52a...87055d8](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-21T12:10:24"
"2024-05-23T08:05:58"
"2024-05-23T07:59:57"
MEMBER
null
Remove dead code for non-dict data_files from packaged modules. Since the merge of this PR: - #2986 the builders' variable self.config.data_files is always a dict, which makes the condition on (str, list, tuple) dead code.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6911/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6911", "html_url": "https://github.com/huggingface/datasets/pull/6911", "diff_url": "https://github.com/huggingface/datasets/pull/6911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6911.patch", "merged_at": "2024-05-23T07:59:57" }
https://api.github.com/repos/huggingface/datasets/issues/6910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
https://api.github.com/repos/huggingface/datasets/issues/6910/events
https://github.com/huggingface/datasets/pull/6910
2,307,570,084
PR_kwDODunzps5wC2An
6,910
Fix wrong type hints in data_files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-21T07:41:09"
"2024-05-23T06:04:05"
"2024-05-23T05:58:05"
MEMBER
null
Fix wrong type hints in data_files introduced in: - #6493
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6910", "html_url": "https://github.com/huggingface/datasets/pull/6910", "diff_url": "https://github.com/huggingface/datasets/pull/6910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6910.patch", "merged_at": "2024-05-23T05:58:05" }
https://api.github.com/repos/huggingface/datasets/issues/6909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6909/comments
https://api.github.com/repos/huggingface/datasets/issues/6909/events
https://github.com/huggingface/datasets/pull/6909
2,307,508,120
PR_kwDODunzps5wCoiE
6,909
Update requests >=2.32.1 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I personally wouldn't like having a pre-commit hook change all my commits without me being able to see the end result.\r\nOn my setup, I have a pre-push hook that aborts a push if make quality fails. I think if we had an install script, we could handle both options?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi! bring back this because I think in suggest pre-commit instead of `make ...`\r\n\r\nWith the pre-commit, we can see the results/modifications, like by example:\r\n\r\n`git add .`\r\n`git commit -m \"any\"` **this will run the pre-commit**\r\n- if everything it's ok at the pre-commit pipeline, the commit will be created\r\n- else if he modifies something (like black or style hook) he will not create the commit and change the files\r\n - when this occurs, we can see with git diff what the pre-commit change, or can just use the `--show-diff-on-failure` flag when running pre-commit.\r\n\r\nI think that doesn't need everybody use pre-commit, can use both option (the actual format with running manually `make ...` and also using pre-commit) – but maybe don't make sense because will duplicate things? \r\n\r\nA little setup for pre-commit, i have tested here:\r\n\r\nadd `.pre-commit-config.yaml` - \r\n```yml\r\nrepos:\r\n- repo: https://github.com/psf/black\r\n rev: 22.1.0\r\n hooks:\r\n - id: black\r\n- repo: https://github.com/pycqa/isort\r\n rev: 5.10.1\r\n hooks:\r\n - id: isort\r\n name: isort (python)\r\n- repo: https://github.com/PyCQA/flake8\r\n rev: 4.0.1\r\n hooks:\r\n - id: flake8\r\n- repo: local\r\n hooks:\r\n - id: autogenerate_code\r\n name: autogenerate_code\r\n entry: python setup.py deps_table_update\r\n language: python\r\n types: [python]\r\n pass_filenames: false\r\n - id: extra_style_checks\r\n name: extra_style_checks\r\n entry: make extra_style_checks\r\n language: system\r\n```\r\nNote:\r\n - The hooks _autogenerate_code_ and _extra_style_checks_, can be call using the make command or running the python.\r\n\r\nInstall pre-commit:\r\n`pre-commit install`\r\n\r\nModify src/transformers/activations.py:\r\n```diff\r\n@@ -31,7 +31,8 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n+ math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n```console\r\n$ git add -u\r\n$ git commit -m \"test pre-commit pipeline\"\r\n\r\nblack....................................................................Failed\r\n- hook id: black\r\n- files were modified by this hook\r\n\r\nreformatted src/transformers/activations.py\r\n\r\nAll done! ✨ 🍰 ✨\r\n1 file reformatted.\r\n\r\nisort (python)...........................................................Passed\r\nflake8...................................................................Passed\r\nautogenerate_code........................................................Passed\r\nextra_style_checks.......................................................Passed\r\n\r\n$ git status\r\nOn branch master\r\nYour branch is up to date with 'origin/master'.\r\n\r\nChanges to be committed:\r\n (use \"git restore --staged <file>...\" to unstage)\r\n modified: src/transformers/activations.py\r\n\r\nChanges not staged for commit:\r\n (use \"git add <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n modified: src/transformers/activations.py\r\n\r\n$ git diff\r\n--- a/src/transformers/activations.py\r\n+++ b/src/transformers/activations.py\r\n@@ -31,8 +31,7 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n \r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n- math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n\r\n\r\nto show git diff automatically after the pre-commit can add:\r\n```yml\r\n- repo: local\r\n hooks:\r\n - id: git-diff\r\n name: git diff\r\n entry: git diff --exit-code\r\n language: system\r\n pass_filenames: false\r\n always_run: true\r\n```\r\n", "Even though I originally created this thread 1.5 years later I now agree with @sgugger, that I don't want format changes done while pushing - I need to see what has been changed since sometimes the autoformatter messes things up badly and I need to rewrite things to make the end result readable.\r\n\r\nIf this can be done as an option and not a requirement then I'm not against it, but there needs to be a way to validate/reformat files before git is involved.\r\n\r\nBTW, `precommit` can be run manually as well and not via git, which doesn't require `pre-commit install`:\r\n\r\n```\r\npre-commit run --all-files\r\n```\r\n\r\nAnd we have 2 ways to reformat files: `fixup` (fast - only modified files) - `style` (slow)", "yes use pre-commit don't make sense if does not want to always run the pipeline...\r\n\r\nAbout the `fixup` and `style`, i think can be done equal... by default pre-commit will run just in modified files (files at the commit) and if wants to run for all files can do as you shows.\r\nFor me, by default, i think makes sense always just run at modified files. And if the autoformatter messes things we can see, and if we prefer not to use some hook (like the autoformatter that have messed up something), by example run again with `SKIP=black ...`\r\n\r\nAnd the pre-commit tool will not let the commit be created if something fails, if the dev wants “force” the failed hook will need to add the `SKIP=hook ...` before the commit command", "(i personally agree with @sgugger that local hooks are best left as user-level tooling)" ]
"2024-05-21T07:11:20"
"2024-05-21T07:45:58"
"2024-05-21T07:38:25"
MEMBER
null
Update requests >=2.32.1 to fix vulnerability.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6909/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6909", "html_url": "https://github.com/huggingface/datasets/pull/6909", "diff_url": "https://github.com/huggingface/datasets/pull/6909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6909.patch", "merged_at": "2024-05-21T07:38:25" }
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
https://api.github.com/repos/huggingface/datasets/issues/6908/events
https://github.com/huggingface/datasets/issues/6908
2,304,958,116
I_kwDODunzps6JYt6k
6,908
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "login": "guch8017", "id": 38173059, "node_id": "MDQ6VXNlcjM4MTczMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guch8017", "html_url": "https://github.com/guch8017", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "organizations_url": "https://api.github.com/users/guch8017/orgs", "repos_url": "https://api.github.com/users/guch8017/repos", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "received_events_url": "https://api.github.com/users/guch8017/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=h1) Report\n> Merging [#6908](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.71%`.\n> The diff coverage is `87.71%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6908/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6908 +/- ##\n==========================================\n+ Coverage 80.23% 80.95% +0.71% \n==========================================\n Files 161 164 +3 \n Lines 30119 30925 +806 \n==========================================\n+ Hits 24167 25035 +868 \n+ Misses 5952 5890 -62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `26.98% <20.00%> (-0.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <86.76%> (ø)` | |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `97.67% <97.67%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.97% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.87% <100.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=footer). Last update [0f360d3...8c684cc](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome! The model seems quite complex so I didn't really understand all the functionality. \r\nA couple of things from my side:\r\n\r\n1) IMO, it's super useful to have hard coded integration tests in the test file which makes the model a lot easier to maintain (every change can quickly be checked by making sure the model stays mathematically equivalent).\r\n\r\n2) I guess a couple of comments and assert statements would be nice to make the code a bit easier to understand\r\n\r\n3) Personally, I don't like single letter variables. Search replace commands don't work on such variables and it is very difficult to understand what they mean. ", "Thanks for all the comments. I think I replied/addressed all of them except the fast small integration tests, which are going to take a bit more work (starting on this now). Let me know if I missed anything since there are a lot of comments!", "All checkpoints uploaded so I updated the incomplete lists. Also added mention of the model in all indexes, the model summary and the big table of pretrained models (sorry about the diff on that file, Funnel Transformer is one character too long and required to add an extra space on every line).\r\n\r\nShould be good to merge at the beginning of next week!", "@sgugger although you've named the models \"`funnel-base`\", \"`funnel-medium`\" so on so forth, the paper talks about all this in a different format, could a docstring be added saying `funnel-base` is `B4-4-4H768` and same for the rest. If someone wants to replicate the papers' results that would be great.\r\n\r\nedit: my bad, its there in the comments next to the model name, but still would be better in a docstring too. Sorry!\r\n" ]
"2024-05-20T02:43:59"
"2024-05-24T10:58:09"
"2024-05-24T10:58:09"
NONE
null
### Describe the bug When update datasets library to version 2.16+ ( I test it on 2.16, 2.19.0 and 2.19.1), using the following code to load stas/c4-en-10k dataset ```python from datasets import load_dataset, Dataset dataset = load_dataset('stas/c4-en-10k') ``` and then it raise UnicodeDecodeError like ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset builder_instance = load_dataset_builder( File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory raise e1 from None File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read() File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte ``` I found that fs.open loads a gzip file and parses it like plain text using utf-8 encoder. ```python fs = HfFileSystem('https://huggingface.co') fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") data = fs.read() # data is gzip bytes begin with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...' data2 = unzip_gzip_bytes(data) # data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...' ``` ### Steps to reproduce the bug 1. Install datasets between version 2.16 and 2.19 2. Use `datasets.load_dataset` method to load `stas/c4-en-10k` dataset. ### Expected behavior Load dataset normally. ### Environment info Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35 Python = 3.10.14 Datasets = 2.19
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
https://api.github.com/repos/huggingface/datasets/issues/6907/events
https://github.com/huggingface/datasets/issues/6907
2,303,855,833
I_kwDODunzps6JUgzZ
6,907
Support the deserialization of json lines files comprised of lists
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Results for 1):\r\n\r\n```\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 2.575 \r\nType: multiple - Script: True 500 512 3.898 \r\nType: multiple - Script: True 2500 128 13.173 \r\nType: multiple - Script: True 2500 512 18.263 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: False 500 128 3.733 \r\nType: multiple - Script: False 500 512 3.857 \r\nType: multiple - Script: False 2500 128 19.101 \r\nType: multiple - Script: False 2500 512 19.356 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nFor the smaller sequence length 128 we can see a significant speed-up (~30%) - for the longer sequence length 512, the speed-up is much smaller (and only for the bigger list of inputs).", "Results for 2)\r\n\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 0.819 \r\n Type: batched - Script: True 512 512 3.769 \r\n Type: batched - Script: True 4096 128 6.705 \r\n Type: batched - Script: True 4096 512 26.549 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Script: False 512 128 0.837 \r\nType: batched - Script: False 512 512 3.88 \r\nType: batched - Script: False 4096 128 6.75 \r\nType: batched - Script: False 4096 512 27.162 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nHere no clear speed gains can be seen. ", "I'm not sure I understand all the interactions in the benchmarking framework, but I think in line 9 (non-script model) we should be returning torch.jit.trace(model, sample_input), not the untraced model. And the sample input would have be max_length for it to work. That's were most of the gain comes from.\r\nThen the comparison is between using torch.jit.trace() and torch.jit.script(). Or maybe I'm missing some code that does that elsewhere? \r\n\r\n", "Okey, yeah that makes sense! I changed the benchmarking script accordingly and have the following results now: \r\n\r\n1)\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 1.793 \r\nType: multiple - Script: True 500 512 3.628 \r\nType: multiple - Script: True 2500 128 8.774 \r\nType: multiple - Script: True 2500 512 19.471 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Trace: True 500 128 1.83 \r\nType: multiple - Trace: True 500 512 3.783 \r\nType: multiple - Trace: True 2500 128 9.083 \r\nType: multiple - Trace: True 2500 512 20.569 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nand \r\n\r\n2) \r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 1.043 \r\n Type: batched - Script: True 512 512 4.913 \r\n Type: batched - Script: True 4096 128 8.499 \r\n Type: batched - Script: True 4096 512 34.187 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Trace: True 512 128 1.046 \r\nType: batched - Trace: True 512 512 4.916 \r\nType: batched - Trace: True 4096 128 8.042 \r\nType: batched - Trace: True 4096 512 30.874 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\n=> So my understanding is now that `torch.trace(...)` is much more efficient for dynamic input shapes than not using torch.jit at all, but I also don't see how `torch.script(...)` is better than `torch.trace(...)`. If our models are compatible with `torch.trace(...)`, why do we need to have a model that is compatible with `torch.script(...)`? It is definitely more convenient to just call `torch.trace(model)` without having to provide any `input_ids`, but I'm not 100% sure whether it's worth a huge refactoring. \r\n\r\nalso cc @sgugger @LysandreJik ", "We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.", "> We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.\r\n\r\nWas `torch.script()` much faster than `torch.trace()` in your experiments?", "In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n\r\nWhat that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n\r\nUpon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n\r\nThere are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n```\r\nTracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n```", "> In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n> \r\n> What that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n> \r\n> Upon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n> \r\n> There are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n> \r\n> ```\r\n> TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> ```\r\n\r\n@sgugger - what are your thoughts on this? ", "I think adding the scriptable layers seems cleaner to make sure everything works right with scripting/tracing. Not the approach in this PR but the other linked in a comment (@sbrody18 I don't know if you saw my PR to rebase on master for this branch). It ends up with most changes being helpful to read the code (type annotations and asserts) and a few extra classes for the scriptable layers but not much added code.", "@sgugger I agree - I think the extra benefit of the type and None-checking is really helpful to prevent bugs and makes the code better.\r\nI saw your PR late Friday and didn't have time to look into it. Will try to do so by end of day.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-18T05:07:23"
"2024-05-18T08:53:28"
null
NONE
null
### Feature request I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields. Essentially, a line in my json lines file used to look like this: ```json {"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""} ``` And now it looks like this: ```json ["","","","","","","",""] ``` This saves 65 bytes per document and allows me very quickly serialise and deserialise documents via `msgspec`. After making this change, I found that `datasets` was incapable of deserialising my Corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format which is more memory-efficent and faster than using dictionaries. ### Motivation The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that: > In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script. I would rather not require my users to pass `trust_remote_code=True` which means that I will need built-in support for this format. ### Your contribution I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6906/comments
https://api.github.com/repos/huggingface/datasets/issues/6906/events
https://github.com/huggingface/datasets/issues/6906
2,303,679,119
I_kwDODunzps6JT1qP
6,906
irc_disentangle - Issue with splitting data
{ "login": "eor51355", "id": 114260604, "node_id": "U_kgDOBs96fA", "avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eor51355", "html_url": "https://github.com/eor51355", "followers_url": "https://api.github.com/users/eor51355/followers", "following_url": "https://api.github.com/users/eor51355/following{/other_user}", "gists_url": "https://api.github.com/users/eor51355/gists{/gist_id}", "starred_url": "https://api.github.com/users/eor51355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eor51355/subscriptions", "organizations_url": "https://api.github.com/users/eor51355/orgs", "repos_url": "https://api.github.com/users/eor51355/repos", "events_url": "https://api.github.com/users/eor51355/events{/privacy}", "received_events_url": "https://api.github.com/users/eor51355/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-17T23:19:37"
"2024-05-17T23:19:37"
null
NONE
null
### Describe the bug I am trying to access your database through python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message: ValueError: Instruction "train" corresponds to no data! ### Steps to reproduce the bug import datasets ds = datasets.load_dataset('irc_disentangle') ds ### Expected behavior The data is supposed to load into ds and be accessable as such: ds['train'][1050], ds['train'][1055] ### Environment info I tired Python 3.12 and 3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6905/comments
https://api.github.com/repos/huggingface/datasets/issues/6905/events
https://github.com/huggingface/datasets/issues/6905
2,303,098,587
I_kwDODunzps6JRn7b
6,905
Extraction protocol for arrow files is not defined
{ "login": "radulescupetru", "id": 26553095, "node_id": "MDQ6VXNlcjI2NTUzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/radulescupetru", "html_url": "https://github.com/radulescupetru", "followers_url": "https://api.github.com/users/radulescupetru/followers", "following_url": "https://api.github.com/users/radulescupetru/following{/other_user}", "gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}", "starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions", "organizations_url": "https://api.github.com/users/radulescupetru/orgs", "repos_url": "https://api.github.com/users/radulescupetru/repos", "events_url": "https://api.github.com/users/radulescupetru/events{/privacy}", "received_events_url": "https://api.github.com/users/radulescupetru/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=h1) Report\n> Merging [#6905](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc?el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6905/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6905 +/- ##\n==========================================\n- Coverage 79.81% 79.69% -0.13% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n- Hits 23029 22994 -35 \n- Misses 5824 5859 +35 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-4.07%)` | :arrow_down: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=footer). Last update [8f2723c...0037bd4](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thx for fixing this!" ]
"2024-05-17T16:01:41"
"2024-05-17T16:01:41"
null
NONE
null
### Describe the bug Passing files with `.arrow` extension into data_files argument, at least when `streaming=True` is very slow. ### Steps to reproduce the bug Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820) The method then looks at some base known extensions where `arrow` is not defined so it proceeds to determine the compression with the magic number method which is slow when dealing with a lot of files which are stored in s3 and by looking at this predefined list, I don't see `arrow` in there either so in the end it return None: ``` MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = { bytes.fromhex("504B0304"): "zip", bytes.fromhex("504B0506"): "zip", # empty archive bytes.fromhex("504B0708"): "zip", # spanned archive bytes.fromhex("425A68"): "bz2", bytes.fromhex("1F8B"): "gzip", bytes.fromhex("FD377A585A00"): "xz", bytes.fromhex("04224D18"): "lz4", bytes.fromhex("28B52FFD"): "zstd", } ``` ### Expected behavior My expectation is that `arrow` would be in the known lists so it would return None without going through the magic number method. ### Environment info datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6904/comments
https://api.github.com/repos/huggingface/datasets/issues/6904/events
https://github.com/huggingface/datasets/pull/6904
2,302,912,179
PR_kwDODunzps5vzRlD
6,904
Fix decoding multi part extension
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Didn't realize that `postprocess_next_token_scores` mutates its argument." ]
"2024-05-17T14:32:57"
"2024-05-17T14:52:56"
"2024-05-17T14:46:54"
MEMBER
null
e.g. a field named `url.txt` should be a treated as text I also included a small fix to support .npz correctly
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6904/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6904", "html_url": "https://github.com/huggingface/datasets/pull/6904", "diff_url": "https://github.com/huggingface/datasets/pull/6904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6904.patch", "merged_at": "2024-05-17T14:46:54" }
https://api.github.com/repos/huggingface/datasets/issues/6903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6903/comments
https://api.github.com/repos/huggingface/datasets/issues/6903/events
https://github.com/huggingface/datasets/issues/6903
2,300,436,053
I_kwDODunzps6JHd5V
6,903
Add the option of saving in parquet instead of arrow
{ "login": "arita37", "id": 18707623, "node_id": "MDQ6VXNlcjE4NzA3NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arita37", "html_url": "https://github.com/arita37", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "organizations_url": "https://api.github.com/users/arita37/orgs", "repos_url": "https://api.github.com/users/arita37/repos", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "received_events_url": "https://api.github.com/users/arita37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=h1) Report\n> Merging [#6903](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/485da7222f7f9ca9854db1a6df027b00d348d017?el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6903/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6903 +/- ##\n==========================================\n+ Coverage 79.30% 79.59% +0.29% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22882 22966 +84 \n+ Misses 5971 5887 -84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <ø> (+0.67%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <ø> (-34.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <ø> (ø)` | |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=footer). Last update [485da72...e8fd79c](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-16T13:35:51"
"2024-05-17T03:40:04"
null
NONE
null
### Feature request In dataset.save_to_disk('/path/to/save/dataset'), add the option to save in parquet format dataset.save_to_disk('/path/to/save/dataset', format="parquet"), because arrow is not used for Production Big data.... (only parquet) ### Motivation because arrow is not used for Production Big data.... (only parquet) ### Your contribution I can do the testing !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6902/comments
https://api.github.com/repos/huggingface/datasets/issues/6902/events
https://github.com/huggingface/datasets/pull/6902
2,300,256,241
PR_kwDODunzps5vqLIv
6,902
Make CLI convert_to_parquet not raise error if no rights to create script branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting! The PR mentioned above should fix all of those." ]
"2024-05-16T12:21:27"
"2024-05-16T12:57:02"
"2024-05-16T12:51:05"
MEMBER
null
Make CLI convert_to_parquet not raise error if no rights to create "script" branch. Not that before this PR, the error was not critical because it was raised at the end of the script, once all the rest of the steps were already performed. Fix #6901. Related to: - #6809
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6902/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6902", "html_url": "https://github.com/huggingface/datasets/pull/6902", "diff_url": "https://github.com/huggingface/datasets/pull/6902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6902.patch", "merged_at": "2024-05-16T12:51:04" }
https://api.github.com/repos/huggingface/datasets/issues/6901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6901/comments
https://api.github.com/repos/huggingface/datasets/issues/6901/events
https://github.com/huggingface/datasets/issues/6901
2,300,167,465
I_kwDODunzps6JGcUp
6,901
HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I don't see anything blocking with this. Wdyt @sgugger @julien-c ?", "We can give a warning but then the rest of the method will fail. Are you thinking of aborting the save entirely for models that are not `PretrainedModel`s? Also, why are you not inheriting from `PretrainedModel` in your example? Is there something limiting?\r\n\r\nNote that Trainer is not supposed to be a generic training loop, but we can surely make it a bit more flexible.", "Yes, `Trainer` is not a general loop, but it works for custom models as I've tried. Majority of its parts are generalized. `PreTrainedModel` also inherits from `nn.Module`, so users can do that, although its quite common for users to inherit from `nn.Module` directly. I'm not sure how the method will fail ? We can just add a warning instead of raising a `ValueError`. The reason why I'm saying is that users would want to do more than just what `transformers` provide out of the box (for instance justing using `AutoModel` and not `SequenceClassification` models (I'm seeing a growing interest in using such models). I think `nlp` is heading towards that direction (making everything general). This works fine for all cases, I guess:\r\n```\r\nfrom types import MethodType\r\n\r\ndef _save(self, output_dir: Optional[str] = None):\r\n output_dir = output_dir if output_dir is not None else self.args.output_dir\r\n os.makedirs(output_dir, exist_ok=True)\r\n logger.info(\"Saving model checkpoint to %s\", output_dir)\r\n\r\n torch.save(\r\n {\"model_state_dict\": self.model.state_dict()},\r\n os.path.join(output_dir, \"pytorch_model.bin\"),\r\n )\r\n\r\n # Good practice: save your training arguments together with the trained model\r\n torch.save(self.args, os.path.join(output_dir, \"training_args.bin\"))\r\n\r\ntrainer._save = MethodType(_save, trainer)\r\n```\r\nWhere do you think the approach may not work ? After providing the warning, its upto users if they further want to make changes by overriding this method (they would know that `transformers` is not responsible anymore since its not a `PreTrainedModel`. Current method completely breaks the training due to `ValueError`.\r\nThis is optional, I felt that it would be useful to have. I'll open a PR if you approve.", "`save_pretrained` does more than the method you mention, but we could refactor the code inside to work with all models probably. I don't see any place it uses specific stuff from `PretrainedModel`. The thing we don't want is to add and maintain too generic code, but if it's easy enough I see no objection.\r\n\r\nYou didn't tell me why subclassing `PreTrainedModel` did not work however ;-) That is what I would expect a user building a custom model using transformers to do .", "The `PreTrainedModel` is a generic class amongst all models in `transformers`, all classes pertaining to it comply in terms of the methods it provides and can use functionalities such as `init_weights`, `prune_heads`. They might not work for custom models. For instance, some methods require `.config.` attribute which custom models may not directly have. I guess one can define their custom model to be exactly what `PreTrainedModel` requires them to be (haven't looked into that), but that would be asking users to read through what `PreTrainedModel` expects or maybe specifying in docs. It's totally up to you what you expect the users to do in case they use custom models.", "After some internal discussion with @julien-c we will lower the requirement from `PreTrainedModel` to some lower abstractclass/protocol so the user knows exactly what they have to implement for their model to work seamlessly with `Trainer`. I will work on this end of this week beginning of next. ", "Sounds good. I'll look forward to that part then." ]
"2024-05-16T11:40:22"
"2024-05-16T12:51:06"
"2024-05-16T12:51:06"
MEMBER
null
CLI convert_to_parquet cannot create "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status response.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status raise HfHubHTTPError(message, response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696) 403 Forbidden: Forbidden: cannot write to script. Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script. If you are trying to create or update content,make sure you have a token with the `write` role. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6900/comments
https://api.github.com/repos/huggingface/datasets/issues/6900/events
https://github.com/huggingface/datasets/issues/6900
2,298,489,733
I_kwDODunzps6JACuF
6,900
[WebDataset] KeyError with user-defined `Features` when a field is missing in an example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "DistilBERT can support sentence pair-like inputs but does not make use of token type IDs. It detects sentence pairs according to the special tokens. cc @VictorSanh ", "@Yusifu Did you find a solution for this problem? I'm also doing sentence-pair classification (NLI) with Distilbert.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-15T17:48:34"
"2024-05-15T17:48:49"
null
MEMBER
null
reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1 ``` File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6899/comments
https://api.github.com/repos/huggingface/datasets/issues/6899/events
https://github.com/huggingface/datasets/issues/6899
2,298,059,597
I_kwDODunzps6I-ZtN
6,899
List of dictionary features get standardized
{ "login": "sohamparikh94", "id": 11831521, "node_id": "MDQ6VXNlcjExODMxNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohamparikh94", "html_url": "https://github.com/sohamparikh94", "followers_url": "https://api.github.com/users/sohamparikh94/followers", "following_url": "https://api.github.com/users/sohamparikh94/following{/other_user}", "gists_url": "https://api.github.com/users/sohamparikh94/gists{/gist_id}", "starred_url": "https://api.github.com/users/sohamparikh94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sohamparikh94/subscriptions", "organizations_url": "https://api.github.com/users/sohamparikh94/orgs", "repos_url": "https://api.github.com/users/sohamparikh94/repos", "events_url": "https://api.github.com/users/sohamparikh94/events{/privacy}", "received_events_url": "https://api.github.com/users/sohamparikh94/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hey @wulaoshi - I don't fully understand your question. Could you maybe post such a higher level question on the forum at `discuss.huggingface.co` ? :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-15T14:11:35"
"2024-05-15T14:11:35"
null
NONE
null
### Describe the bug Hi, i’m trying to create a HF dataset from a list using Dataset.from_list. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature? ### Steps to reproduce the bug ``` from datasets import Dataset # Define a function to generate a sample with "tools" feature def generate_sample(): # Generate random sample data sample_data = { "text": "Sample text", "feature_1": [] } # Add feature_1 with random keys for this sample feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys sample_data["feature_1"].extend(feature_1) return sample_data # Generate multiple samples num_samples = 10 samples = [generate_sample() for _ in range(num_samples)] # Create a Hugging Face Dataset dataset = Dataset.from_list(samples) dataset[0] ``` ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}``` ### Expected behavior ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}``` ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6898/comments
https://api.github.com/repos/huggingface/datasets/issues/6898/events
https://github.com/huggingface/datasets/pull/6898
2,294,432,108
PR_kwDODunzps5vWJ9v
6,898
Fix YAML error in README files appearing on GitHub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=h1) Report\n> Merging [#6898](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6898/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6898 +/- ##\n==========================================\n+ Coverage 79.61% 80.62% +1.00% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23241 +290 \n+ Misses 5875 5585 -290 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.10% <0.00%> (-3.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=footer). Last update [d822ab6...6b67e49](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-14T05:21:57"
"2024-05-16T14:36:57"
"2024-05-16T14:28:16"
MEMBER
null
Fix YAML error in README files appearing on GitHub. See error message: ![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05) Fix #6897.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6898/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6898", "html_url": "https://github.com/huggingface/datasets/pull/6898", "diff_url": "https://github.com/huggingface/datasets/pull/6898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6898.patch", "merged_at": "2024-05-16T14:28:16" }
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "login": "bghira", "id": 59658056, "node_id": "MDQ6VXNlcjU5NjU4MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bghira", "html_url": "https://github.com/bghira", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "organizations_url": "https://api.github.com/users/bghira/orgs", "repos_url": "https://api.github.com/users/bghira/repos", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "received_events_url": "https://api.github.com/users/bghira/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=h1) Report\n> Merging [#6897](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6897/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6897 +/- ##\n==========================================\n+ Coverage 79.61% 80.39% +0.77% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23174 +223 \n+ Misses 5875 5652 -223 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `57.29% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=footer). Last update [d822ab6...b6c59a1](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-13T17:33:59"
"2024-05-16T14:28:17"
"2024-05-16T14:28:17"
NONE
null
### Describe the bug There is a YAML error at the top of the page, and I don't think it's supposed to be there ### Steps to reproduce the bug 1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md) 2. Observe a big red error at the top 3. The rest of the document remains functional ### Expected behavior I think the YAML block should be displayed or ignored. ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6896/comments
https://api.github.com/repos/huggingface/datasets/issues/6896/events
https://github.com/huggingface/datasets/issues/6896
2,293,176,061
I_kwDODunzps6Irxb9
6,896
Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset
{ "login": "finiteautomata", "id": 167943, "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finiteautomata", "html_url": "https://github.com/finiteautomata", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "repos_url": "https://api.github.com/users/finiteautomata/repos", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-13T15:41:57"
"2024-05-13T15:44:48"
null
NONE
null
### Describe the bug While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error: ```python --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) [<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 2151 # Download and prepare data -> 2152 builder_instance.download_and_prepare( 2153 download_config=download_config, 2154 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 946 if num_proc is not None: 947 prepare_split_kwargs["num_proc"] = num_proc --> 948 self._download_and_prepare( 949 dl_manager=dl_manager, 950 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1059 1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1061 verify_splits(self.info.splits, split_dict) 1062 1063 # Update the info object with the splits. [/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits) 98 ] 99 if len(bad_splits) > 0: --> 100 raise NonMatchingSplitsSizesError(str(bad_splits)) 101 logger.info("All the splits matched successfully.") 102 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}] ``` I think I had this dataset updated, might be related to #6271 It is working fine as late in `2.10.0` , but not in `2.13.0` onwards. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("pysentimiento/spanish-tweets-small") ``` You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg) ### Expected behavior Load the dataset without any error ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - PyArrow version: 14.0.2 - Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6896/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6895/comments
https://api.github.com/repos/huggingface/datasets/issues/6895/events
https://github.com/huggingface/datasets/pull/6895
2,292,993,156
PR_kwDODunzps5vRK8P
6,895
Document that to_json defaults to JSON Lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-13T14:22:34"
"2024-05-16T14:37:25"
"2024-05-16T14:31:26"
MEMBER
null
Document that `Dataset.to_json` defaults to JSON Lines, by adding explanation in the corresponding docstring. Fix #6894.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6895/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6895", "html_url": "https://github.com/huggingface/datasets/pull/6895", "diff_url": "https://github.com/huggingface/datasets/pull/6895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6895.patch", "merged_at": "2024-05-16T14:31:26" }
https://api.github.com/repos/huggingface/datasets/issues/6894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
https://api.github.com/repos/huggingface/datasets/issues/6894/events
https://github.com/huggingface/datasets/issues/6894
2,292,840,226
I_kwDODunzps6Iqfci
6,894
Better document defaults of to_json
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "You can try the following changes:\r\n\r\n```python\r\nfrom transformers import BertPreTrainedModel, RobertaModel, ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST, RobertaConfig\r\n\r\nclass RobertaForMD(BertPreTrainedModel): # Metaphor Detection, modified from BertForTokenClassification\r\n config_class = RobertaConfig\r\n pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST\r\n base_model_prefix = \"roberta\"\r\n \r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.num_labels = config.num_labels\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-13T13:30:54"
"2024-05-16T14:31:27"
"2024-05-16T14:31:27"
MEMBER
null
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/). Related to: - #6891
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6893/comments
https://api.github.com/repos/huggingface/datasets/issues/6893/events
https://github.com/huggingface/datasets/pull/6893
2,292,677,439
PR_kwDODunzps5vQFEv
6,893
Close gzipped files properly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=h1) Report\n> Merging [#6893](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.46%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6893/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6893 +/- ##\n==========================================\n+ Coverage 79.61% 80.08% +0.46% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23086 +135 \n+ Misses 5875 5740 -135 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=footer). Last update [d822ab6...3979cda](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "👍 " ]
"2024-05-13T12:24:39"
"2024-05-13T13:53:17"
"2024-05-13T13:01:54"
MEMBER
null
close https://github.com/huggingface/datasets/issues/6877
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6893/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6893/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6893", "html_url": "https://github.com/huggingface/datasets/pull/6893", "diff_url": "https://github.com/huggingface/datasets/pull/6893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6893.patch", "merged_at": "2024-05-13T13:01:54" }
https://api.github.com/repos/huggingface/datasets/issues/6892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6892/comments
https://api.github.com/repos/huggingface/datasets/issues/6892/events
https://github.com/huggingface/datasets/pull/6892
2,291,201,347
PR_kwDODunzps5vLIlp
6,892
Add support for categorical/dictionary types
{ "login": "EthanSteinberg", "id": 342233, "node_id": "MDQ6VXNlcjM0MjIzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/342233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanSteinberg", "html_url": "https://github.com/EthanSteinberg", "followers_url": "https://api.github.com/users/EthanSteinberg/followers", "following_url": "https://api.github.com/users/EthanSteinberg/following{/other_user}", "gists_url": "https://api.github.com/users/EthanSteinberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/EthanSteinberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanSteinberg/subscriptions", "organizations_url": "https://api.github.com/users/EthanSteinberg/orgs", "repos_url": "https://api.github.com/users/EthanSteinberg/repos", "events_url": "https://api.github.com/users/EthanSteinberg/events{/privacy}", "received_events_url": "https://api.github.com/users/EthanSteinberg/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Those requirements are all in [here](https://github.com/huggingface/transformers/blob/master/examples/requirements.txt). Are you sure you ran `pip install -r ./examples/requirements.txt` as mentioned in the [README of all examples](https://github.com/huggingface/transformers/tree/master/examples#important-note)?\r\n\r\nThey are not, and won't be requirements of the main library since they are only used for some specific tasks.", "Huh, probably just a local issue then. Thanks!" ]
"2024-05-12T07:15:08"
"2024-05-12T07:15:37"
null
NONE
null
Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column. Unfortunately, huggingface datasets currently does not support this type. So huggingface datasets cannot natively read many parquet files that use this datatype .This PR adds support for Huggingface Datasets to read categorical/dictionary data. Note: This PR functions by simply converting those dictionary/categorical types to strings. This means that huggingface datasets cannot take advantage of the compute benefits of categoricals, but it significantly simplifies logic. At this time, I do not think it makes sense to optimize categorical support within huggingface datasets and that we should only try to optimize later, if necessary. Closes #5706
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6892/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6892", "html_url": "https://github.com/huggingface/datasets/pull/6892", "diff_url": "https://github.com/huggingface/datasets/pull/6892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6892.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6891/comments
https://api.github.com/repos/huggingface/datasets/issues/6891/events
https://github.com/huggingface/datasets/issues/6891
2,291,118,869
I_kwDODunzps6Ij7MV
6,891
Unable to load JSON saved using `to_json`
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.\r\n\r\nHappy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the fair GitHub. ", "> I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.\r\n> \r\n> Happy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the fair GitHub.\r\n\r\nI already did, just in case I wanted to report the bug in here. Thank you anyway!", "Don't hesitate to reopen if it ends up being on our side, with a small repro using only transformers ideally.", "It ended being on flair side. Here I'll attached the link for future references [/flairNLP/flair/issues/1841](https://github.com/flairNLP/flair/issues/1841)" ]
"2024-05-12T01:02:51"
"2024-05-16T14:32:55"
"2024-05-12T07:02:02"
NONE
null
### Describe the bug Datasets stored in the JSON format cannot be loaded using `json.load()` ### Steps to reproduce the bug ``` import json from datasets import load_dataset dataset = load_dataset("squad") train_dataset, test_dataset = dataset["train"], dataset["validation"] test_dataset.to_json("full_dataset.json") # This works loaded_test = load_dataset("json", data_files="full_dataset.json") # This fails loaded_test = json.load(open("full_dataset.json", "r")) ``` ### Expected behavior The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`. ### Environment info Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6891/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
https://api.github.com/repos/huggingface/datasets/issues/6890/events
https://github.com/huggingface/datasets/issues/6890
2,288,699,041
I_kwDODunzps6Iasah
6,890
add `with_transform` and/or `set_transform` to IterableDataset
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=h1) Report\n> Merging [#6890](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6890/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6890 +/- ##\n==========================================\n- Coverage 80.05% 79.56% -0.50% \n==========================================\n Files 157 157 \n Lines 28822 28822 \n==========================================\n- Hits 23074 22932 -142 \n- Misses 5748 5890 +142 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-48.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.05%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=footer). Last update [3726754...eb044f1](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-10T01:00:12"
"2024-05-10T01:00:46"
null
NONE
null
### Feature request when working with a really large dataset it would save us a lot of time (and compute resources) to use either with_transform or the set_transform from the Dataset class instead of waiting for the entire dataset to map ### Motivation don't want to wait for a really long dataset to map, this would give IterableDataset an extra advantage over the Dataset class. reducing time and resources ### Your contribution I am a little busy with my job search lately, but would post about this feature in my social media. Apologies again (dad going to kick me out soon), if I ever have some free time I will contribute to making this a reality, but that's going to be hard     / (┬┬﹏┬┬)\
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6889/comments
https://api.github.com/repos/huggingface/datasets/issues/6889/events
https://github.com/huggingface/datasets/pull/6889
2,287,720,539
PR_kwDODunzps5u_hW-
6,889
fix bug #6877
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=h1) Report\n> Merging [#6889](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6889/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6889 +/- ##\n==========================================\n+ Coverage 79.06% 79.71% +0.64% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n+ Hits 22789 22976 +187 \n+ Misses 6034 5847 -187 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=footer). Last update [3119926...e119e42](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-09T13:38:40"
"2024-05-13T13:35:32"
"2024-05-13T13:35:32"
NONE
null
fix bug #6877 due to maybe f becomes invaild after yield process the results are below: Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26148.48it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 409731.44it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 289720.84it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26663.42it/s] Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 434056.21it/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 13222.33files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:04<00:00, 180.67files/s] Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [01:35<00:00, 8.70files/s] Generating train split: 1571592 examples [00:08, 176736.09 examples/s] Generating test split: 85533 examples [00:01, 48224.56 examples/s] Generating validation split: 86246 examples [00:01, 50164.16 examples/s] Fix https://github.com/huggingface/datasets/issues/6877. CC: @natolambert
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6889/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6889", "html_url": "https://github.com/huggingface/datasets/pull/6889", "diff_url": "https://github.com/huggingface/datasets/pull/6889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6889.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6888/comments
https://api.github.com/repos/huggingface/datasets/issues/6888/events
https://github.com/huggingface/datasets/pull/6888
2,287,169,676
PR_kwDODunzps5u9omr
6,888
Support WebDataset containing file basenames with dots
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=h1) Report\n> Merging [#6888](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **decrease** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6888/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6888 +/- ##\n==========================================\n- Coverage 79.06% 78.22% -0.85% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n- Hits 22789 22546 -243 \n- Misses 6034 6277 +243 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |\n| ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=footer). Last update [3119926...157d717](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks @mrm8488 , cc @dccuchile" ]
"2024-05-09T08:25:30"
"2024-05-10T13:54:06"
"2024-05-10T13:54:06"
MEMBER
null
Support WebDataset containing file basenames with dots. Fix #6880.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6888/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6888", "html_url": "https://github.com/huggingface/datasets/pull/6888", "diff_url": "https://github.com/huggingface/datasets/pull/6888.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6888.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6887/comments
https://api.github.com/repos/huggingface/datasets/issues/6887/events
https://github.com/huggingface/datasets/issues/6887
2,286,786,396
I_kwDODunzps6ITZdc
6,887
FAISS load to None
{ "login": "brainer3220", "id": 40418544, "node_id": "MDQ6VXNlcjQwNDE4NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brainer3220", "html_url": "https://github.com/brainer3220", "followers_url": "https://api.github.com/users/brainer3220/followers", "following_url": "https://api.github.com/users/brainer3220/following{/other_user}", "gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}", "starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions", "organizations_url": "https://api.github.com/users/brainer3220/orgs", "repos_url": "https://api.github.com/users/brainer3220/repos", "events_url": "https://api.github.com/users/brainer3220/events{/privacy}", "received_events_url": "https://api.github.com/users/brainer3220/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-09T02:43:50"
"2024-05-16T20:44:23"
null
NONE
null
### Describe the bug I've use FAISS with Datasets and save to FAISS. Then load to save FAISS then no error, then ds to None ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Steps to reproduce the bug # 1. ```python ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss') ``` # 2. ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Expected behavior Add column in Datasets. ### Environment info Google Colab, SageMaker Notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6887/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6886/comments
https://api.github.com/repos/huggingface/datasets/issues/6886/events
https://github.com/huggingface/datasets/issues/6886
2,286,328,984
I_kwDODunzps6IRpyY
6,886
load_dataset with data_dir and cache_dir set fail with not supported
{ "login": "fah", "id": 322496, "node_id": "MDQ6VXNlcjMyMjQ5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fah", "html_url": "https://github.com/fah", "followers_url": "https://api.github.com/users/fah/followers", "following_url": "https://api.github.com/users/fah/following{/other_user}", "gists_url": "https://api.github.com/users/fah/gists{/gist_id}", "starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fah/subscriptions", "organizations_url": "https://api.github.com/users/fah/orgs", "repos_url": "https://api.github.com/users/fah/repos", "events_url": "https://api.github.com/users/fah/events{/privacy}", "received_events_url": "https://api.github.com/users/fah/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=h1) Report\n> Merging [#6886](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `1.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6886/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6886 +/- ##\n==========================================\n+ Coverage 79.06% 80.10% +1.04% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n+ Hits 22789 23089 +300 \n+ Misses 6034 5734 -300 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+0.65%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=footer). Last update [3119926...0e167b5](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-08T19:52:35"
"2024-05-08T19:58:11"
null
NONE
null
### Describe the bug with python 3.11 I execute: ```py from transformers import Wav2Vec2Processor, Data2VecAudioModel import torch from torch import nn from datasets import load_dataset, concatenate_datasets # load demo audio and set processor dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ``` This fails in the last line with ```log Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7) Traceback (most recent call last): File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module> dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` ### Steps to reproduce the bug I setup an venv with requirements.txt ```txt transformers==4.40.2 torch==2.2.2 datasets==2.16.0 fsspec==2023.9.2 ``` pip freeze is: ``` aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.16.0 dill==0.3.7 filelock==3.14.0 frozenlist==1.4.1 fsspec==2023.9.2 huggingface-hub==0.23.0 idna==3.7 Jinja2==3.1.4 MarkupSafe==2.1.5 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 networkx==3.3 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.0.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 regex==2024.4.28 requests==2.31.0 safetensors==0.4.3 six==1.16.0 sympy==1.12 tokenizers==0.19.1 torch==2.2.2 tqdm==4.66.4 transformers==4.40.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4 ``` I execute this on a M1 Mac. ### Expected behavior I don't understand the error message. Why is "local" caching not supported. Would it possible to give some additional hint with the error message how to solve this issue? ### Environment info source .... python -u example.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6886/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6885/comments
https://api.github.com/repos/huggingface/datasets/issues/6885/events
https://github.com/huggingface/datasets/pull/6885
2,285,115,400
PR_kwDODunzps5u2urB
6,885
Support jax 0.4.27 in CI tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@sshleifer want to update the inference API so that the correct pipeline shows up at https://huggingface.co/akhooli/mbart-large-cc25-en-ar ? (cc @mfuntowicz)", "Seems fixed?\r\nhttps://huggingface.co/akhooli/mbart-large-cc25-en-ar\r\n![image](https://user-images.githubusercontent.com/6045025/92133236-22273300-edd6-11ea-866e-d7249f38f792.png)\r\n", "> \r\n> \r\n> Seems fixed?\r\n> https://huggingface.co/akhooli/mbart-large-cc25-en-ar\r\n> ![image](https://user-images.githubusercontent.com/6045025/92133236-22273300-edd6-11ea-866e-d7249f38f792.png)\r\nSure, just after the model card was merged. Not sure if it was due to the 'translation' tag in the card or some other magic done by your team.", "Just uploaded https://huggingface.co/akhooli/mbart-large-cc25-ar-en and it seems inference type is not recognized automatically. It defaults to fill-mask (model card submitted).", "model card merged." ]
"2024-05-08T09:19:37"
"2024-05-08T09:43:19"
"2024-05-08T09:35:16"
MEMBER
null
Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists). Fix #6884.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6885/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6885", "html_url": "https://github.com/huggingface/datasets/pull/6885", "diff_url": "https://github.com/huggingface/datasets/pull/6885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6885.patch", "merged_at": "2024-05-08T09:35:16" }
https://api.github.com/repos/huggingface/datasets/issues/6884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6884/comments
https://api.github.com/repos/huggingface/datasets/issues/6884/events
https://github.com/huggingface/datasets/issues/6884
2,284,839,687
I_kwDODunzps6IL-MH
6,884
CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=h1) Report\n> Merging [#6884](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.22%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6884/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6884 +/- ##\n==========================================\n- Coverage 80.05% 79.83% -0.23% \n==========================================\n Files 157 157 \n Lines 28822 28823 +1 \n==========================================\n- Hits 23074 23010 -64 \n- Misses 5748 5813 +65 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.18% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.32%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=footer). Last update [3726754...8f97406](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-08T07:01:47"
"2024-05-08T09:35:17"
"2024-05-08T09:35:17"
MEMBER
null
After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error: ```Python traceback AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? ``` See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153 ```Python traceback ___________________ FormatterTest.test_jax_formatter_device ____________________ [gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device> @require_jax def test_jax_formatter_device(self): import jax from datasets.formatting import JaxFormatter pa_table = self._create_dummy_table() device = jax.devices()[0] formatter = JaxFormatter(device=str(device)) row = formatter.format_row(pa_table) > assert row["a"].device() == device E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? tests/test_formatting.py:630: AttributeError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6884/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6883/comments
https://api.github.com/repos/huggingface/datasets/issues/6883/events
https://github.com/huggingface/datasets/pull/6883
2,284,808,399
PR_kwDODunzps5u1sL1
6,883
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-08T06:43:29"
"2024-05-21T18:37:55"
"2024-05-16T14:34:02"
MEMBER
null
Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset. The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3 The bug #6881 was introduced in datasets-2.19.0 by this PR: - #6739 Fix #6881.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6883/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6883", "html_url": "https://github.com/huggingface/datasets/pull/6883", "diff_url": "https://github.com/huggingface/datasets/pull/6883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6883.patch", "merged_at": "2024-05-16T14:34:02" }
https://api.github.com/repos/huggingface/datasets/issues/6882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6882/comments
https://api.github.com/repos/huggingface/datasets/issues/6882/events
https://github.com/huggingface/datasets/issues/6882
2,284,803,158
I_kwDODunzps6IL1RW
6,882
Connection Error When Using By-pass Proxies
{ "login": "MRNOBODY-ZST", "id": 78351684, "node_id": "MDQ6VXNlcjc4MzUxNjg0", "avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MRNOBODY-ZST", "html_url": "https://github.com/MRNOBODY-ZST", "followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers", "following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}", "gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}", "starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions", "organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs", "repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos", "events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}", "received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I understand it makes the code slightly cleaner; in terms of speed it is most likely negligible (compared to the embedding lookup, for example).\r\n\r\nBut not sure what to do now as all the pretrained models (that used a lot of compute to pretrain) don't work anymore in the new update.", "Hey @Laksh1997 - note that this line does not break anything. You can neglect warnings about `position_ids` since those are created at instantiation. Will open a PR to fix the warning", "@patrickvonplaten seems to break it for me:\r\n\r\n```\r\n\r\n16:43:52\r\nTraceback (most recent call last):\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/bin/transformervae\", line 33, in <module>\r\n\r\n16:43:52\r\nsys.exit(load_entry_point('exs-transformervae', 'console_scripts', 'transformervae')())\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 829, in __call__\r\n\r\n16:43:52\r\nreturn self.main(*args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 782, in main\r\n\r\n16:43:52\r\nrv = self.invoke(ctx)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 1259, in invoke\r\n\r\n16:43:52\r\nreturn _process_result(sub_ctx.command.invoke(sub_ctx))\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n\r\n16:43:52\r\nreturn ctx.invoke(self.callback, **ctx.params)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n\r\n16:43:52\r\nreturn callback(*args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/cli.py\", line 355, in train\r\n\r\n16:43:52\r\nmodel = model_cls(hparams, pretrained_model=pretrained_model_path_or_config)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/regression.py\", line 35, in __init__\r\n\r\n16:43:52\r\npretrained_model,\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/finetuning_model.py\", line 37, in __init__\r\n\r\n16:43:52\r\nself.encoder, self.tokenizer = self.load_pretrained_encoder(pretrained_model)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/finetuning_model.py\", line 89, in load_pretrained_encoder\r\n\r\n16:43:52\r\npl_model = AutoModel.load(pretrained_model)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/automodel.py\", line 98, in load\r\n\r\n16:43:52\r\nreturn model_cls.load(path)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/base.py\", line 229, in load\r\n\r\n16:43:52\r\nreturn cls.load_from_checkpoint(filepath)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py\", line 169, in load_from_checkpoint\r\n\r\n16:43:52\r\nmodel = cls._load_model_state(checkpoint, *args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py\", line 207, in _load_model_state\r\n\r\n16:43:52\r\nmodel.load_state_dict(checkpoint['state_dict'])\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1045, in load_state_dict\r\n\r\n16:43:52\r\nself.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n\r\n16:43:52\r\nRuntimeError: Error(s) in loading state_dict for ElectraLanguageModel:\r\n\r\n16:43:52\r\nMissing key(s) in state_dict: \"generator_model.electra.embeddings.position_ids\", \"discriminator_model.electra.embeddings.position_ids\".\r\n```", "Note, `generator_model.electra` is `ElectraModel`, which uses `BertEmbeddings`.", "Can you send me a code snippet so that I can reproduce your error? \r\n", "It's a big library. But I can try to recreate in a Colab. One sec.", "@patrickvonplaten Colab: https://colab.research.google.com/drive/167CwTImG5T-4c9xeIVEkH9Xrracbn30h?usp=sharing\r\n\r\nLet me know if you can access?", "It also breaks to me. The attribute embedding.position_ids can't be loaded if the model artifact is trained with v3.0.2. So it will raise an KeyError", "Hey @Laksh1997, I can't access the notebook - could you make it public for everybody to see? :-) ", "@patrickvonplaten apologies. Here is the script:\r\n\r\n```python\r\n!pip install transformers==3.0.2\r\n\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = model.state_dict()\r\ntorch.save(state_dict, 'checkpoint.pt')\r\n```\r\n\r\n```python\r\n!pip install transformers==3.1.0\r\n\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = torch.load('checkpoint.pt')\r\nmodel.load_state_dict(state_dict)\r\n\r\n```", "I encountered the same issue. Old checkpoints (3.0.2) can not be loaded in (3.1.0) due to KeyError.", "@Barcavin @easonnie As a temporary fix, I've just reverted back to 3.0.2. @patrickvonplaten I am hoping something can be done !", "Hi, while we work on patching this issue, you can still use version v3.1.0 by using the `from_pretrained` method. Taking @Laksh1997's example, you would do:\r\n\r\n1. Save the checkpoint in `saved_model_location/pytorch_model.bin`\r\n\r\n```py\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = model.state_dict()\r\ntorch.save(state_dict, 'saved_model_location/pytorch_model.bin')\r\n```\r\n\r\n2. Load it using the method `.from_pretrained`\r\n\r\n```py\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel.from_pretrained(\"saved_model_location\", config=ElectraConfig())\r\n``` ", "You can also use the `load_state_dict` method with the `strict` option set to `False`:\r\n\r\n```py\r\nmodel.load_state_dict(state_dict, strict=False)\r\n```", "The reason this additional buffer is here now is due to this [PR](https://github.com/huggingface/transformers/pull/5773#issue-449530988). \r\n\r\nIs there a reason why you would use the `load_state_dict` instead of `from_pretrained`, as `from_pretrained` exists in part to prevent such issues from happening?", "Hi @LysandreJik \r\n\r\nThanks for the proposed solution. \r\n\r\nIn my case, I am using Pytorch Lightning which has its own saving and loading infrastructure. Thus the `from_pretrained` method can't exactly be used.\r\n\r\nThe `strict` flag is a good patch for now.\r\n\r\nI think, in general, when building on top of the library, for complex projects one cannot rely on `from_pretrained`, especially if using other ecosystems.", "Using the `strict` flag can enable a number of errors to go undetected, so I would refrain from using it. I think the best solution is to use version 3.0.2 for already trained models until the fix comes out.", "Any update on this @LysandreJik @patrickvonplaten ?", "As the `torch.load` method in `strict` mode does not allow unexpected/missing keys, this is an issue that won't be resolved. Three options are available here:\r\n- Use the recommended `from_pretrained` method, which exists specifically to work around this kind of issues\r\n- Use the `torch.load` method with the `strict` flag set to `False`\r\n- Pin to version v3.0.2 if none of these can be applied.\r\n\r\nMinor changes in model infrastructure can unfortunately happen as we try to optimize for efficiency, which will lead to this kind of issues. We're internally working on having our models on the hub be versionable, which should solve most of these problems. It's at least a couple of months away, however.", "@LysandreJik That is unfortunate that the library will probably have to be pinned, as the first two options are unviable for reasons described in this thread. Especially because pretraining large models is computationally quite expensive (100s of GPU hours)...", "You can also use the work-around explained [here](https://github.com/huggingface/transformers/issues/6882#issuecomment-685509938) if you want to convert your weights to the updated architecture.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Just wanted to add that there is another non-trivial reason why `from_pretrained` might not be useful in all cases: fine-tuning. If I fine-tune BERT's weights on a specific dataset, most likely I will have to use `load_state_dict` afterwards to use the new weights, rather than the original weights that `from_pretrained` would load.", "@LysandreJik @Laksh1997 Setting the [persistent flag ](https://pytorch.org/docs/master/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.register_buffer)to False when registering the buffer will avoid adding it to the state_dict and can address the BC issue. ", "Hello there, \r\n\r\nI encountered the same problem. I was using transformers version 4.7.0; but the checkpoint was trained with transformer 3.0.2. I just did `pip uninstall transformers`, and then `pip install transformers==3.0.2` for running the training. Presumably, you can try: `model.load_state_dict(state_dict, strict=False)` as well. However, I don't feel comfortable with the latter solution since I think that might affect the model performance in abstraction –since `position_ids` **might** be used by the model, and putting some random values when it's not present in pre-trained checkpoint might ruin the performance. So safer way is to down-grade the transformers, in my opinion. \r\n\r\nHope this helps you out!", "Can someone confirm if the `position_ids` are used by the model and by not loading it correctly would it affect the performance of the model in transfer learning or continuing to train or inference? Thank you", "I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182\r\n\r\nAs such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.", "> I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:\r\n> \r\n> https://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182\r\n> \r\n> As such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.\r\n\r\nThank you very much @dfdazac for your detailed reply. " ]
"2024-05-08T06:40:14"
"2024-05-17T06:38:30"
null
NONE
null
### Describe the bug I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on the hugginface, but I think I didn't see the detailed instruction on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software like Clash / ShadosocksR etc. 2. export system varibles to the port provided by your proxy software in wsl (It's ok for other applications to use proxy expect dataset-library) 3. load any dataset from hugginface online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3) [1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric ----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) [44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) [45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash) ---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) [2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) [2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = metric_module_factory( [2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path, [2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision, [2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config, [2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode, [2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code, [2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path [2111](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2111) metric_cls = import_main_class(metric_module, dataset=False) [2112](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2112) metric = metric_cls( [2113](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2113) config_name=config_name, [2114](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2114) process_id=process_id, ... --> [633](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:633) raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") [634](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:634) elif response is not None: [635](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:635) raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6882/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6881/comments
https://api.github.com/repos/huggingface/datasets/issues/6881/events
https://github.com/huggingface/datasets/issues/6881
2,284,794,009
I_kwDODunzps6ILzCZ
6,881
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "It was just the naming of \"layer_norm\" instead of \"LayerNorm\" I changed the script and now it works.", "@blueberry-cake which script was that naming in? ", "@blueberry-cake could you tell me the details of how you solve this problem? I have this problem, too,I do not understand the word \"It was just the naming of \"layer_norm\" instead of \"LayerNorm\" I changed the script and now it works.\" Thanks for your help in advance!", "Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw ", "> Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw\r\n\r\nmaybe you could use he latest transformer have a try", "No it still doesn't work. Sign :(", "So I solved this problem with other people's help. Basically, I need to change the key name in my tf1 checkpoints. Here is the code. For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC\r\n\r\n```\r\nimport re\r\ndef change_name(checkpoint_path, output_prefix):\r\n ckpt = tf.train.Checkpoint(vars={name: variable}) \r\n ckpt.restore(converted_ckpt_path)\r\n \"\"\"\r\n Args:\r\n checkpoint_path: Path to the TF1 checkpoint.\r\n output_prefix: Path prefix to the converted checkpoint.\r\n\r\n Returns:\r\n Path to the converted checkpoint.\r\n \"\"\"\r\n vars = {}\r\n reader = tf.train.load_checkpoint(checkpoint_path)\r\n dtypes = reader.get_variable_to_dtype_map()\r\n\r\n for key in dtypes.keys():\r\n new_key = key\r\n if key=='bert/embeddings/layer_normalization/beta' or key=='bert/embeddings/layer_normalization/gamma':\r\n new_key=key.replace('layer_normalization','LayerNorm')\r\n elif re.search('layer_normalization_+\\d+',key):\r\n new_key = re.sub('layer_normalization_+\\d+','LayerNorm',key)\r\n elif re.search('layer_normalization',key):\r\n new_key = re.sub('layer_normalization','LayerNorm',key)\r\n print(new_key)\r\n vars[new_key] = tf.Variable(reader.get_tensor(key))\r\n \r\n return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)", "> So I solved this problem with other people's help. Basically, I need to change the key name in my tf1 checkpoints. Here is the code. For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC\r\n> \r\n> ```\r\n> import re\r\n> def change_name(checkpoint_path, output_prefix):\r\n> ckpt = tf.train.Checkpoint(vars={name: variable}) \r\n> ckpt.restore(converted_ckpt_path)\r\n> \"\"\"\r\n> Args:\r\n> checkpoint_path: Path to the TF1 checkpoint.\r\n> output_prefix: Path prefix to the converted checkpoint.\r\n> \r\n> Returns:\r\n> Path to the converted checkpoint.\r\n> \"\"\"\r\n> vars = {}\r\n> reader = tf.train.load_checkpoint(checkpoint_path)\r\n> dtypes = reader.get_variable_to_dtype_map()\r\n> \r\n> for key in dtypes.keys():\r\n> new_key = key\r\n> if key=='bert/embeddings/layer_normalization/beta' or key=='bert/embeddings/layer_normalization/gamma':\r\n> new_key=key.replace('layer_normalization','LayerNorm')\r\n> elif re.search('layer_normalization_+\\d+',key):\r\n> new_key = re.sub('layer_normalization_+\\d+','LayerNorm',key)\r\n> elif re.search('layer_normalization',key):\r\n> new_key = re.sub('layer_normalization','LayerNorm',key)\r\n> print(new_key)\r\n> vars[new_key] = tf.Variable(reader.get_tensor(key))\r\n> \r\n> return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)\r\n> ```\r\n\r\nDear friend, is there a complete integration of your code in \"convert_bert_original_tf_checkpoint_to_pytorch.py\"? I don't know how to adjust it using your code." ]
"2024-05-08T06:33:57"
"2024-05-16T14:34:03"
"2024-05-16T14:34:03"
MEMBER
null
When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised: ```Python traceback AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` The error traceback: ```Python traceback ~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self) 1391 # `IterableDataset` automatically fills missing columns with None. 1392 # This is done with `_apply_feature_types_on_example`. -> 1393 example = _apply_feature_types_on_example( 1394 example, self.features, token_per_repo_id=self._token_per_repo_id 1395 ) ~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id) 1080 encoded_example = features.encode_example(example) 1081 # Decode example for Audio feature, e.g. -> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 1083 return decoded_example 1084 ~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id) 1974 -> 1975 return { 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] ~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0) 1974 1975 return { -> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] 1978 else value ~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id) 1339 # we pass the token to read and decode files from private repositories in streaming mode 1340 if obj is not None and schema.decode: -> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1342 return obj 1343 ~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id) 187 image = PIL.Image.open(BytesIO(bytes_)) 188 image.load() # to avoid "Too many open files" errors --> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 190 image = PIL.ImageOps.exif_transpose(image) 191 if self.mode and self.mode != image.mode: ~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name) 75 ) 76 return categories[name] ---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'") 78 79 AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` ### Environment info Since datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6881/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
https://api.github.com/repos/huggingface/datasets/issues/6880/events
https://github.com/huggingface/datasets/issues/6880
2,283,278,337
I_kwDODunzps6IGBAB
6,880
Webdataset: KeyError: 'png' on some datasets when streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I have added a model_init function in the PyTorch Trainer to support hp-search. Is it possible to use this instead of changing the `args`? This would make a very big difference between the PT Trainer and TF Trainer.", "OK, I will check this. Thanks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hello did you have any success fixing this? Can I help? I'm on a tight art student collegiate budget and tpu speed would be awesome if not necessary. I've spent like 20-30 hours on fixing the tpu issue myself and no luck. Any help getting run_clm.py on a tpu so I can quickly iterate would be awesome. But generally I'd love to move mainly to tpu but I'm not sure its there yet. New to open source really want to learn as much as possible. Can I help?", "@arccoxx This PR should be closed because we have identified two different issues:\r\n\r\n1. The first one don't come from Transformers but from TensorFlow. To make it short, TPU don't handle `tf.data.Dataset.from_generator`, Google is currently working on it and we have to wait they release the fix once they have one.\r\n2. Currently you cannot train a LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.\r\n\r\nSo for your project the best solution would be to use the PyTorch version that works on TPU and you can train from scratch any LM model.", "> @arccoxx This PR should be closed because we have identified two different issues:\r\n> \r\n> 1. The first one don't come from Transformers but from TensorFlow. To make it short, TPU don't handle `tf.data.Dataset.from_generator`, Google is currently working on it and we have to wait they release the fix once they have one.\r\n> 2. Currently you cannot train a LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.\r\n> \r\n> So for your project the best solution would be to use the PyTorch version that works on TPU and you can train from scratch any LM model.\r\n\r\nI was not able to get any pytorch version to run on xla. Is there any reference notebook that could be linked? I tried finetuning in native pytorch, running (pytorch) tuner, run_language_modeling with multiple transformers library versions 2.1.0-2.9.1, and run_clm with 3.4.0 all with no luck. Ive also tried building a pytorch lightning module and no luck. As the speedup would be that helpful (provided it can handle gpt2 medium) it would be awesome to figure out reduce these compatibility issues. My hope is to then use the tpu in a more complicated model that will use this fine tuned model. Any help would be super appreciated. Thank you!" ]
"2024-05-07T13:09:02"
"2024-05-14T20:34:05"
null
MEMBER
null
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1 ```python >>> from datasets import load_dataset >>> ds = load_dataset("tbone5563/tar_images") Downloading data: 100%  1.41G/1.41G [00:48<00:00, 17.2MB/s] Downloading data: 100%  619M/619M [00:11<00:00, 57.4MB/s] Generating train split:   970/0 [00:02<00:00, 534.94 examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1747 _time = time.time() -> 1748 for key, record in generator: 1749 if max_shard_size is not None and writer._num_bytes > max_shard_size: 7 frames [/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py](https://localhost:8080/#) in _generate_examples(self, tar_paths, tar_iterators) 108 for field_name in image_field_names + audio_field_names: --> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} 110 yield f"{tar_idx}_{example_idx}", example KeyError: 'png' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [<ipython-input-2-8e0fbb7badc9>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("tbone5563/tar_images") [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2607 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1787 1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1789 super()._download_and_prepare( 1790 dl_manager, 1791 verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1120 try: 1121 # Prepare split will record examples associated to the split -> 1122 self._prepare_split(split_generator, **prepare_split_kwargs) 1123 except OSError as e: 1124 raise OSError( [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1625 job_id = 0 1626 with pbar: -> 1627 for job_id, done, content in self._prepare_split_single( 1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1629 ): [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1783 e = e.__context__ -> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1785 1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
null
reopened
null
null
https://api.github.com/repos/huggingface/datasets/issues/6879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6879/comments
https://api.github.com/repos/huggingface/datasets/issues/6879/events
https://github.com/huggingface/datasets/issues/6879
2,282,968,259
I_kwDODunzps6IE1TD
6,879
Batched mapping does not raise an error if values for an existing column are empty
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for the quick reviews @LysandreJik and @sgugger! And yep updating black version did it (I think) thanks." ]
"2024-05-07T11:02:40"
"2024-05-07T11:02:40"
null
NONE
null
### Describe the bug Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised. This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows. ### Steps to reproduce the bug MWE: ``` import datasets data = datasets.Dataset.from_dict({"test": [1]}) def mapping_fn(examples): return {"test": [], "y": [1]} data = data.map(mapping_fn, batched=True) print(len(data)) ``` Note that when returning `"x": []`, the error is raised correctly, also when returning `"test": [1,2]`. ### Expected behavior Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`. Any exception would be acceptable. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6879/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6878/comments
https://api.github.com/repos/huggingface/datasets/issues/6878/events
https://github.com/huggingface/datasets/pull/6878
2,282,879,491
PR_kwDODunzps5uviBh
6,878
Create function to convert to parquet
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=h1) Report\n> Merging [#6878](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `3.21%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6878/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6878 +/- ##\n==========================================\n- Coverage 80.05% 76.84% -3.22% \n==========================================\n Files 157 157 \n Lines 28822 28825 +3 \n==========================================\n- Hits 23074 22150 -924 \n- Misses 5748 6675 +927 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <ø> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <100.00%> (ø)` | |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `16.25% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `27.27% <0.00%> (-61.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `6.71% <0.00%> (-59.71%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=footer). Last update [3726754...06cc500](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "cc @laibamehnaz ", "> cc @laibamehnaz\r\n\r\nThank you :)" ]
"2024-05-07T10:27:07"
"2024-05-16T14:46:44"
"2024-05-16T14:38:23"
MEMBER
null
Analogously with `delete_from_hub`, this PR: - creates the Python function `convert_to_parquet` - makes the corresponding CLI command use that function. This way, the functionality can be used both from a terminal and from a Python console. This PR also implements a test for convert_to_parquet function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6878/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6878", "html_url": "https://github.com/huggingface/datasets/pull/6878", "diff_url": "https://github.com/huggingface/datasets/pull/6878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6878.patch", "merged_at": "2024-05-16T14:38:22" }
https://api.github.com/repos/huggingface/datasets/issues/6877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6877/comments
https://api.github.com/repos/huggingface/datasets/issues/6877/events
https://github.com/huggingface/datasets/issues/6877
2,282,068,337
I_kwDODunzps6IBZlx
6,877
OSError: [Errno 24] Too many open files
{ "login": "loicmagne", "id": 53355258, "node_id": "MDQ6VXNlcjUzMzU1MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loicmagne", "html_url": "https://github.com/loicmagne", "followers_url": "https://api.github.com/users/loicmagne/followers", "following_url": "https://api.github.com/users/loicmagne/following{/other_user}", "gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions", "organizations_url": "https://api.github.com/users/loicmagne/orgs", "repos_url": "https://api.github.com/users/loicmagne/repos", "events_url": "https://api.github.com/users/loicmagne/events{/privacy}", "received_events_url": "https://api.github.com/users/loicmagne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=h1) Report\n> Merging [#6877](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a32d85f0d405be53117b96075eef2875d2185892?el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6877/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6877 +/- ##\n==========================================\n+ Coverage 80.48% 80.65% +0.16% \n==========================================\n Files 157 157 \n Lines 28794 28796 +2 \n==========================================\n+ Hits 23175 23224 +49 \n+ Misses 5619 5572 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=footer). Last update [a32d85f...ddbccd8](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "As a comparision. When running this line on current `master`:\r\n```\r\nTF_CPP_MIN_LOG_LEVEL=3 python examples/benchmarking/run_benchmark_tf.py --models bert-base-uncased --no_memory --batch_sizes 1 --sequence_lengths 128 256 512\r\n```\r\n\r\none gets the following results: \r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 1 128 0.006 \r\n bert-base-uncased 1 256 0.009 \r\n bert-base-uncased 1 512 0.017 \r\n--------------------------------------------------------------------------------\r\n\r\n==================== ENVIRONMENT INFORMATION ====================\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.3.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-09-01\r\n- time: 11:23:30.836691\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 2\r\n- use_tpu: False\r\n```\r\n\r\nfor a TITAN RTX GPU.\r\n\r\nWhen running the above line on this branch, one gets the following results:\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 1 128 0.006 \r\n bert-base-uncased 1 256 0.008 \r\n bert-base-uncased 1 512 0.016 \r\n--------------------------------------------------------------------------------\r\n\r\n==================== ENVIRONMENT INFORMATION ====================\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.3.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-09-01\r\n- time: 11:28:12.021389\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 2\r\n- use_tpu: False\r\n```\r\n\r\nSo, I cannot see a real difference here :-/ @jlei2", "I will see whether the benchmark results are better on a GPU-V100 GPU. \r\n@jlei2 - could you post the code you used to benchmark HF Bert vs. Google Bert ? This would help a lot for reproducability.", "oops didn't see it was [WIP]", "@patrickvonplaten can you update your code with the version given by @jlei2 [here](https://github.com/jlei2/transformers/pull/2) when you have time please. Thanks a lot!" ]
"2024-05-07T01:15:09"
"2024-05-13T15:36:08"
"2024-05-13T13:01:55"
NONE
null
### Describe the bug I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb) When trying to load it using the `load_dataset` function I get the following error ```python >>> from datasets import load_dataset >>> d = load_dataset('mteb/biblenlp-corpus-mmteb') Downloading readme: 100%|████████████████████████████████████████████████████████████████████████| 201k/201k [00:00<00:00, 1.07MB/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 1069.15it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 436182.33it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 2228.75it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 646478.73it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 831032.24it/s] Resolving data files: 100%|███████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 517645.51it/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:33<00:00, 24.87files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 27.48files/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████| 828/828 [00:30<00:00, 26.94files/s] Generating train split: 1571592 examples [00:03, 461438.97 examples/s] Generating test split: 11163 examples [00:00, 118190.72 examples/s] Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single for _, table in generator: File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables with open(file, "rb") as f: ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open return self.file.open() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__ self._open() File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open self.f = open(self.path, mode=self.mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618' The above exception was the direct cause of the following exception: Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir yield tmp_dir File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__ self.gen.throw(value) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir shutil.rmtree(tmp_dir) File "/usr/lib/python3.12/shutil.py", line 785, in rmtree _rmtree_safe_fd(fd, path, onexc) File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd onexc(os.scandir, path, err) File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: ^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete' ``` I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error ### Steps to reproduce the bug ```python from datasets import load_dataset d = load_dataset('mteb/biblenlp-corpus-mmteb') ``` ### Expected behavior Load the dataset without error ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6877/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6876/comments
https://api.github.com/repos/huggingface/datasets/issues/6876/events
https://github.com/huggingface/datasets/pull/6876
2,281,450,743
PR_kwDODunzps5uqs46
6,876
Unpin hfh
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been stale for 1 month." ]
"2024-05-06T18:10:49"
"2024-05-27T10:20:42"
"2024-05-27T10:14:40"
MEMBER
null
Needed to use those in dataset-viewer: - dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't span the hub with /paths-info requests - dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write too big logs in the viewer close https://github.com/huggingface/datasets/issues/6863
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6876/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6876", "html_url": "https://github.com/huggingface/datasets/pull/6876", "diff_url": "https://github.com/huggingface/datasets/pull/6876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6876.patch", "merged_at": "2024-05-27T10:14:40" }
https://api.github.com/repos/huggingface/datasets/issues/6875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6875/comments
https://api.github.com/repos/huggingface/datasets/issues/6875/events
https://github.com/huggingface/datasets/pull/6875
2,281,428,826
PR_kwDODunzps5uqoJ_
6,875
Shorten long logs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-06T17:57:07"
"2024-05-07T12:31:46"
"2024-05-07T12:25:45"
MEMBER
null
Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly). In that case we should still be able to log something readable
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6875/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6875", "html_url": "https://github.com/huggingface/datasets/pull/6875", "diff_url": "https://github.com/huggingface/datasets/pull/6875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6875.patch", "merged_at": "2024-05-07T12:25:45" }
https://api.github.com/repos/huggingface/datasets/issues/6874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6874/comments
https://api.github.com/repos/huggingface/datasets/issues/6874/events
https://github.com/huggingface/datasets/pull/6874
2,280,717,233
PR_kwDODunzps5uoOk-
6,874
Use pandas ujson in JSON loader to improve performance
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "What costs the most is the gradient computation, storing few predictions is ok general. I can run a sequence classification training with a batch of 32 of 128 sequence length and an acummulation of 3 with a 8GB GPU.\r\n\r\nDid you encounter during your experiments a memory issue? If yes, let me know and I will look at it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-06T12:01:27"
"2024-05-17T16:28:29"
"2024-05-17T16:22:27"
MEMBER
null
Use pandas ujson in JSON loader to improve performance. Note that `datasets` has `pandas` as required dependency. And `pandas` includes `ujson` in `pd.io.json.ujson_loads`. Fix #6867. CC: @natolambert
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6874/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6874/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6874", "html_url": "https://github.com/huggingface/datasets/pull/6874", "diff_url": "https://github.com/huggingface/datasets/pull/6874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6874.patch", "merged_at": "2024-05-17T16:22:27" }
https://api.github.com/repos/huggingface/datasets/issues/6873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6873/comments
https://api.github.com/repos/huggingface/datasets/issues/6873/events
https://github.com/huggingface/datasets/pull/6873
2,280,463,182
PR_kwDODunzps5unXnq
6,873
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Indeed this seems very problematic. Let's look into it cc @sgugger ", "Some hints - The main process takes 3.5x more RAM than the other processes individually.", "Do you have a commit id that gives the first graph, so we can look into the diff?", "I think I'm having a similar issue. I'm using `n1-highmem-16 (16 vCPUs, 104 GB memory)` with `v3-8` TPU for pre-training a RoBERTa model on 24GB text data.\r\n\r\nI was able to load the dataset using `nlp` (https://github.com/huggingface/nlp/issues/532), but it eats up all the available memory during training.\r\n\r\n<img width=\"860\" alt=\"Screen Shot 2020-09-01 at 9 19 17 PM\" src=\"https://user-images.githubusercontent.com/20531705/91850804-213cb700-ec99-11ea-853a-2e8a433bfbff.png\">\r\n\r\n(master branch on Aug 25 installed with `pip install git+https://github.com/huggingface/transformers`. Not sure how to check a commit id...)\r\n ", "Same question. I was wondering are there any strategies implemented to save memory?\r\nSomething like lazyDataloader?", "@sgugger I retried a run with the commit id 86c07e634f3624cdf3f9e4e81ca53b808c4b22c6 (20 Aug) and it seems to not have this memory blowup that we see on the current master \r\n![image](https://user-images.githubusercontent.com/1271289/91885268-5a930980-ec3c-11ea-93e3-3f07d6f1af97.png)\r\n", "@shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.\r\nThere are two options to save memory-\r\n1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.\r\n2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.", "Thanks for your reply!\r\nWould you like to share your code or are there any open-sourced code I can refer to?\r\nThanks!", "Sure, this is in the `__init__` function of my Dataset function. As compared to Huggingface TextDataset, this particular way sped up training by 20% for me while using around 1/7 memory and generating features faster (due to less tail-latency in multiprocessing and not writing and reading features from disk)\r\n```\r\n file_paths_copy = copy.deepcopy(file_paths)\r\n file_paths_copy = sorted(file_paths_copy) #multiprocess env, we want all processes to have the files in the same order\r\n self.redis = redis.Redis(unix_socket_path=\"/tmp/redis.sock\")\r\n self.pipe = self.redis.pipeline()\r\n file_lineno_map = {}\r\n line_counter = 0\r\n for file in file_paths_copy:\r\n num_lines = count_lines(file)\r\n\r\n file_lineno_map[file] = line_counter\r\n line_counter += num_lines\r\n # This is so that lines in each file gets assigned a unique line number in a multi-process env\r\n self.num_examples = line_counter\r\n for index, file_path in enumerate(file_paths_copy): # Can be multiple files\r\n if index % xm.xrt_world_size() == xm.get_ordinal():\r\n # If this process is assigned to process the following file, so we can use 8 cpu cores to load data parallely\r\n\r\n logger.info(\"Creating features from dataset file at %s\", file_path)\r\n with open(file_path, encoding=\"utf-8\") as f:\r\n for line_num, line in enumerate(f.read().splitlines()): # Text to Text file where each file is an example and source and target is separated by a tab symbol\r\n if (len(line) > 0 and not line.isspace()):\r\n if line.find('\\t') == -1:\r\n logger.warning(\r\n f\"Encountered a line without tab separator in file {file_path} line {line_num+1}\"\r\n )\r\n continue\r\n input, output = line.split('\\t')\r\n features = self.text_pair_to_features(input, output)\r\n\r\n key = line_num + file_lineno_map[\r\n file_path] if not self.val else \"val-\" + str(\r\n line_num + file_lineno_map[file_path]) # The name of the redis key\r\n\r\n self.pipe.set(key, pickle.dumps(features))\r\n if line_num % self.num_operations_pipelined == 1:\r\n self.pipe.execute() # So that we only dump to redis as a batch, can speed up writing\r\n self.pipe.execute()\r\n if is_torch_tpu_available():\r\n xm.rendezvous(tag=\"featuresGenerated\") # So that the multi-process environment all wait for each other before doing anything else\r\n```\r\nWith the `__getitem__` function being\r\n```\r\n def __getitem__(self, i) -> Dict[str, torch.Tensor]:\r\n if self.val:\r\n key = f\"val-{i}\"\r\n else:\r\n key = i\r\n example = pickle.loads(self.redis.get(key))\r\n return {\"input_ids\": example[0], \"attention_masks\": example[1], \"labels\": example[2]}\r\n```", "Thanks so much!", "Cool dataset!\r\n\r\n`Seq2SeqDataset` is also lazy, but no redis. I wonder the speed difference: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L159\r\n\r\n@patil-suraj is this going to be an issue for `Seq2SeqTrainer`? We can't read all examples into memory for MT.", "@sshleifer Not sure. I have yet to experiment with `Seq2SeqTrainer` on TPU so can't say much. But I have managed to successfully train t5-base and on TPU using `Trainer` with lazy dataset.", "@sshleifer @patil-suraj I studied the linecache way of doing things and the reasons for not going with linecache for me were\r\n- Our data files are on mounted network disks so first byte access time would be too large.\r\n- Data sharded in multiple files leading to linecache being less effective as compared to just one file.\r\n- I also suspect how much would linecache help because we are not reading lines sequentially where caching would have helped but rather reading random lines where reading a whole block of text from disk would still mean that on average we only use only one line from the block.\r\n- I am also generally wary of involving disks in the critical path of the training loop as disks are very slow. Given that TPU requires higher input feed rate and evidence that Huggingface Trainer only uses a single CPU worker rather than many which could have helped with CPU generating features from disk in parallel while the TPU was working. See https://github.com/huggingface/transformers/issues/6316 . I believe if multiple workers were allowed in DataLoader then loading features from disk would be a valid solution.", "@misrasaurabh1 We just merged a simple fix that was obviously leaking memory for training (non-detached tensors) and that came from a recent change, so it might very well be the source of your leaks. Could you confirm whether or not current master has the leak or not? If so, using the same fix in the evaluation loop should also fix the eval memory leak we currently have.", "Yes, with the latest master the memory leak during training is not there anymore! Memory usage seems to be constant during training.\r\n\r\n![image](https://user-images.githubusercontent.com/1271289/92520308-40bf6c80-f1d0-11ea-9ef0-0edabd646527.png)\r\n\r\nAlthough if the same `.detach()` method would fix the evaluation memory leak, that would be huge! I could go down from a 32-CPU 208GB machine I am using right now to something like 16-CPU 64GB machine resulting in big monetary savings over time.", "Will look at the evaluation leak a bit more. From a first read, it looks like everything is properly detached, so it seems like this leak has another cause.\r\n\r\nThanks a lot for checking!", "\r\n\r\n> @shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.\r\n> There are two options to save memory-\r\n> \r\n> 1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.\r\n> 2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.\r\n\r\nRecently I had the same issue and such behavior is on GPU as well. One good solution is to use memory-mapped dataset, which is in spirit similar to Option 1 here. I used the awesome [huggingface/datasets](https://github.com/huggingface/datasets) library which provides memory-mapped dataset class automatically through Apache Arrow and it is fairly easy to use. I reduced my RAM usage from 90G to 6G and it won't grow with the dataset size.", "Is there any update on this? Is the memory leak during evaluation fixed?", "@sgugger Is the memory leak during evaluation fixed by https://github.com/huggingface/transformers/pull/7767 ?", "I don't know, as I have not had time to investigate the leak during evaluation on TPUs yet.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-06T09:43:18"
"2024-05-06T10:03:19"
"2024-05-06T09:57:12"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6873/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6873", "html_url": "https://github.com/huggingface/datasets/pull/6873", "diff_url": "https://github.com/huggingface/datasets/pull/6873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6873.patch", "merged_at": "2024-05-06T09:57:12" }
https://api.github.com/repos/huggingface/datasets/issues/6872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6872/comments
https://api.github.com/repos/huggingface/datasets/issues/6872/events
https://github.com/huggingface/datasets/pull/6872
2,280,438,432
PR_kwDODunzps5unSPA
6,872
Release 2.19.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-06T09:29:15"
"2024-05-06T09:35:33"
"2024-05-06T09:35:32"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6872/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6872", "html_url": "https://github.com/huggingface/datasets/pull/6872", "diff_url": "https://github.com/huggingface/datasets/pull/6872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6872.patch", "merged_at": "2024-05-06T09:35:32" }
https://api.github.com/repos/huggingface/datasets/issues/6871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6871/comments
https://api.github.com/repos/huggingface/datasets/issues/6871/events
https://github.com/huggingface/datasets/pull/6871
2,280,102,869
PR_kwDODunzps5umJS6
6,871
Fix download for dict of dicts of URLs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-06T06:06:52"
"2024-05-06T09:32:03"
"2024-05-06T09:25:52"
MEMBER
null
Fix download for a dict of dicts of URLs when batched (default), introduced by: - #6794 This PR also implements regression tests. Fix #6869, fix #6850.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6871/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6871/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6871", "html_url": "https://github.com/huggingface/datasets/pull/6871", "diff_url": "https://github.com/huggingface/datasets/pull/6871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6871.patch", "merged_at": "2024-05-06T09:25:52" }
https://api.github.com/repos/huggingface/datasets/issues/6870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6870/comments
https://api.github.com/repos/huggingface/datasets/issues/6870/events
https://github.com/huggingface/datasets/pull/6870
2,280,084,008
PR_kwDODunzps5umFOL
6,870
Update tqdm >= 4.66.3 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, we have this [test](https://github.com/huggingface/transformers/blob/master/tests/test_configuration_auto.py#L45) to prevent exactly this. In what situation did you face an issue?", "@LysandreJik There is no problem for pretrained models of huggingface transformers, because the config class of them are inherited from \"PretrainedConfig\". However, for users who want to add new models, their self-defined config class may be inherited from config class of some existing pretrained models. For example, I am trying to add a new model based on \"BART\" and my NewBartConig is inherited from \"BartConig\". My new tokenizer will not be used because a “NewBartConig” object is an instance of \"BartConig\" and bart tokenizer will be used incorrectly.", "Yes, but we have similar issues with models, for example the `RobertaModel` inherits from `BertModel`. The test I mentioned above checks that (the example here is for configurations but we have the same test for models and tokenizers).\r\n\r\nCurrently the way to make sure your tokenizer is used and not the one on which it's depending is to put your tokenizer above the one it's inheriting from in the mapping. The for loop will then see this one first and use this one instead of the next one.", "@LysandreJik Changing the order of items in TOKENIZER_MAPPING can solve the problem indeed. But get the rid of mapping order is more user-friendly, right? Close the pull request if you don't think the pr is necessary. Thanks for the review" ]
"2024-05-06T05:49:36"
"2024-05-06T06:08:06"
"2024-05-06T06:02:00"
MEMBER
null
Update tqdm >= 4.66.3 to fix vulnerability,
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6870/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6870", "html_url": "https://github.com/huggingface/datasets/pull/6870", "diff_url": "https://github.com/huggingface/datasets/pull/6870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6870.patch", "merged_at": "2024-05-06T06:02:00" }
https://api.github.com/repos/huggingface/datasets/issues/6869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
https://api.github.com/repos/huggingface/datasets/issues/6869/events
https://github.com/huggingface/datasets/issues/6869
2,280,048,297
I_kwDODunzps6H5sap
6,869
Download is broken for dict of dicts: FileNotFoundError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-06T05:13:36"
"2024-05-06T09:25:53"
"2024-05-06T09:25:53"
MEMBER
null
It seems there is a bug when downloading a dict of dicts of URLs introduced by: - #6794 ## Steps to reproduce the bug: ```python from datasets import DownloadManager dl_manager = DownloadManager() paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) ``` Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-7-0e0d76d25b09> in <module> ----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) .../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls) 255 start_time = datetime.now() 256 with stack_multiprocessing_download_progress_bars(): --> 257 downloaded_path_or_paths = map_nested( 258 download_func, 259 url_or_urls, .../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1) 507 iterable = list(iter_batched(iterable, batch_size)) --> 508 mapped = [ 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 507 iterable = list(iter_batched(iterable, batch_size)) 508 mapped = [ --> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 511 ] .../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config) 311 ) 312 else: --> 313 return [ 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames .../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0) 312 else: 313 return [ --> 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames 316 ] .../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config) 321 # append the relative path to the base_path 322 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 323 out = cached_path(url_or_filename, download_config=download_config) 324 out = tracked_str(out) 325 out.set_origin(url_or_filename) .../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 220 elif is_local_path(url_or_filename): 221 # File, but it doesn't exist. --> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 223 else: 224 # Something unknown FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist ``` Related to: - #6850
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6868/comments
https://api.github.com/repos/huggingface/datasets/issues/6868/events
https://github.com/huggingface/datasets/issues/6868
2,279,385,159
I_kwDODunzps6H3KhH
6,868
datasets.BuilderConfig does not work.
{ "login": "jdm4pku", "id": 148830652, "node_id": "U_kgDOCN75vA", "avatar_url": "https://avatars.githubusercontent.com/u/148830652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdm4pku", "html_url": "https://github.com/jdm4pku", "followers_url": "https://api.github.com/users/jdm4pku/followers", "following_url": "https://api.github.com/users/jdm4pku/following{/other_user}", "gists_url": "https://api.github.com/users/jdm4pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jdm4pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdm4pku/subscriptions", "organizations_url": "https://api.github.com/users/jdm4pku/orgs", "repos_url": "https://api.github.com/users/jdm4pku/repos", "events_url": "https://api.github.com/users/jdm4pku/events{/privacy}", "received_events_url": "https://api.github.com/users/jdm4pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I run the same demo program on another server. The program can work properly. " ]
"2024-05-05T08:08:55"
"2024-05-05T12:15:02"
"2024-05-05T12:15:01"
NONE
null
### Describe the bug I custom a BuilderConfig and GeneratorBasedBuilder. Here is the code for BuilderConfig ``` class UIEConfig(datasets.BuilderConfig): def __init__( self, *args, data_dir=None, instruction_file=None, instruction_strategy=None, task_config_dir=None, num_examples=None, max_num_instances_per_task=None, max_num_instances_per_eval_task=None, over_sampling=None, **kwargs ): super().__init__(*args, **kwargs) self.data_dir = data_dir self.num_examples = num_examples self.over_sampling = over_sampling self.instructions = self._parse_instruction(instruction_file) self.task_configs = self._parse_task_config(task_config_dir) self.instruction_strategy = instruction_strategy self.max_num_instances_per_task = max_num_instances_per_task self.max_num_instances_per_eval_task = max_num_instances_per_eval_task ``` Besides, here is the code for GeneratorBasedBuilder. ``` class UIEInstructions(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("2.0.0") BUILDER_CONFIG_CLASS = UIEConfig BUILDER_CONFIGS = [ UIEConfig(name="default", description="Default config for NaturalInstructions") ] DEFAULT_CONFIG_NAME = "default" ``` Here is the load_dataset ``` raw_datasets = load_dataset( os.path.join(CURRENT_DIR, "uie_dataset.py"), data_dir=data_args.data_dir, task_config_dir=data_args.task_config_dir, instruction_file=data_args.instruction_file, instruction_strategy=data_args.instruction_strategy, cache_dir=data_cache_dir, # for debug, change dataset size, otherwise open it max_num_instances_per_task=data_args.max_num_instances_per_task, max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task, num_examples=data_args.num_examples, over_sampling=data_args.over_sampling ) ``` Finally, I met the error. ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` I debugged the code, but I find the parameters added by me may not work. ### Steps to reproduce the bug https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py ### Expected behavior ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` ### Environment info torch 2.3.0+cu118 transformers 4.40.1 python 3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6868/timeline
null
not_planned
null
null
https://api.github.com/repos/huggingface/datasets/issues/6867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
https://api.github.com/repos/huggingface/datasets/issues/6867/events
https://github.com/huggingface/datasets/issues/6867
2,279,059,787
I_kwDODunzps6H17FL
6,867
Improve performance of JSON loader
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=h1) Report\n> Merging [#6867](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `1.75%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6867/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6867 +/- ##\n==========================================\n- Coverage 79.91% 78.16% -1.76% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n- Hits 23012 22508 -504 \n- Misses 5783 6287 +504 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.54% <0.00%> (-41.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-19.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=footer). Last update [59a6a32...e59e26a](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-04T15:04:16"
"2024-05-17T16:22:28"
"2024-05-17T16:22:28"
MEMBER
null
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance. The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714 > There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performant: > - https://github.com/ultrajson/ultrajson#benchmarks > - https://github.com/ijl/orjson#performance I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library. However: - We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson` - Even if the above were not the case, we always could include `ujson` as an optional extra dependency, and check at runtime if it is installed to decide which library to use, either json or ujson
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6866/comments
https://api.github.com/repos/huggingface/datasets/issues/6866/events
https://github.com/huggingface/datasets/issues/6866
2,278,736,221
I_kwDODunzps6H0sFd
6,866
DataFilesNotFoundError for datasets in the open-llm-leaderboard
{ "login": "jerome-white", "id": 6140840, "node_id": "MDQ6VXNlcjYxNDA4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerome-white", "html_url": "https://github.com/jerome-white", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "repos_url": "https://api.github.com/users/jerome-white/repos", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=h1) Report\n> Merging [#6866](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6866/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6866 +/- ##\n==========================================\n- Coverage 79.91% 79.67% -0.25% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n- Hits 23012 22942 -70 \n- Misses 5783 5853 +70 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=footer). Last update [59a6a32...b4f1cad](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "The next step is to look for other places those class attributes are defined and remove them:\r\n\r\n\r\n```bash\r\n$ git grep test_pruning | grep tf\r\ntests/test_modeling_tf_distilbert.py: test_pruning = True\r\ntests/test_modeling_tf_longformer.py: test_pruning = False # pruning is not supported\r\ntests/test_modeling_tf_transfo_xl.py: test_pruning = False\r\ntests/test_modeling_tf_xlnet.py: test_pruning = False\r\n```\r\n```bash\r\n$ git grep test_torchscript | grep tf\r\ntests/test_modeling_tf_common.py: test_torchscript = True\r\ntests/test_modeling_tf_distilbert.py: test_torchscript = True\r\ntests/test_modeling_tf_longformer.py: test_torchscript = False\r\ntests/test_modeling_tf_transfo_xl.py: test_torchscript = False\r\n```\r\n", "Thanks for the support.\r\nI also found `test_head_masking` which was unused. So deleted it too. Let me know if you didn't want that to happen.\r\n\r\n```bash\r\n$ git grep -e \"test_head\" | grep tf\r\ntests/test_modeling_tf_distilbert.py: test_head_masking = True\r\ntests/test_modeling_tf_longformer.py: test_headmasking = False # head masking is not supported\r\n```\r\n\r\nThanks\r\n\r\nPS: suggestion for any other issue, which I can pick up would be great. I am looking under label `help wanted`, etc" ]
"2024-05-04T04:59:00"
"2024-05-14T08:09:56"
"2024-05-14T08:09:56"
NONE
null
### Describe the bug When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started seeing this. ### Steps to reproduce the bug This snippet has three cells: 1. Loads the modules 2. Tries to get config names 3. Tries to load the dataset I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard should likely have no dataset issues: ```python In [1]: from datasets import load_dataset, get_dataset_config_names In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea ...: -72b-v0.5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 291 def get_dataset_config_names( 292 path: str, 293 revision: Optional[Union[str, Version]] = None, (...) 298 **download_kwargs, 299 ): 300 """Get the list of available config names for a particular dataset. 301 302 Args: (...) 345 ``` 346 """ --> 347 dataset_module = dataset_module_factory( 348 path, 349 revision=revision, 350 download_config=download_config, 351 download_mode=download_mode, 352 dynamic_modules_path=dynamic_modules_path, 353 data_files=data_files, 354 **download_kwargs, 355 ) 356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path)) 357 return list(builder_cls.builder_configs.keys()) or [ 358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default") 359 ] File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b- ...: v0.5", "harness_winogrande_5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[3], line 1 ----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2582 verification_mode = VerificationMode( 2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2584 ) 2586 # Create a dataset builder -> 2587 builder_instance = load_dataset_builder( 2588 path=path, 2589 name=name, 2590 data_dir=data_dir, 2591 data_files=data_files, 2592 cache_dir=cache_dir, 2593 features=features, 2594 download_config=download_config, 2595 download_mode=download_mode, 2596 revision=revision, 2597 token=token, 2598 storage_options=storage_options, 2599 trust_remote_code=trust_remote_code, 2600 _require_default_config_name=name is None, 2601 **config_kwargs, 2602 ) 2604 # Return iterable dataset in case of streaming 2605 if streaming: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2257 download_config = download_config.copy() if download_config else DownloadConfig() 2258 download_config.storage_options.update(storage_options) -> 2259 dataset_module = dataset_module_factory( 2260 path, 2261 revision=revision, 2262 download_config=download_config, 2263 download_mode=download_mode, 2264 data_dir=data_dir, 2265 data_files=data_files, 2266 cache_dir=cache_dir, 2267 trust_remote_code=trust_remote_code, 2268 _require_default_config_name=_require_default_config_name, 2269 _require_custom_configs=bool(config_kwargs), 2270 ) 2271 # Get dataset builder class from the processing script 2272 builder_kwargs = dataset_module.builder_kwargs File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 ``` ### Expected behavior No exceptions from `get_dataset_config_names` or `load_dataset` ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6866/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6865/comments
https://api.github.com/repos/huggingface/datasets/issues/6865/events
https://github.com/huggingface/datasets/issues/6865
2,277,304,832
I_kwDODunzps6HvOoA
6,865
Example on Semantic segmentation contains bug
{ "login": "ducha-aiki", "id": 4803565, "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ducha-aiki", "html_url": "https://github.com/ducha-aiki", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "There are no pre-trained reformer weights yet -> so that's a no sadly", "Following this issue for updates.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-03T09:40:12"
"2024-05-03T09:40:12"
null
NONE
null
### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms. Specifically, as one can see in screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee"> Original example with `albumentations` is correct <img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3"> That is because `torch vision.transforms.Resize` interpolates with bilinear everything which is wrong when used for segmentation labels - you just cannot mix them. Overall, `torchvision.transforms` is designed for classification only and cannot be used to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use `v2` version of transforms and convert the segmentation labels to https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object ### Steps to reproduce the bug Go to the website. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef"> https://huggingface.co/docs/datasets/en/semantic_segmentation ### Expected behavior Results, similar to `albumentation`. Or remove the torch vision part altogether. Or use `kornia` instead. ### Environment info Irrelevant
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6865/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6864/comments
https://api.github.com/repos/huggingface/datasets/issues/6864/events
https://github.com/huggingface/datasets/issues/6864
2,276,986,981
I_kwDODunzps6HuBBl
6,864
Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
{ "login": "vinodrajendran001", "id": 5783246, "node_id": "MDQ6VXNlcjU3ODMyNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5783246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinodrajendran001", "html_url": "https://github.com/vinodrajendran001", "followers_url": "https://api.github.com/users/vinodrajendran001/followers", "following_url": "https://api.github.com/users/vinodrajendran001/following{/other_user}", "gists_url": "https://api.github.com/users/vinodrajendran001/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinodrajendran001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinodrajendran001/subscriptions", "organizations_url": "https://api.github.com/users/vinodrajendran001/orgs", "repos_url": "https://api.github.com/users/vinodrajendran001/repos", "events_url": "https://api.github.com/users/vinodrajendran001/events{/privacy}", "received_events_url": "https://api.github.com/users/vinodrajendran001/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The model architecture is simple:\r\n![image](https://user-images.githubusercontent.com/38073340/91795291-db0f3580-ec4f-11ea-814f-2342ecfd1b1a.png)\r\n", "Sorry for the late reply. This is because you did not respect the signature of `TFBertMainLayer` in order to properly use it you can do:\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFBertForSequenceClassification\r\n\r\na = tf.constant([[1,2,3,4,5]])\r\nb = tf.constant([[1,1,1,1,1]])\r\ninp = {\"input_ids\": a, \"attention_mask\": b}\r\nmodel = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\nmodel._saved_model_inputs_spec = None\r\nmodel._set_save_spec(inp)\r\ntf.saved_model.save(model, \"/tmp\")\r\nmodel = tf.keras.models.load_model(\"/tmp\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-03T06:03:30"
"2024-05-06T06:36:42"
"2024-05-06T06:36:41"
NONE
null
### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing in Huggingface Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]') ``` ### Expected behavior DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed ### Environment info Nothing to do with versions
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6864/timeline
null
not_planned
null
null
https://api.github.com/repos/huggingface/datasets/issues/6863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6863/comments
https://api.github.com/repos/huggingface/datasets/issues/6863/events
https://github.com/huggingface/datasets/issues/6863
2,276,977,534
I_kwDODunzps6Ht-t-
6,863
Revert temporary pin huggingface-hub < 0.23.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I think that's reasonable, the point of `skip_special_tokens` isn't to skip unknown tokens. cf @mfuntowicz @thomwolf @n1t0 ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-03T05:53:55"
"2024-05-27T10:14:41"
"2024-05-27T10:14:41"
MEMBER
null
Revert temporary pin huggingface-hub < 0.23.0 introduced by - #6861 once the following issue is fixed and released: - huggingface/transformers#30618
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6863/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6862/comments
https://api.github.com/repos/huggingface/datasets/issues/6862/events
https://github.com/huggingface/datasets/pull/6862
2,276,763,745
PR_kwDODunzps5ubOoL
6,862
Issue 6598: load_dataset broken for data_files on s3
{ "login": "matstrand", "id": 544843, "node_id": "MDQ6VXNlcjU0NDg0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/544843?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matstrand", "html_url": "https://github.com/matstrand", "followers_url": "https://api.github.com/users/matstrand/followers", "following_url": "https://api.github.com/users/matstrand/following{/other_user}", "gists_url": "https://api.github.com/users/matstrand/gists{/gist_id}", "starred_url": "https://api.github.com/users/matstrand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matstrand/subscriptions", "organizations_url": "https://api.github.com/users/matstrand/orgs", "repos_url": "https://api.github.com/users/matstrand/repos", "events_url": "https://api.github.com/users/matstrand/events{/privacy}", "received_events_url": "https://api.github.com/users/matstrand/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=h1) Report\n> Merging [#6862](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/367235ee52537ff7cada5e1c5c41cdd78731f092?el=desc) will **increase** coverage by `3.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6862/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6862 +/- ##\n==========================================\n+ Coverage 76.27% 80.04% +3.77% \n==========================================\n Files 157 157 \n Lines 28795 28794 -1 \n==========================================\n+ Hits 21963 23049 +1086 \n+ Misses 6832 5745 -1087 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <ø> (-0.70%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=footer). Last update [367235e...b1b2b17](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-03T01:43:47"
"2024-05-03T09:04:55"
null
NONE
null
Fixes huggingface/datasets/issues/6598 I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue. I encountered this issue while following the Hugging Face documentation, trying to perform GPT-2 fine-tuning using `run_clm.py` on SageMaker with a data file stored on S3. MRE: ``` pip install "datasets[s3]" python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': 's3://noaa-gsod-pds/2024/A5125600451.csv'})" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6862/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6862", "html_url": "https://github.com/huggingface/datasets/pull/6862", "diff_url": "https://github.com/huggingface/datasets/pull/6862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6862.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6861/comments
https://api.github.com/repos/huggingface/datasets/issues/6861/events
https://github.com/huggingface/datasets/pull/6861
2,275,988,990
PR_kwDODunzps5uYkMy
6,861
Fix CI by temporarily pinning huggingface-hub < 0.23.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=h1) Report\n> Merging [#6861](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/431ab19d7a467905018b165bc29b2a1130c1b188?el=desc) will **increase** coverage by `3.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6861/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6861 +/- ##\n==========================================\n+ Coverage 76.81% 80.20% +3.38% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n+ Hits 22118 23094 +976 \n+ Misses 6677 5701 -976 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.08% <0.00%> (-0.51%)` | :arrow_down: |\n| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=footer). Last update [431ab19...39d237c](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-02T16:40:04"
"2024-05-02T16:59:42"
"2024-05-02T16:53:42"
MEMBER
null
As a hotfix for CI, temporarily pin `huggingface-hub` upper version Fix #6860. Revert once root cause is fixed, see: - https://github.com/huggingface/transformers/issues/30618
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6861/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6861", "html_url": "https://github.com/huggingface/datasets/pull/6861", "diff_url": "https://github.com/huggingface/datasets/pull/6861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6861.patch", "merged_at": "2024-05-02T16:53:42" }
https://api.github.com/repos/huggingface/datasets/issues/6860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6860/comments
https://api.github.com/repos/huggingface/datasets/issues/6860/events
https://github.com/huggingface/datasets/issues/6860
2,275,537,137
I_kwDODunzps6HofDx
6,860
CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download"
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Note that there are multiple frameworks that provide generic training loops. The goal of `Trainer` (I'm assuming you're talking about it since there is no `train.py` file) is not to replace them or compete with them but to provide an easy way to train and finetune Transformers models. Those models don't take nested inputs, so Trainer does not support this. Those models are expected to return the loss as the first item of their output, so Trainer expects it too.\r\n\r\nMaking Trainer more easily customizable by providing better hooks for subclassing (your use case could be done by overriding the two private methods you mention for instance) is something we are working on, but we won't have a base Trainer that is too generic, it will remain customized to the models the library provides.", "Thank you for your consideration and comments! ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-02T13:24:17"
"2024-05-02T16:53:45"
"2024-05-02T16:53:45"
MEMBER
null
CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0 ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6860/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6859/comments
https://api.github.com/repos/huggingface/datasets/issues/6859/events
https://github.com/huggingface/datasets/pull/6859
2,274,996,774
PR_kwDODunzps5uVIoZ
6,859
Support folder-based datasets with large metadata.jsonl
{ "login": "gbenson", "id": 580564, "node_id": "MDQ6VXNlcjU4MDU2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gbenson", "html_url": "https://github.com/gbenson", "followers_url": "https://api.github.com/users/gbenson/followers", "following_url": "https://api.github.com/users/gbenson/following{/other_user}", "gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}", "starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gbenson/subscriptions", "organizations_url": "https://api.github.com/users/gbenson/orgs", "repos_url": "https://api.github.com/users/gbenson/repos", "events_url": "https://api.github.com/users/gbenson/events{/privacy}", "received_events_url": "https://api.github.com/users/gbenson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=h1) Report\n> Merging [#6859](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbdba0a76d70ff347884cbe62e0f13de903d84c7?el=desc) will **increase** coverage by `2.94%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6859/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6859 +/- ##\n==========================================\n+ Coverage 77.22% 80.17% +2.94% \n==========================================\n Files 157 157 \n Lines 28793 28793 \n==========================================\n+ Hits 22235 23084 +849 \n+ Misses 6558 5709 -849 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=footer). Last update [bbdba0a...b8eb2a3](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-02T09:07:26"
"2024-05-02T09:07:26"
null
NONE
null
I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests. ``` >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="data-for-upload") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/path/to/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( ... File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata return paj.read_json(f) File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6859/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6859", "html_url": "https://github.com/huggingface/datasets/pull/6859", "diff_url": "https://github.com/huggingface/datasets/pull/6859.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6859.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6858/comments
https://api.github.com/repos/huggingface/datasets/issues/6858/events
https://github.com/huggingface/datasets/issues/6858
2,274,917,185
I_kwDODunzps6HmHtB
6,858
Segmentation fault
{ "login": "scampion", "id": 554155, "node_id": "MDQ6VXNlcjU1NDE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scampion", "html_url": "https://github.com/scampion", "followers_url": "https://api.github.com/users/scampion/followers", "following_url": "https://api.github.com/users/scampion/following{/other_user}", "gists_url": "https://api.github.com/users/scampion/gists{/gist_id}", "starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scampion/subscriptions", "organizations_url": "https://api.github.com/users/scampion/orgs", "repos_url": "https://api.github.com/users/scampion/repos", "events_url": "https://api.github.com/users/scampion/events{/privacy}", "received_events_url": "https://api.github.com/users/scampion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Indeed! Do you want to open a PR to fix this?", "@LysandreJik \r\n\r\nI can do that. However @patrickvonplaten has already self-assigned for this. How do you think, @patrickvonplaten?", "Hey @chiapas, it would be great if you can open a PR for it :-) ", "Hi @patrickvonplaten , OK, that would be my first contribution to transformers :)" ]
"2024-05-02T08:28:49"
"2024-05-03T08:43:21"
"2024-05-03T08:42:36"
NONE
null
### Describe the bug Using various version for datasets, I'm no more longer able to load that dataset without a segmentation fault. Several others files are also concerned. ### Steps to reproduce the bug # Create a new venv python3 -m venv venv_test source venv_test/bin/activate # Install the latest version pip install datasets # Load that dataset python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')" ### Expected behavior Data must be loaded ### Environment info datasets==2.19.0 Python 3.11.7 Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6858/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6857/comments
https://api.github.com/repos/huggingface/datasets/issues/6857/events
https://github.com/huggingface/datasets/pull/6857
2,274,849,730
PR_kwDODunzps5uUooF
6,857
Fix line-endings in tests on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Cool!" ]
"2024-05-02T07:49:15"
"2024-05-02T11:49:35"
"2024-05-02T11:43:00"
MEMBER
null
EDIT: ~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~ Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it. Note that local files created on Windows will have "\r\n" line endings, instead of "\n". These are no longer transformed to "\n" by the Hub. Fix #6856.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6857/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6857", "html_url": "https://github.com/huggingface/datasets/pull/6857", "diff_url": "https://github.com/huggingface/datasets/pull/6857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6857.patch", "merged_at": "2024-05-02T11:43:00" }
https://api.github.com/repos/huggingface/datasets/issues/6856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6856/comments
https://api.github.com/repos/huggingface/datasets/issues/6856/events
https://github.com/huggingface/datasets/issues/6856
2,274,828,933
I_kwDODunzps6HlyKF
6,856
CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hey @andifunke, \r\n\r\nThanks a lot for your issue! Could you link the different implementation of `torch.multinomial` between PT v1.5.1 and PT v1.6.0 ? \r\nI understand your argument, but I think setting `replacement=True` is logically false...", "Hi @patrickvonplaten ,\r\n\r\nthanks for your reply!\r\n\r\n> Could you link the different implementation of torch.multinomial between PT v1.5.1 and PT v1.6.0 ?\r\n\r\nSure. The PR for the implementation is here: https://github.com/pytorch/pytorch/pull/39742 and the merge commit here: https://github.com/pytorch/pytorch/commit/97dfdaaad89c2082c90aebfa9180293847cffd60\r\n\r\n> I understand your argument, but I think setting replacement=True is logically false...\r\n\r\nI agree, it feels a bit hacky, but let me give you an example, why I think this workaround is justified:\r\n\r\nThe following code will behave differently in PT1.5.1 vs 1.6:\r\n\r\n```python\r\nimport torch\r\n\r\ntorch.manual_seed(0)\r\nt = torch.rand(10, 10)\r\n\r\ntorch.manual_seed(0)\r\na = torch.multinomial(t, num_samples=1, replacement=False)\r\n\r\ntorch.manual_seed(0)\r\nb = torch.multinomial(t, num_samples=1, replacement=True)\r\n\r\ntorch.__version__, a, b, all(a == b)\r\n```\r\n\r\nPytorch 1.5.1:\r\n\r\n```\r\nOut[1]: \r\n('1.5.1',\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n True)\r\n```\r\n\r\nPytorch 1.6:\r\n\r\n```\r\n('1.6.0',\r\n tensor([[7],\r\n [7],\r\n [6],\r\n [1],\r\n [6],\r\n [1],\r\n [9],\r\n [5],\r\n [1],\r\n [2]]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n False)\r\n```\r\n\r\nThis of course breaks reproducibility between versions when generating text.", "Oh, and here is another option, if `replacement=True` feels irritating:\r\n\r\nYou could use `torch.distributions.categorical.Categorical` instead, which uses the same sampling approach.\r\n\r\nexample:\r\n```python\r\nimport torch\r\n\r\ntorch.manual_seed(0)\r\nt = torch.rand(10, 10)\r\n\r\ntorch.manual_seed(0)\r\na = torch.distributions.categorical.Categorical(t).sample()\r\n\r\ntorch.manual_seed(0)\r\nb = torch.multinomial(t, num_samples=1, replacement=True)\r\n\r\ntorch.__version__, a, b, all(a == b.reshape(10))\r\n```\r\n\r\n```\r\nOut[1]:\r\n('1.6.0',\r\n tensor([9, 7, 3, 9, 7, 6, 1, 3, 5, 1]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n True)\r\n```\r\n", "Hey @andifunke, \r\n\r\nThanks for your detailed comments - this is great! So it seems like the change was made to speed up the `torch.multinomial(do_replacement=False)` function. This is not really of interest to us though as it will never be the bottleneck in the `.generate()` function. \r\n\r\nI agree with you that we want to keep backward compatibility here. I think the best option in this case in to use `torch.distributions.categorical.Categorical(t).sample()` in this case.\r\n\r\nWill open a PR about it :-) ", "Great, thanks!", "Actually, I just noticed that `torch.distributions.categorical.Categorical(...)` uses `torch.multinomial` under the hood with `do_replacement=True` - so that this is not a better option. \r\n\r\nI'm not 100% sure how to proceed here now. @LysandreJik, @sgugger - what is your opinion on that? \r\n\r\nThe problem is the following: Because of a change in PyTorch's `torch.multinomial` function for 1.6, our generation method with `do_sample=True` yields different results when setting `torch.manual_seed(0)` between torch > 1.6 and 1.6.\r\n\r\nAs @andifunke pointed out, a simple fix would be to set `do_replacement=True`, which is logically not correct IMO, but it does not make a difference for sampling with `num_beams = 1`. For sampling with `num_beams > 1`.\r\n\r\nDo you guys think we should go for the simple fix of `do_replacement=True` to keep backward compatibility when using `torch.manual_seed(0)` ? \r\nIt seems like backwards compatibility for `num_beams > 1` is broken either way since it would be false to set `do_replacement=True` there. ", "Can we copy the old implementation somewhere and just use that or is it hidden in C/CUDA?", "Did we also reach out to the PyTorch team and make sure they are aware of this BC?", "Looks like this is hidden in C/CUDA: https://github.com/pytorch/pytorch/pull/39742/files .\r\nNot sure whether the PyTorch is aware of it...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-02T07:37:03"
"2024-05-02T11:43:01"
"2024-05-02T11:43:01"
MEMBER
null
CI fails on Windows for test_delete_from_hub after the merge of: - #6820 This is weird because the CI was green in the PR branch before merging to main. ``` FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')] At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n') Full diff: [ CommitOperationDelete( path_in_repo='dogs/train/0000.csv', is_folder=False, ), CommitOperationAdd( path_in_repo='README.md', - path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n ' ? -------- + path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f' ? ++ ++ ++ - b' - split: train\n path: cats/train/*\n---\n', ? ^^^^^^ - + b'iles:\r\n - split: train\r\n path: cats/train/*\r' ? ++++++++++ ++ ^ + b'\n---\r\n', ), ] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6856/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6855/comments
https://api.github.com/repos/huggingface/datasets/issues/6855/events
https://github.com/huggingface/datasets/pull/6855
2,274,777,812
PR_kwDODunzps5uUZNT
6,855
Fix dataset name for community Hub script-datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Try smaller batch sizes and/or bigger GPUs", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-02T07:05:44"
"2024-05-03T15:58:00"
"2024-05-03T15:51:57"
MEMBER
null
Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript. Fix #6854. CC: @Wauplin
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6855/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6855/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6855", "html_url": "https://github.com/huggingface/datasets/pull/6855", "diff_url": "https://github.com/huggingface/datasets/pull/6855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6855.patch", "merged_at": "2024-05-03T15:51:57" }
https://api.github.com/repos/huggingface/datasets/issues/6854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6854/comments
https://api.github.com/repos/huggingface/datasets/issues/6854/events
https://github.com/huggingface/datasets/issues/6854
2,274,767,686
I_kwDODunzps6HljNG
6,854
Wrong example of usage when config name is missing for community script-datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=h1) Report\n> Merging [#6854](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b7ba93f5f4dfcef795e20a9fb11b2d4ee7608e?el=desc) will **decrease** coverage by `0.13%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6854/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6854 +/- ##\n==========================================\n- Coverage 79.94% 79.80% -0.14% \n==========================================\n Files 157 157 \n Lines 28739 28739 \n==========================================\n- Hits 22974 22936 -38 \n- Misses 5765 5803 +38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=footer). Last update [61b7ba9...a83ab56](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-02T06:59:39"
"2024-05-03T15:51:59"
"2024-05-03T15:51:58"
MEMBER
null
As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example: ```python >>> ds = load_dataset("google/fleurs") ValueError: Config name is missing. Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all'] Example of usage: `load_dataset('fleurs', 'af_za')` ``` Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6854/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6853/comments
https://api.github.com/repos/huggingface/datasets/issues/6853/events
https://github.com/huggingface/datasets/issues/6853
2,272,570,000
I_kwDODunzps6HdKqQ
6,853
Support soft links for load_datasets imagefolder
{ "login": "billytcl", "id": 10386511, "node_id": "MDQ6VXNlcjEwMzg2NTEx", "avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/billytcl", "html_url": "https://github.com/billytcl", "followers_url": "https://api.github.com/users/billytcl/followers", "following_url": "https://api.github.com/users/billytcl/following{/other_user}", "gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}", "starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/billytcl/subscriptions", "organizations_url": "https://api.github.com/users/billytcl/orgs", "repos_url": "https://api.github.com/users/billytcl/repos", "events_url": "https://api.github.com/users/billytcl/events{/privacy}", "received_events_url": "https://api.github.com/users/billytcl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-04-30T22:14:29"
"2024-04-30T22:14:29"
null
NONE
null
### Feature request Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated. ### Motivation Images are coming from a complex variety of sources and we'd like to be able to soft link directly from the originating folders as opposed to copying. Having a copy of the file ensures that there may be issues with image versioning as well as having double the amount of required disk space. ### Your contribution N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6853/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6852/comments
https://api.github.com/repos/huggingface/datasets/issues/6852/events
https://github.com/huggingface/datasets/issues/6852
2,272,465,011
I_kwDODunzps6HcxBz
6,852
Write token isn't working while pushing to datasets
{ "login": "zaibutcooler", "id": 130903099, "node_id": "U_kgDOB81sOw", "avatar_url": "https://avatars.githubusercontent.com/u/130903099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaibutcooler", "html_url": "https://github.com/zaibutcooler", "followers_url": "https://api.github.com/users/zaibutcooler/followers", "following_url": "https://api.github.com/users/zaibutcooler/following{/other_user}", "gists_url": "https://api.github.com/users/zaibutcooler/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaibutcooler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaibutcooler/subscriptions", "organizations_url": "https://api.github.com/users/zaibutcooler/orgs", "repos_url": "https://api.github.com/users/zaibutcooler/repos", "events_url": "https://api.github.com/users/zaibutcooler/events{/privacy}", "received_events_url": "https://api.github.com/users/zaibutcooler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=h1) Report\n> Merging [#6852](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6852/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6852 +/- ##\n==========================================\n+ Coverage 79.01% 80.01% +1.00% \n==========================================\n Files 157 157 \n Lines 28739 28739 \n==========================================\n+ Hits 22707 22995 +288 \n+ Misses 6032 5744 -288 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `84.44% <0.00%> (+20.00%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.66% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.23% <0.00%> (+40.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=footer). Last update [02d09c8...e45ca17](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "How do we get `transformers.logging.*`? There is either `transformers.utils.logging.*` or `logging.*` if the latter was imported.\r\n\r\nUnrelated, also has the default just changed from INFO to WARN? I rebased my copy and noticed this change. Ah, yes, it was https://github.com/huggingface/transformers/commit/4561f05c5fafc2b636a2fc1d0dded9057d439745", "You get `transformerts.logging.*` after doing `import transformers`. logging is imported in the project init, so there is no need to add the .utils.", "Ah, I see - the test I was working on was doing `from transformers import logging`. If we follow this in docs it leads to a shorter:\r\n\r\n`logging.set_verbosity(logging.INFO)`\r\n\r\nand it matches the actual `logging.INFO` from the logging package.\r\n\r\n.... but then `from transformers import logging` makes it hard to do `import logging`... same `logging` name. So then:\r\n\r\n```\r\nimport transformers\r\ntransformers.logging.set_verbosity(transformers.logging.INFO)`\r\n```\r\nwhile being quite verbose, has no collision with the normal `logging` package\r\n\r\nThank you for expanding the docs, @sgugger - this is awesome!", "Note that you have the shortcut\r\n```\r\ntransformers.logging.set_verbosity_info()\r\n```\r\nbut yes, importing logging directly will create a conflict with the logging module.", "You meant `transformers.logging.set_verbosity_{info|warning|...}` (must be a typo in `login` :)\r\n\r\nYes, this is good!", "Oops, fixed my comment." ]
"2024-04-30T21:18:20"
"2024-05-02T00:55:46"
"2024-05-02T00:55:46"
NONE
null
### Describe the bug <img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc"> As you can see I logged in to my account and the write token is valid. But I can't upload on my main account and I am getting that error. It was okay on my test account at first try. (I refreshed the token, tried a new token but still doesn't work) ### Steps to reproduce the bug 1. I loaded a dataset. 2. I logged in using both cli and huggingface_hub 3. I pushed to my down dataset (It went well without any issues on my test account) ### Expected behavior It should have gone smoothly and this is not even my first time uploading to huggingface datasets ### Environment info colab, dataset (tried multiple versions)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6852/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6851/comments
https://api.github.com/repos/huggingface/datasets/issues/6851/events
https://github.com/huggingface/datasets/issues/6851
2,270,965,503
I_kwDODunzps6HXC7_
6,851
load_dataset('emotion') UnicodeDecodeError
{ "login": "L-Block-C", "id": 32314558, "node_id": "MDQ6VXNlcjMyMzE0NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/32314558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/L-Block-C", "html_url": "https://github.com/L-Block-C", "followers_url": "https://api.github.com/users/L-Block-C/followers", "following_url": "https://api.github.com/users/L-Block-C/following{/other_user}", "gists_url": "https://api.github.com/users/L-Block-C/gists{/gist_id}", "starred_url": "https://api.github.com/users/L-Block-C/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/L-Block-C/subscriptions", "organizations_url": "https://api.github.com/users/L-Block-C/orgs", "repos_url": "https://api.github.com/users/L-Block-C/repos", "events_url": "https://api.github.com/users/L-Block-C/events{/privacy}", "received_events_url": "https://api.github.com/users/L-Block-C/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-30T09:25:01"
"2024-04-30T09:25:01"
null
NONE
null
### Describe the bug **emotions = load_dataset('emotion')** _UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_ ### Steps to reproduce the bug load_dataset('emotion') ### Expected behavior succese ### Environment info py3.10 transformers 4.41.0.dev0 datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6851/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6850/comments
https://api.github.com/repos/huggingface/datasets/issues/6850/events
https://github.com/huggingface/datasets/issues/6850
2,269,500,624
I_kwDODunzps6HRdTQ
6,850
Problem loading voxpopuli dataset
{ "login": "Namangarg110", "id": 40496687, "node_id": "MDQ6VXNlcjQwNDk2Njg3", "avatar_url": "https://avatars.githubusercontent.com/u/40496687?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Namangarg110", "html_url": "https://github.com/Namangarg110", "followers_url": "https://api.github.com/users/Namangarg110/followers", "following_url": "https://api.github.com/users/Namangarg110/following{/other_user}", "gists_url": "https://api.github.com/users/Namangarg110/gists{/gist_id}", "starred_url": "https://api.github.com/users/Namangarg110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Namangarg110/subscriptions", "organizations_url": "https://api.github.com/users/Namangarg110/orgs", "repos_url": "https://api.github.com/users/Namangarg110/repos", "events_url": "https://api.github.com/users/Namangarg110/events{/privacy}", "received_events_url": "https://api.github.com/users/Namangarg110/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=h1) Report\n> Merging [#6850](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8e4906c974101d328bdd01245bc1695f9b07088?el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `78.57%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6850/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6850 +/- ##\n==========================================\n+ Coverage 80.44% 80.61% +0.17% \n==========================================\n Files 161 161 \n Lines 30113 30119 +6 \n==========================================\n+ Hits 24224 24281 +57 \n+ Misses 5889 5838 -51 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.95% <78.57%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=footer). Last update [b8e4906...a27cdd3](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I don't think you want to do the logger setup within `train` as users can call `Trainer` for evaluation only as well.\r\nIt probably needs to stay within `__init__` but should also go into a hyperparameter search function, maybe `_objective`.\r\n\r\nWhat's important is for loggers to setup at `__init__` and at each parameter search.\r\n\r\nHowever @sgugger will have a better idea in how to organize this function.", "Yes, the train method could be called several times on the same Trainer, or the Trainer could be used for evaluation only, and those logging platforms should be setup once only, so the init looks best. Maybe we could add a private attribute `_has_been_setup` that could be checked inside the log method before reporting to wandb/comet and call the setup method if needed? Would that work for the hp search with Ray?", "That sounds good. Should it still be setup in the init then? For hyperparameter search this doesn't really make sense (and creates an \"empty\" run in wandb), and if it is setup on logging calls anyway we wouldn't necessarily need it there. But happy to leave it there, too.", "We can leave the setup to the first time we try to log something or the first call to train then (if there is a check to the same flag, we can call the setup method several times safely).", "I think the first time we try to log makes sense, and also allow to use `Trainer` in eval only.\r\n\r\nIf people just want to call multiple times `train`, it would be nice if it was straightforward for them to choose between logging to the same run or logging to a new run. Hyperparameter search would obviously automatically choose to log to a new run.\r\n\r\nNote that logging several `train` calls to the same run is actually not currently supported due to `global_step` being reset to 0 [here](https://github.com/huggingface/transformers/blob/54cfefc2ac9e3e1c0968a2ed0dd3c711eee76196/src/transformers/trainer.py#L645) which will cause issues at least in both Tensorboard and W&B.", "I adjusted the PR so the loggers will be initialized on the first call to `log()`. Is this what you had in mind?", "Yes. I just think we should add the line to setup at the beginning of log, so that the loggers get initialized if we try to log something.", "Okay, so the current position is good? (When clicking the \"Files changed\" link it looks like it's in `_hp_search_setup`, but it's actually right at the beginning of `log`)", "Looks great!", "Oh yeah, sorry I looked too fast. LGTM!" ]
"2024-04-29T16:46:51"
"2024-05-06T09:25:54"
"2024-05-06T09:25:54"
NONE
null
### Describe the bug ``` Exception has occurred: FileNotFoundError Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'} ``` Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/en/asr_train.tsv Basically there should be links directly under ```metadata["train"]```, not under ```metadata["train"][self.config.languages[0]]``` same for audio urls ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli","en") ``` ### Expected behavior Dataset should be loaded successfully. ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6850/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6850/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6849/comments
https://api.github.com/repos/huggingface/datasets/issues/6849/events
https://github.com/huggingface/datasets/pull/6849
2,268,718,355
PR_kwDODunzps5t_wnu
6,849
fix webdataset filename split
{ "login": "Bowser1704", "id": 43539191, "node_id": "MDQ6VXNlcjQzNTM5MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/43539191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bowser1704", "html_url": "https://github.com/Bowser1704", "followers_url": "https://api.github.com/users/Bowser1704/followers", "following_url": "https://api.github.com/users/Bowser1704/following{/other_user}", "gists_url": "https://api.github.com/users/Bowser1704/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bowser1704/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bowser1704/subscriptions", "organizations_url": "https://api.github.com/users/Bowser1704/orgs", "repos_url": "https://api.github.com/users/Bowser1704/repos", "events_url": "https://api.github.com/users/Bowser1704/events{/privacy}", "received_events_url": "https://api.github.com/users/Bowser1704/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi, you have an example of how to do exactly this in the [documentation](https://huggingface.co/transformers/task_summary.html#sequence-classification):\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\r\n\r\nclasses = [\"not paraphrase\", \"is paraphrase\"]\r\nsequence_0 = \"The company HuggingFace is based in New York City\"\r\nsequence_1 = \"Apples are especially bad for your health\"\r\nsequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\r\n\r\nparaphrase = tokenizer(sequence_0, sequence_2, return_tensors=\"pt\")\r\nnot_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors=\"pt\")\r\n\r\nparaphrase_classification_logits = model(**paraphrase).logits\r\nnot_paraphrase_classification_logits = model(**not_paraphrase).logits\r\n\r\nparaphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]\r\nnot_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]\r\n\r\n# Should be paraphrase\r\nfor i in range(len(classes)):\r\n print(f\"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%\")\r\n\r\n# Should not be paraphrase\r\nfor i in range(len(classes)):\r\n print(f\"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%\")\r\n```" ]
"2024-04-29T10:57:18"
"2024-04-29T11:14:41"
null
NONE
null
use `os.path.splitext` to parse field_name. fix filename which has dot. like: ``` a.b.jpeg a.b.txt ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6849/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6849", "html_url": "https://github.com/huggingface/datasets/pull/6849", "diff_url": "https://github.com/huggingface/datasets/pull/6849.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6849.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6848/comments
https://api.github.com/repos/huggingface/datasets/issues/6848/events
https://github.com/huggingface/datasets/issues/6848
2,268,622,609
I_kwDODunzps6HOG8R
6,848
Cant Downlaod Common Voice 17.0 hy-AM
{ "login": "mheryerznkanyan", "id": 31586104, "node_id": "MDQ6VXNlcjMxNTg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/31586104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mheryerznkanyan", "html_url": "https://github.com/mheryerznkanyan", "followers_url": "https://api.github.com/users/mheryerznkanyan/followers", "following_url": "https://api.github.com/users/mheryerznkanyan/following{/other_user}", "gists_url": "https://api.github.com/users/mheryerznkanyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mheryerznkanyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mheryerznkanyan/subscriptions", "organizations_url": "https://api.github.com/users/mheryerznkanyan/orgs", "repos_url": "https://api.github.com/users/mheryerznkanyan/repos", "events_url": "https://api.github.com/users/mheryerznkanyan/events{/privacy}", "received_events_url": "https://api.github.com/users/mheryerznkanyan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-29T10:06:02"
"2024-05-13T06:09:30"
null
NONE
null
### Describe the bug I want to download Common Voice 17.0 hy-AM but it returns an error. ``` The version_base parameter is not specified. Please specify a compatability version level, or None. Will assume defaults for version 1.1 @hydra.main(config_name='hfds_config', config_path=None) /usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default. See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information. ret = run_job( /usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0 You can avoid this message in future by passing the argument `trust_remote_code=True`. Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`. warnings.warn( Reading metadata...: 6180it [00:00, 133224.37it/s]les/s] Generating train split: 0 examples [00:00, ? examples/s] HuggingFace datasets failed due to some reason (stack trace below). For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`). Once logged in, you need to set `use_auth_token=True` when calling this script. Traceback error for reference : Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single example = self.info.features.encode_example(record) if self.info.features is not None else record File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example return encode_nested_example(self, example) File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example { File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp> { File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict yield key, tuple(d[key] for d in dicts) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: 'sentence_id' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main dataset = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug ``` from datasets import load_dataset cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM") ``` ### Expected behavior It works fine with common_voice_16_1 ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35 - Python version: 3.11.6 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6848/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6847/comments
https://api.github.com/repos/huggingface/datasets/issues/6847/events
https://github.com/huggingface/datasets/issues/6847
2,268,589,177
I_kwDODunzps6HN-x5
6,847
[Streaming] Only load requested splits without resolving files for the other splits
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-04-29T09:49:32"
"2024-05-07T04:43:59"
null
MEMBER
null
e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split. This is due to `load_dataset()` resolving the files of all the splits even if only one is needed. In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6847/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6847/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6846/comments
https://api.github.com/repos/huggingface/datasets/issues/6846/events
https://github.com/huggingface/datasets/issues/6846
2,267,352,120
I_kwDODunzps6HJQw4
6,846
Unimaginable super slow iteration
{ "login": "rangehow", "id": 88258534, "node_id": "MDQ6VXNlcjg4MjU4NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rangehow", "html_url": "https://github.com/rangehow", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "organizations_url": "https://api.github.com/users/rangehow/orgs", "repos_url": "https://api.github.com/users/rangehow/repos", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "received_events_url": "https://api.github.com/users/rangehow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks a lot for doing this! It's a great way for us to realize the needed changes to get fully torch-scriptable models. Aside from the tests (we can help fix them once the design is approved), I'd love to see what parts we can reuse from bert (with potential non-harmful modifications) and what parts need to be rewritten because they're not compatible with the rest of our API.\r\n\r\nFor instance, the change in the embeddings layer is just a type annotation which we could do in bert (it would be a nice addition) and then import that layer. On the other hand, the whole parts with `return_dict` are probably fully incompatible with scripting.\r\n\r\nI guess in an ideal world, we would reuse the same internal layers from bert and only change the full models if that is possible.", "As you can see in a [previous comment on the thread](https://github.com/huggingface/transformers/issues/5067#issuecomment-662586999) my initial implementation tried to go the minimal-duplication route. I modified the original models to be scriptable, and then had a thin wrapper around them to transform the output into dictionary form.\r\nSo basically, you had BertScriptableModel returning a tuple of fixed size, and BertModel who's forward just ran BertScriptableModel and put the output in a dictionary, to keep the interface.\r\nThe main issue with that was that the code kept changing. Other than that, it should be doable.\r\n", "90% of the changes were type annotations, and assertions about Nullity (which would improve the code quality regardless).\r\nThe added bonus of the minimal duplication route is that it makes it easier to convert other models that use BERT components, e.g., Albert.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=h1) Report\n> Merging [#6846](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.85%`.\n> The diff coverage is `18.92%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6846/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6846 +/- ##\n==========================================\n- Coverage 80.20% 79.35% -0.86% \n==========================================\n Files 157 158 +1 \n Lines 28734 29257 +523 \n==========================================\n+ Hits 23047 23216 +169 \n- Misses 5687 6041 +354 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_scriptable\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19zY3JpcHRhYmxlX2JlcnQucHk=) | `18.92% <18.92%> (ø)` | |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (+0.66%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=footer). Last update [2de7ee0...a2c6c43](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, I can see that clearly now. Sorry for going back and forth with you on this. We definitely want the type annotations in the main bert file, and I think the first implementation is better on that regard. It just misses the `return_dict` argument, which is easy to add with the way you designed things (happy to do it myself if you give me access to a branch).", "The previous implementation is at https://github.com/sbrody18/transformers/tree/scripting\r\nAs mentioned, it is a behind head, and still needs some work.\r\nI sent an invite for access to my repo. Let me know if there's a better way to share the branch.", "Yes, saw the invite and accepted it. I have some stuff to finish but it's on my TODO and I hope to be able to add the missing stuff before the end of the week. Do you prefer a PR or can I directly commit on this branch?", "No rush on my side.\r\nA PR might be better, to make it easier to comment, but direct is fine if that's too much trouble.", "Super cool PR!\r\nI can tweak our benchmarking tools a bit to get some numbers on speed improvements using your scriptable Bert model tomorrow", "@patrickvonplaten That would be great!\r\nThe major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)", "> @patrickvonplaten That would be great!\r\n> The major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)\r\n\r\nGot it!", "This is great, looking forward to this PR!", "Okey I did some benchmarking, which can be seen here: https://github.com/huggingface/transformers/pull/6907. \r\n\r\n@sbrody18 - it would be awesome if you could take a look if I am using the function correctly.", "Ok, after reviewing this PR and the other design in [this diff](https://github.com/huggingface/transformers/compare/clean_scripting?expand=1), along with @patrickvonplaten benchmark results in #6907 we've come to the conclusion that adding scriptable layers is a bit too much for almost no gain, since `script` and `trace` now have the same speed in PyTorch.\r\n\r\nAll type annotations and asserts are welcome additions on the other hand, if you want to suggest a PR with just those changes.", "Sure. Makes sense. I'll see if I can put one together, but other things might take priority.\r\nThanks for all the work you've put in to look into this.", "@sbrody18 - Thanks a lot for making us aware of this issue! I learned a lot about the differences between `torch.jit.trace` and `torch.jit.script` thanks to you!", "Yes thanks a lot for all your work on this, I learned a lot on scriptable pytorch modules thanks to the PR!", "I just wanted to point out that, IIUC, a big benefit of making everything scriptable is free reuse from languages other than Python (for example, from the C++ frontend). I know that the prescribed setup is to train in python, trace, then deploy at runtime with a traced TorchScript, but the freedom to train from C++, or even the JVM with a few extra bindings, is a pretty big win. ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
"2024-04-28T05:24:14"
"2024-05-06T08:30:03"
"2024-05-06T08:30:03"
NONE
null
### Describe the bug Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset……?Is there something wrong with my iteration? ### Steps to reproduce the bug ```python import datasets import time import random num_rows = 52000 num_cols = 500 random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)] s=time.time() d={'random_input':random_input,'random_output':random_output} dataset=datasets.Dataset.from_dict(d) print('from dict',time.time()-s) print(dataset) for i in range(len(dataset)): aa=time.time() a,b=dataset['random_input'][i],dataset['random_output'][i] print(time.time()-aa) ``` corresponding output ```bash from dict 9.215498685836792 Dataset({ features: ['random_input', 'random_output'], num_rows: 52000 }) 19.129778146743774 19.329464197158813 19.27668261528015 19.28557538986206 19.247620582580566 19.624247074127197 19.28673791885376 19.301053047180176 19.290496110916138 19.291821718215942 19.357765197753906 ``` ### Expected behavior Under normal circumstances, iteration should be very rapid as it does not involve the main tasks other than getting items ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6846/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6845/comments
https://api.github.com/repos/huggingface/datasets/issues/6845/events
https://github.com/huggingface/datasets/issues/6845
2,265,876,551
I_kwDODunzps6HDohH
6,845
load_dataset doesn't support list column
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=h1) Report\n> Merging [#6845](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6845/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6845 +/- ##\n==========================================\n- Coverage 80.20% 79.83% -0.37% \n==========================================\n Files 157 157 \n Lines 28734 28734 \n==========================================\n- Hits 23047 22941 -106 \n- Misses 5687 5793 +106 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <ø> (ø)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+10.95%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=footer). Last update [2de7ee0...9e2f5ef](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-26T14:11:44"
"2024-05-15T12:06:59"
null
NONE
null
### Describe the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") got exception: Generating train split: 1834 examples [00:00, 5227.98 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single writer.write_table(table) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table pa_table = table_cast(pa_table, self._schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature casted_array_values = _c(array.values, feature[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper return func(array, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string> to {'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)} The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/llm/train-2.py", line 150, in <module> dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ### Steps to reproduce the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") ### Expected behavior no exception ### Environment info python 3.11 datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6845/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6844/comments
https://api.github.com/repos/huggingface/datasets/issues/6844/events
https://github.com/huggingface/datasets/pull/6844
2,265,870,546
PR_kwDODunzps5t2PRA
6,844
Retry on HF Hub error when streaming
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "If anyone wants to help, evaluate on a dataset where the third column is not filled it.\r\nSteps:\r\nFirst, download the data from nlp package, save to disk in format described in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/download_wmt.py\r\n\r\nHelper function for run_eval\r\n```bash\r\ngen_test_hub_summ () {\r\n\t# need to add --fp16 and --bs = whatever\r\n model=$1\r\n DATA_DIR=$2\r\n echo $DATA_DIR\r\n\tsave_dir=$3\r\n\tmkdir -p $save_dir\r\n\tshift\r\n shift\r\n shift\r\n python run_eval.py $model $DATA_DIR/test.source $save_dir/test_gens.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_rouge.json --task summarization $@\r\n}\r\n\r\n```\r\nThen Roughly:\r\n```\r\ncd examples/seq2seq\r\ngen_test_hub_summ google/pegasus-{dataset} dataset {dataset}_results --bs 4\r\n```\r\n\r\nLeave the results, as well as any observations about truncation produced summaries as a comment in this issue!\r\n", "### CNN Dailymail\r\n\r\nOne possible reason for replication issue is that our beam search logic differs from the original, causing 16% of the summaries to be truncated.\r\n\r\nFinetuning with our finetuning code and `--max_target_length=142` partially fixes this issue:\r\n+ Can get a distilled version (16-4) `43.23/21.29/31.3` .436 S/sample (released at `sshleifer/dpx-cnn-16-4`)\r\n+ Can finetune the 16-16 pegasus-cnn checkpoint to get `44.13/21.37/30.94` 1.4S/Sample (0.2 Rouge2 behind published.) ( `sshleifer/pegasus-cnn-ft-v2`)\r\n+ original google/pegasus-cnn_dailymail scored 20.73 Rouge 2.\r\n+ For both of these finetuned models, >99.8% of generations end in punctuation.\r\n\r\n\r\n### XSUM\r\n\r\n`sshleifer/distill-pegasus-xsum-16-4`\r\n```\r\n{\"rouge1\": 44.942, \"rouge2\": 23.0412, \"rougeL\": 37.8579,\r\n \"n_obs\": 11333, \"seconds_per_sample\": 0.1972, \"batch_size\": 16}\r\n```\r\n\r\nTeacher metrics (I don't remember batch size):\r\n```\r\n{\"rouge1\": 46.8773, \"rouge2\": 24.46, \"rougeL\": 39.1507, \r\n\"n_obs\": 11328, \"seconds_per_sample\": 0.3308}\r\n```\r\n", "I intend to post a writeup on distillation techniques at some point before Oct 15!", "Re: replication, best download strategy maybe to start with\r\nhttps://github.com/google-research/pegasus/blob/master/pegasus/data/public_datasets_test.py and modify.", "Cnn update: \r\n- I believe we have a preprocessing issue. Ported models generate the `<n>` token at the beginning of sentences, whereas ours do not. The pegasus original code replaces newline symbol with `<n>`. `PegasusTokenizer` should probably do this: https://github.com/huggingface/transformers/issues/7327", "For CNNDM, I can get this score with `google/pegasus-cnn_dailymail` model.\r\n``` \r\nROUGE-1:\r\nrouge_1_f_score: 0.4436 with confidence interval (0.4413, 0.4459)\r\nrouge_1_recall: 0.4825 with confidence interval (0.4797, 0.4853)\r\nrouge_1_precision: 0.4368 with confidence interval (0.4339, 0.4395)\r\n\r\nROUGE-2:\r\nrouge_2_f_score: 0.2145 with confidence interval (0.2120, 0.2170)\r\nrouge_2_recall: 0.2323 with confidence interval (0.2297, 0.2350)\r\nrouge_2_precision: 0.2124 with confidence interval (0.2097, 0.2150)\r\n\r\nROUGE-l:\r\nrouge_l_f_score: 0.4141 with confidence interval (0.4118, 0.4165)\r\nrouge_l_recall: 0.4501 with confidence interval (0.4474, 0.4530)\r\nrouge_l_precision: 0.4079 with confidence interval (0.4051, 0.4106)\r\n```\r\nScript I run:\r\n```\r\n./run_eval.py google/pegasus-cnn_dailymail /home/ffajri/Data/huggingface/cnn_dm/test.source pred_cnndm_pegasus.txt \\\r\n --reference_path /home/ffajri/Data/huggingface/cnn_dm/test.target \\\r\n --score_path cnn_rouge.json \\\r\n --task summarization \\\r\n --device cuda \\\r\n --max_source_length 512 \\\r\n --max_target_length 128 \\\r\n --bs 4\r\n```\r\nI notice the first R1 output from the transformer is 43.xx something, but I recalculate ROUGE (to get the scores above) as follows:\r\n1) First, I replace `<n>` with `\\n` in the decoding results. (as you said above)\r\n2) I don't use the gold summary provided by `huggingface` because sentences are not separated by the newline character. I think its necessary to separate sentences in the gold summary. So I use the original gold test set from See et al., 2017 to compute ROUGE.\r\n2) I lower case all decoded and gold summary (but not sure if it really affects the ROUGE score)\r\n3) I calculate ROUGE with the `pyrouge` code (not the ROUGE in transformer)\r\n\r\nHope it can help the fix. \r\n", "Would you be willing to share a few lines of \r\n\r\n`cnn_dm/test.source`, `pred_cnndm_pegasus.txt`, and `cnn_dm/test.target`\r\n\r\nThanks!", "Hi, for inference, I use the same set from `huggingface`\r\n\r\n**`test.source`**\r\n``\r\nMarseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that \"so far no videos were used in the crash investigation.\" He added, \"A person who has such a video needs to immediately give it to the investigators.\" ............\r\n``\r\n\r\n**`test.target`**\r\n``\r\nMarseille prosecutor says \"so far no videos were used in the crash investigation\" despite media reports . Journalists at Bild and Paris Match are \"very confident\" the video clip is real, an editor says . Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says .\r\n``\r\n\r\n**`pred_cnndm_pegasus.txt`** (Result)\r\n``\r\n\"A person who has such a video needs to immediately give it to the investigators,\" prosecutor says .<n>\"It is a very disturbing scene,\" editor-in-chief of Bild online tells \"Erin Burnett: Outfront\"\r\n``\r\n\r\nThen, I got R1 = 43.xx (as the `./run_eval.py` output)\r\n\r\nTo get the R1 = 44.xx, I separately calculate ROUGE (pyrouge) with:\r\n\r\n**`test.target`** from [See et al., 2017 ](https://github.com/abisee/pointer-generator)\r\n``\r\nmarseille prosecutor says '' so far no videos were used in the crash investigation '' despite media reports .\\njournalists at bild and paris match are '' very confident '' the video clip is real , an editor says .\\nandreas lubitz had informed his lufthansa training school of an episode of severe depression , airline says .\r\n``\r\n\r\n_updated_ **`pred_cnndm_pegasus.txt`**\r\n``\r\n\"a person who has such a video needs to immediately give it to the investigators,\" prosecutor says .\\n\"it is a very disturbing scene,\" editor-in-chief of bild online tells \"erin burnett: outfront\"\r\n``\r\n\r\nBoth now have `\\n` which I think is necessary for calculating ROUGE.", "We fixed our `calculate_rouge_score` to address the `\\n` issue and now we are getting\r\n\r\n44.31/21.53/41.15 for `sshleifer/pegasus-cnn-ft-v2`! Thanks for the help!\r\n\r\n\r\n", "Updated the table in the Issue description with most recent results after the `calculate_rouge_fix` \r\nMoving forward, questions about specific results should be asked on the forums or in a separate issue with @stas00, @patil-suraj, and @sshleifer tagged.", "hi guys : \r\n\r\nis there code to pretrainning the model used for my own data .\r\nThank you \r\n \r\n ", "Thank you for reproducing this results! \r\nRegarding the treatment of the \\<n\\>, newline char \"\\n\" in input text are being replaced by \"\\<n\\>\" and vice versa for the output.", "I have tried around 10 sets of hyperparameters and only achieved nearly worse results. (ROUGE-1 ~ 43.9, for CNN/DailyMail) These are options of my experiments:\r\n\r\n- Optimizer: Adafactor <-> AdamW\r\n- Learning rate: 5e-4 <-> 1e-4\r\n- Batch size: 4\r\n- Gradient accumulation steps: 1 <-> 8 <-> 64\r\n- Accelarator: dp <-> ddp\r\n- Epochs: 20 - 80 (after around 10 epochs it started to overfit (val loss increases))\r\n- Datasets: both old and new versions (old version doesn't consist of \r\n\\<n\\> in the target summary)\r\n\r\nI don't know what to continue, can someone tell me what my problems are?", "Hi @thongnguyen050999 \r\n\r\nSee if this comment above helps \r\nhttps://github.com/huggingface/transformers/issues/6844#issuecomment-699499846", "Hi @patil-suraj,\r\n\r\nYes, I did notice that, these are my results:\r\n\r\n- Sentence ends with \"\\<n\\>\": ROUGE-1: 45.94, ROUGE-L: 32.24\r\n- Sentence ends with \"\\\\n\": ROUGE-1: 43.96, ROUGE-L: 40.87", "Are my results reasonable (representing the expected outcome)? :-) ", "> Are my results reasonable (representing the expected outcome)? :-)\r\n\r\nHi, can you please tell me a bit about what do you want to achieve? and which pre-trained Pegasus model are you currently using? It seems to me you are not doing only inference but some fine-tuning of the Pegasus model (based on your hyperparameter)?\r\n", "Yes, here is my experiment description:\r\n\r\n- Goal: I want to reproduce the results from the Pegasus paper (in the future I might add some changes based upon the baseline 🧑‍🎓 ), in which I finetuned from the pretrained checkpoint\r\n- Pretrained model I use: google/pegasus-large ", "I guess `google/pegasus-large` in `huggingface` is a Mixed & Stochastic model where we expect to have 44.16/21.56/41.30 (which is slightly lower than your current score).\r\n\r\nHave you tried to set the hyperparameter of the original implementation? You can check it [here]( https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/params/public_params.py).\r\n\r\nThe primary hyperparameter will be this:\r\n\"max_input_len\": 1024, --> (longer text)\r\n\"max_output_len\": 128,\r\n\"train_steps\": 210000,\r\n\"learning_rate\": 0.001,\r\n \"batch_size\": 8,\r\n\r\nYou probably want to follow their hyperparameter for inference as well (e.g. beam size etc)", "Hi @fajri91, I have tried your suggestion and achieved the following results after 210k steps:\r\n\r\n- Huggingface version:\r\n+ ROUGE-1 = 43.2011\r\n+ ROUGE-L = 39.99\r\n\r\n- Google version (I ran their default code without modifications)\r\n+ ROUGE-1 = 43.01\r\n+ ROUGE-L = 39.92", "> ### Replication\r\n> [link](https://github.com/google-research/pegasus)\r\n> \r\n> mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update)\r\n> \r\n> dataset\tAuthors\tThis Repo\tbest bart\tbest bart name\r\n> xsum\t47.60/24.83/39.64\t46.87/24.46/39.15\t22.32/37.39\tdistilbart-xsum-12-6\r\n> cnn_dailymail\t44.16/21.56/41.30\tsee comment\t21.26/30.59\tdistilbart-cnn-12-6\r\n> newsroom\t45.07/33.39/41.28\t41.03/29.83/36.96\t\t\r\n> multi_news\t47.65/18.75/24.95\t47.58/19.0/24.77\t\t\r\n> gigaword\t39.65/20.47/36.76\t39.79/20.56/36.80\t\t\r\n> wikihow\t46.39/22.12/38.41 *\t46.85/23.64/28.73\t\t\r\n> reddit_tifu\t27.99/9.81/22.94\t32.75/11.68/24.97\t\t\r\n> big_patent\t52.29/33.08/41.66 *\t\t\t\r\n> arxiv\t44.21/16.95/25.67\t44.83/17.34/25.60\t\t\r\n> pubmed\t45.97/20.15/28.25\t45.40/19.42/26.93\t\t\r\n> aeslc\t37.68/21.25/36.51\t37.09/21.40/35.93\t\t\r\n> billsum\t59.67/41.58/47.59\t56.18/39.94/45.39\t\t\r\n> * (* (authors footnote)) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data\r\n> \r\n> #### Final Update (2020-10-16)\r\n> Mission accomplished thanks to the work of @patil-suraj, and @stas00 !\r\n> \r\n> The above table now shows that our results are close enough. We suspect differences are due to treatment of the `<n>` character that pegasus generates and slightly different beam search implementations.\r\n> \r\n> [Link to Spreadsheet with timing data](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit?usp=sharing)\r\n> \r\n> Questions about specific results should be asked on the forums/separate issues with @stas00, @patil-suraj, and @sshleifer tagged.\r\n\r\nHi Sam, I have a quick question regarding to obtain the results for Gigaword using checkpoint \"google/pegasus-gigaword\" provided by Google. Currently, I followed a very simple setup using \"google/pegasus-gigaword\" and follow directly from huggingface default codes in generating gigaword summary. For dataset, I directly load 'gigaword' from datasets library without pre-processing. I currently use rouge_score library to compute the rouge score. However, my results evaluating on 1951 test samples in Gigaword deviates almost 10 rouge points (rouge1, rouge2, rougel: 28, 12 and 25 vs 39.79/20.56/36.80). Is it OK if you can share your setup in reproducing your experiment.\r\n\r\nThanks in advance!\r\n" ]
"2024-04-26T14:09:04"
"2024-04-26T15:37:42"
"2024-04-26T15:37:42"
COLLABORATOR
null
Retry on the `huggingface_hub`'s `HfHubHTTPError` in the streaming mode. Fix #6843
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6844/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6844", "html_url": "https://github.com/huggingface/datasets/pull/6844", "diff_url": "https://github.com/huggingface/datasets/pull/6844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6844.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6843/comments
https://api.github.com/repos/huggingface/datasets/issues/6843/events
https://github.com/huggingface/datasets/issues/6843
2,265,432,897
I_kwDODunzps6HB8NB
6,843
IterableDataset raises exception instead of retrying
{ "login": "bauwenst", "id": 145220868, "node_id": "U_kgDOCKflBA", "avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bauwenst", "html_url": "https://github.com/bauwenst", "followers_url": "https://api.github.com/users/bauwenst/followers", "following_url": "https://api.github.com/users/bauwenst/following{/other_user}", "gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}", "starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions", "organizations_url": "https://api.github.com/users/bauwenst/orgs", "repos_url": "https://api.github.com/users/bauwenst/repos", "events_url": "https://api.github.com/users/bauwenst/events{/privacy}", "received_events_url": "https://api.github.com/users/bauwenst/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=h1) Report\n> Merging [#6843](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `2.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6843/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6843 +/- ##\n==========================================\n- Coverage 80.20% 77.82% -2.39% \n==========================================\n Files 157 157 \n Lines 28734 28734 \n==========================================\n- Hits 23047 22362 -685 \n- Misses 5687 6372 +685 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=footer). Last update [2de7ee0...52b62c3](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Stale" ]
"2024-04-26T10:00:43"
"2024-04-30T13:14:13"
null
NONE
null
### Describe the bug In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here: https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19 If GitHub code snippets still aren't working, here's a copy: ```python def read_with_retries(*args, **kwargs): disconnect_err = None for retry in range(1, max_retries + 1): try: out = read(*args, **kwargs) break except (ClientError, TimeoutError) as err: disconnect_err = err logger.warning( f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]" ) time.sleep(config.STREAMING_READ_RETRY_INTERVAL) else: raise ConnectionError("Server Disconnected") from disconnect_err return out ``` With the latest outage, the end of my stack trace looked like this: ``` ... File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries out = read(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read return self._buffer.read(size) ^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto data = self.read(len(byte_view)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read return self.file.read(size) ^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read out = self.cache._fetch(self.loc, self.loc + length) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch self.cache = self.fetcher(start, end) # new block replaces old ^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range hf_raise_for_status(r) File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz ``` Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately. ### Steps to reproduce the bug Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace. ### Expected behavior All HTTP errors while iterating a streamable dataset should cause retries. ### Environment info Output from `datasets-cli env`: - `datasets` version: 2.18.0 - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.7 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6843/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6842/comments
https://api.github.com/repos/huggingface/datasets/issues/6842/events
https://github.com/huggingface/datasets/issues/6842
2,264,692,159
I_kwDODunzps6G_HW_
6,842
Datasets with files with colon : in filenames cannot be used on Windows
{ "login": "jacobjennings", "id": 1038927, "node_id": "MDQ6VXNlcjEwMzg5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacobjennings", "html_url": "https://github.com/jacobjennings", "followers_url": "https://api.github.com/users/jacobjennings/followers", "following_url": "https://api.github.com/users/jacobjennings/following{/other_user}", "gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions", "organizations_url": "https://api.github.com/users/jacobjennings/orgs", "repos_url": "https://api.github.com/users/jacobjennings/repos", "events_url": "https://api.github.com/users/jacobjennings/events{/privacy}", "received_events_url": "https://api.github.com/users/jacobjennings/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-26T00:14:16"
"2024-04-26T00:14:16"
null
NONE
null
### Describe the bug Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows due to the fact that windows does not allow colons ":" in filenames. These should be converted into alternative strings. ### Steps to reproduce the bug 1. Attempt to run load_dataset on MLCommons/peoples_speech ### Expected behavior Does not crash during extraction ### Environment info Windows 11, NTFS filesystem, Python 3.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6842/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6841/comments
https://api.github.com/repos/huggingface/datasets/issues/6841/events
https://github.com/huggingface/datasets/issues/6841
2,264,687,683
I_kwDODunzps6G_GRD
6,841
Unable to load wiki_auto_asset_turk from GEM
{ "login": "abhinavsethy", "id": 23074600, "node_id": "MDQ6VXNlcjIzMDc0NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/23074600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhinavsethy", "html_url": "https://github.com/abhinavsethy", "followers_url": "https://api.github.com/users/abhinavsethy/followers", "following_url": "https://api.github.com/users/abhinavsethy/following{/other_user}", "gists_url": "https://api.github.com/users/abhinavsethy/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhinavsethy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhinavsethy/subscriptions", "organizations_url": "https://api.github.com/users/abhinavsethy/orgs", "repos_url": "https://api.github.com/users/abhinavsethy/repos", "events_url": "https://api.github.com/users/abhinavsethy/events{/privacy}", "received_events_url": "https://api.github.com/users/abhinavsethy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=h1) Report\n> Merging [#6841](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05c3214153d30245928279724ce2a9b701ec8aab?el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6841/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6841 +/- ##\n==========================================\n- Coverage 80.27% 80.16% -0.11% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22946 22916 -30 \n- Misses 5640 5670 +30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <ø> (ø)` | |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=footer). Last update [05c3214...f863f8e](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-26T00:08:47"
"2024-04-26T17:22:58"
"2024-04-26T16:12:29"
NONE
null
### Describe the bug I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call >>import datasets >>print (datasets.__version__) >>dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") System output: Generating train split: 100%|█| 483801/483801 [00:03<00:00, 127164.26 examples/s Generating validation split: 100%|█| 20000/20000 [00:00<00:00, 116052.94 example Generating test_asset split: 100%|██| 359/359 [00:00<00:00, 76155.93 examples/s] Generating test_turk split: 100%|███| 359/359 [00:00<00:00, 87691.76 examples/s] Traceback (most recent call last): File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module> dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset builder_instance.download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split split_info = self.info.splits[split_generator.name] ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__ instructions = make_file_instructions( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions info.name: filenames_for_dataset_split( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split prefix = os.path.join(path, prefix) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen posixpath>", line 76, in join TypeError: expected str, bytes or os.PathLike object, not NoneType ### Steps to reproduce the bug import datasets print (datasets.__version__) dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk") ### Expected behavior Should be able to load the dataset without any issues ### Environment info datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also) Python 3.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6841/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6840/comments
https://api.github.com/repos/huggingface/datasets/issues/6840/events
https://github.com/huggingface/datasets/issues/6840
2,264,604,766
I_kwDODunzps6G-yBe
6,840
Delete uploaded files from the UI
{ "login": "saicharan2804", "id": 62512681, "node_id": "MDQ6VXNlcjYyNTEyNjgx", "avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saicharan2804", "html_url": "https://github.com/saicharan2804", "followers_url": "https://api.github.com/users/saicharan2804/followers", "following_url": "https://api.github.com/users/saicharan2804/following{/other_user}", "gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}", "starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions", "organizations_url": "https://api.github.com/users/saicharan2804/orgs", "repos_url": "https://api.github.com/users/saicharan2804/repos", "events_url": "https://api.github.com/users/saicharan2804/events{/privacy}", "received_events_url": "https://api.github.com/users/saicharan2804/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-04-25T22:33:57"
"2024-04-25T22:33:57"
null
NONE
null
### Feature request Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI. ### Motivation Would be a useful addition ### Your contribution Would love to help out with some guidance
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6840/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6839/comments
https://api.github.com/repos/huggingface/datasets/issues/6839/events
https://github.com/huggingface/datasets/pull/6839
2,263,761,062
PR_kwDODunzps5tvC1c
6,839
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes ! https://discuss.huggingface.co/t/pegasus-questions/838/8?u=valhalla", "Thanks " ]
"2024-04-25T14:36:58"
"2024-04-26T17:03:51"
"2024-04-26T16:57:40"
MEMBER
null
Remove token arg from CLI examples. Fix #6838. CC: @Wauplin
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6839/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6839/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6839", "html_url": "https://github.com/huggingface/datasets/pull/6839", "diff_url": "https://github.com/huggingface/datasets/pull/6839.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6839.patch", "merged_at": "2024-04-26T16:57:40" }
https://api.github.com/repos/huggingface/datasets/issues/6838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6838/comments
https://api.github.com/repos/huggingface/datasets/issues/6838/events
https://github.com/huggingface/datasets/issues/6838
2,263,674,843
I_kwDODunzps6G7O_b
6,838
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-04-25T14:00:38"
"2024-04-26T16:57:41"
"2024-04-26T16:57:41"
MEMBER
null
As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603 > I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6838/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6838/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6837/comments
https://api.github.com/repos/huggingface/datasets/issues/6837/events
https://github.com/huggingface/datasets/issues/6837
2,263,273,983
I_kwDODunzps6G5tH_
6,837
Cannot use cached dataset without Internet connection (or when servers are down)
{ "login": "DionisMuzenitov", "id": 112088378, "node_id": "U_kgDOBq5VOg", "avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DionisMuzenitov", "html_url": "https://github.com/DionisMuzenitov", "followers_url": "https://api.github.com/users/DionisMuzenitov/followers", "following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}", "gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions", "organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs", "repos_url": "https://api.github.com/users/DionisMuzenitov/repos", "events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}", "received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! The `save_vocabulary` method, as its name implies and as is explained in its [docstring](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.save_vocabulary), only saves the vocabulary. If you want to save the entire tokenizer (with special tokens), you should use the `save_pretrained` method." ]
"2024-04-25T10:48:20"
"2024-04-26T14:27:15"
null
NONE
null
### Describe the bug I want to be able to use cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues). The problem why I can't use it: `data_files` argument from `datasets.load_dataset()` function get it updates from the server before calculating hash for caching. As a result, when I run the same code with and without Internet I get different dataset configuration directory name. ### Steps to reproduce the bug ``` import datasets c4_dataset = datasets.load_dataset( path="allenai/c4", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train", cache_dir="/datesets/cache", download_mode="reuse_cache_if_exists", token=False, ) ``` 1. Run this code with the Internet. 2. Run the same code without the Internet. ### Expected behavior When running without the Internet connection, the loader should be able to get dataset from cache ### Environment info - `datasets` version: 2.19.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.13 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6837/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6836/comments
https://api.github.com/repos/huggingface/datasets/issues/6836/events
https://github.com/huggingface/datasets/issues/6836
2,262,249,919
I_kwDODunzps6G1zG_
6,836
ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0
{ "login": "ebsmothers", "id": 24319399, "node_id": "MDQ6VXNlcjI0MzE5Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/24319399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ebsmothers", "html_url": "https://github.com/ebsmothers", "followers_url": "https://api.github.com/users/ebsmothers/followers", "following_url": "https://api.github.com/users/ebsmothers/following{/other_user}", "gists_url": "https://api.github.com/users/ebsmothers/gists{/gist_id}", "starred_url": "https://api.github.com/users/ebsmothers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ebsmothers/subscriptions", "organizations_url": "https://api.github.com/users/ebsmothers/orgs", "repos_url": "https://api.github.com/users/ebsmothers/repos", "events_url": "https://api.github.com/users/ebsmothers/events{/privacy}", "received_events_url": "https://api.github.com/users/ebsmothers/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Do you solve the problem?......", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-24T21:52:35"
"2024-05-14T04:08:19"
null
NONE
null
### Describe the bug Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us. Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below. ### Steps to reproduce the bug On 2.18.0, things work fine: ``` # First clear the locally cached dataset rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired pip install "datasets==2.18.0" python3 >>> from datasets import load_dataset >>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl') ``` On 2.19.0, they do not: ``` # First clear the locally cached dataset rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired pip install "datasets==2.19.0" python3 >>> from datasets import load_dataset >>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl') ``` The stack trace I see from the 2.19.0 version of load_dataset can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2). (Maybe unsurprising but) notably if I do not delete the cache first I am able to load the dataset successfully. So based on this I suspect the cause is somewhere in the download logic. ### Expected behavior Download the dataset successfully :) ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34 - Python version: 3.11.9 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6836/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6835/comments
https://api.github.com/repos/huggingface/datasets/issues/6835/events
https://github.com/huggingface/datasets/pull/6835
2,261,079,263
PR_kwDODunzps5tl2fc
6,835
LargeListType support #6834
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-24T11:34:24"
"2024-04-30T13:16:14"
null
CONTRIBUTOR
null
Fixes #6834
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6835/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6835", "html_url": "https://github.com/huggingface/datasets/pull/6835", "diff_url": "https://github.com/huggingface/datasets/pull/6835.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6835.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6834/comments
https://api.github.com/repos/huggingface/datasets/issues/6834/events
https://github.com/huggingface/datasets/issues/6834
2,261,078,104
I_kwDODunzps6GxVBY
6,834
largelisttype not supported (.from_polars())
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=h1) Report\n> Merging [#6834](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5d43a872f0e85ce069e921c5bda02374e5b9cbf?el=desc) will **decrease** coverage by `2.98%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6834/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6834 +/- ##\n==========================================\n- Coverage 80.02% 77.04% -2.99% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24104 23205 -899 \n- Misses 6016 6915 +899 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=footer). Last update [c5d43a8...7325ecf](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-24T11:33:43"
"2024-04-24T12:06:37"
null
CONTRIBUTOR
null
### Describe the bug The following code fails because LargeListType is not supported. This is especially a problem for .from_polars since polars uses LargeListType. ### Steps to reproduce the bug ```python import datasets import polars as pl df = pl.DataFrame({"list": [[]]}) datasets.Dataset.from_polars(df) ``` ### Expected behavior Convert LargeListType to list. ### Environment info - `datasets` version: 2.19.1.dev0 - Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38 - Python version: 3.12.2 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6834/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6833/comments
https://api.github.com/repos/huggingface/datasets/issues/6833/events
https://github.com/huggingface/datasets/issues/6833
2,259,731,274
I_kwDODunzps6GsMNK
6,833
Super slow iteration with trivial custom transform
{ "login": "xslittlegrass", "id": 2780075, "node_id": "MDQ6VXNlcjI3ODAwNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2780075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xslittlegrass", "html_url": "https://github.com/xslittlegrass", "followers_url": "https://api.github.com/users/xslittlegrass/followers", "following_url": "https://api.github.com/users/xslittlegrass/following{/other_user}", "gists_url": "https://api.github.com/users/xslittlegrass/gists{/gist_id}", "starred_url": "https://api.github.com/users/xslittlegrass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xslittlegrass/subscriptions", "organizations_url": "https://api.github.com/users/xslittlegrass/orgs", "repos_url": "https://api.github.com/users/xslittlegrass/repos", "events_url": "https://api.github.com/users/xslittlegrass/events{/privacy}", "received_events_url": "https://api.github.com/users/xslittlegrass/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=h1) Report\n> Merging [#6833](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6833/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6833 +/- ##\n==========================================\n- Coverage 80.02% 78.84% -1.19% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22876 22538 -338 \n- Misses 5710 6048 +338 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=footer). Last update [dfa10a4...14cdaee](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-23T20:40:59"
"2024-05-04T11:24:37"
null
NONE
null
### Describe the bug Dataset is 10X slower when applying trivial transforms: ``` import time import numpy as np from datasets import Dataset, Features, Array2D a = np.zeros((800, 800)) a = np.stack([a] * 1000) features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")}) ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy') def transform(batch): return batch ds2 = ds1.with_transform(transform) %time sum(1 for _ in ds1) %time sum(1 for _ in ds2) ``` ``` CPU times: user 472 ms, sys: 319 ms, total: 791 ms Wall time: 794 ms CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s Wall time: 9.78 s ``` In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial. Related issue: https://github.com/huggingface/datasets/issues/5841 ### Steps to reproduce the bug Use code in the description to reproduce. ### Expected behavior Trivial custom transform in the example should not slowdown the dataset iteration. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - `huggingface_hub` version: 0.20.2 - PyArrow version: 15.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6833/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6833/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6832/comments
https://api.github.com/repos/huggingface/datasets/issues/6832/events
https://github.com/huggingface/datasets/pull/6832
2,258,761,447
PR_kwDODunzps5teFoJ
6,832
Support downloading specific splits in `load_dataset`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Humm for me it looks like it is an issue with the dataset creation, but I might be wrong as I don't have the code that creates the features.\r\n\r\nCan you try without the `steps_per_epoch` parameter?", "isn't steps_per_epoch equal to `num_examples/batch_size` ? futhermore your labels first dimention (batch) and dataset batch dimention must be equal i.e # rows of dataset/X == # rows of labels `where 32768 !=1024`", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-23T12:32:27"
"2024-04-30T08:55:28"
null
COLLABORATOR
null
This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` need to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, the builder has to define a `_available_splits` method that lists all the possible `splits` values. Close https://github.com/huggingface/datasets/issues/4101, close https://github.com/huggingface/datasets/issues/2538 (I'm probably missing some) Should also make it possible to address https://github.com/huggingface/datasets/issues/6793
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6832/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6832/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6832", "html_url": "https://github.com/huggingface/datasets/pull/6832", "diff_url": "https://github.com/huggingface/datasets/pull/6832.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6832.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6831/comments
https://api.github.com/repos/huggingface/datasets/issues/6831/events
https://github.com/huggingface/datasets/pull/6831
2,258,537,405
PR_kwDODunzps5tdTy_
6,831
Add docs about the CLI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=h1) Report\n> Merging [#6831](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32fe44086c2191c4551b7ff00db7ae1cace9b02e?el=desc) will **increase** coverage by `0.66%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6831/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6831 +/- ##\n==========================================\n+ Coverage 78.10% 78.77% +0.66% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n+ Hits 22328 22519 +191 \n+ Misses 6258 6067 -191 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.45% <0.00%> (-5.02%)` | :arrow_down: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=footer). Last update [32fe440...fb1404a](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-23T10:41:03"
"2024-04-26T16:51:09"
"2024-04-25T10:44:10"
MEMBER
null
Add docs about the CLI. Close #6830. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6831/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6831", "html_url": "https://github.com/huggingface/datasets/pull/6831", "diff_url": "https://github.com/huggingface/datasets/pull/6831.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6831.patch", "merged_at": "2024-04-25T10:44:10" }
https://api.github.com/repos/huggingface/datasets/issues/6830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6830/comments
https://api.github.com/repos/huggingface/datasets/issues/6830/events
https://github.com/huggingface/datasets/issues/6830
2,258,433,178
I_kwDODunzps6GnPSa
6,830
Add a doc page for the convert_to_parquet CLI
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "If you are looking for summerization in non-English languages you can try using `MBartForConditionalGeneration`, or multilingual Bert using the `EncoderDecoder` framework. Not sure if xlm-r is yet supported in `EncoderDecoder`", "Looking at xlm-r source code it seems that it can be easily added in EncoderDecoder as it subclasses Roberta which is supported in EncoderDecoder", "Alright. I can try mBART and mBERT. \r\nWhat I was wondering about XLM was, if we could use it in a language modeling setting for this task, like how we use GPT for any seq2seq task.\r\n\r\nSending both text and the summary together and calculating the loss only over the summaries.", "Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.", "EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16", "Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.\r\n\r\n> Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.\r\n\r\n", "> EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16\r\n\r\nOh thank you so much!", "Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented. ", "> Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.\r\n> \r\n> > Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.\r\n\r\nAah Sorry, typo, I meant XLM-R, not xlm", "> Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented.\r\n\r\nOhh, sure. Will check it out.", "Also, what will be the best way to finetune T5 in a multi-task setting.", "Also, are there any models we can use for code-switched data.", "> Also, are there any models we can use for code-switched data.\r\n\r\nNot too familiar with this, but seen few models on model hub and they used Bert.\r\nhttps://huggingface.co/sagorsarker/codeswitch-hineng-ner-lince\r\n\r\n> Also, what will be the best way to finetune T5 in a multi-task setting.\r\n\r\nIf you can cast all your tasks in text-2-text format then multi-task training can be done simply using task pre-fixes as shown in the paper. Also I think the performance will depend upon the tasks and datasets so some experimentation is necessary. Most important thing when doing multi-task is how you sample examples from different tasks. See section 3.5.2 of T5 paper.\r\n\r\nAlso the best place to ask this question would be\r\nhttps://discuss.huggingface.co/t/t5-finetuning-tips/684", "Alright, thank you so much for the help !!", "I tried using Xlmr2Xlmr but seems that regardless of what input I provide I get the same output; I checked to see the is_decoder flag is set to true in the decoder. This issue persists throughout the finetuning process", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-23T09:49:04"
"2024-04-25T10:44:11"
"2024-04-25T10:44:11"
CONTRIBUTOR
null
Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6830/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6830/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6829/comments
https://api.github.com/repos/huggingface/datasets/issues/6829/events
https://github.com/huggingface/datasets/issues/6829
2,258,424,577
I_kwDODunzps6GnNMB
6,829
Load and save from/to disk no longer accept pathlib.Path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "@abedkhooli Could I have the command you ran + environment details so that I can try to replicate this?\r\nThanks!\r\n", "Thanks @sshleifer for looking into this. \r\nTPU type: TPU v2 which is 8 cores, 64 GB (using Google Colab)\r\n```\r\n%%bash\r\nexport ENRO_DIR='/content/wmt_en_ro' # Download instructions above\r\n#export WANDB_PROJECT=\"MT\" # optional\r\nexport MAX_LEN=32\r\nexport BS=8\r\ncd /content/transformers\r\n./mbart_enro.sh\r\n```\r\nmbart_enro.sh:\r\n```\r\n#!/usr/bin/env bash\r\nexport PYTHONPATH=\"../\":\"${PYTHONPATH}\"\r\n\r\npython examples/xla_spawn.py --num_cores 8 \\\r\n\t examples/seq2seq/finetune.py \\\r\n --learning_rate=3e-5 \\\r\n --fp16 \\\r\n --do_train \\\r\n --val_check_interval=0.25 \\\r\n --adam_eps 1e-06 \\\r\n --num_train_epochs 1 --src_lang en_XX --tgt_lang ro_RO \\\r\n --data_dir $ENRO_DIR \\\r\n --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \\\r\n --train_batch_size=$BS --eval_batch_size=$BS \\\r\n --task translation \\\r\n --warmup_steps 500 \\\r\n --freeze_embeds \\\r\n --model_name_or_path=facebook/mbart-large-cc25 \\\r\n --output_dir enro_finetune_baseline \\\r\n --label_smoothing 0.1 \\\r\n --fp16_opt_level=O1 --sortish_sampler --n_train 5000 --n_val 500 \\\r\n \"$@\"\r\n```\r\nI believe the issue is adding the correct _mp_fn to examples/seq2seq/finetune.py that matches the main() call (I am not an experienced coder :-)).\r\n", "I see a related [PR#5960](https://github.com/huggingface/transformers/pull/5960) - does that mean moving away from [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py) ?", "That PR is stalled, I am open to using any tpu implementation that works!\r\n", "If using [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), and adding _mp_fn(..) to [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py), how should it (_mp_fn) be defined?", "I don't know, great question. Maybe @LysandreJik would know the answer.", "`_mp_fn(index)` should simply be an entry point to your script that leverages `transformers.Trainer`. You can see examples of it [here](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).\r\n\r\nPlease note that we implemented this to mimic torch's `torch.distributed.launch`. I have no idea how this would work with a `pytorch-lightning` implementation. Doesn't pytorch-lightning have its own way of managing TPU training?", "The main() function in [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L350) takes arguments, so _mp_fn(index) signature won't work. \r\n```\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n```\r\n`Exception in device=TPU:0: main() missing 1 required positional argument: 'args'`", "Right, but even if you manage to make it work with the args, `finetune.py` is using pytorch-lightning so it won't work with `xla_spawn.py`. You can check the [pytorch-lightning docs](https://pytorch-lightning.readthedocs.io/en/latest/tpu.html) to see how to run on TPU.", "So, [lightning_base.py](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L165) is not ready for TPU yet.", "This is now supported by `Seq2SeqTrainer` which doesn't use PL.\r\nSee https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/finetune_tpu.sh" ]
"2024-04-23T09:44:45"
"2024-04-23T09:44:46"
null
MEMBER
null
Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296: > This change is breaking in > https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515 > when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem This change was introduced in: - #6704
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6829/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6828/comments
https://api.github.com/repos/huggingface/datasets/issues/6828/events
https://github.com/huggingface/datasets/pull/6828
2,258,420,421
PR_kwDODunzps5tc55y
6,828
Support PathLike input in save_to_disk / load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi, @rkoystart \r\nI think this notebook will [help](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb)", "@patil-suraj so it means by default the longformers model provided by huggingface supports maximum tokens of 4096 right ?\r\nif suppose we want to pretrained model to support for even more longer sentences than 4096 we have to follow the instructions in the notebook you have mentioned above\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-23T09:42:38"
"2024-04-23T11:05:52"
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6828/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6828", "html_url": "https://github.com/huggingface/datasets/pull/6828", "diff_url": "https://github.com/huggingface/datasets/pull/6828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6828.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6827/comments
https://api.github.com/repos/huggingface/datasets/issues/6827/events
https://github.com/huggingface/datasets/issues/6827
2,254,011,833
I_kwDODunzps6GWX25
6,827
Loading a remote dataset fails in the last release (v2.19.0)
{ "login": "zrthxn", "id": 35369637, "node_id": "MDQ6VXNlcjM1MzY5NjM3", "avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrthxn", "html_url": "https://github.com/zrthxn", "followers_url": "https://api.github.com/users/zrthxn/followers", "following_url": "https://api.github.com/users/zrthxn/following{/other_user}", "gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions", "organizations_url": "https://api.github.com/users/zrthxn/orgs", "repos_url": "https://api.github.com/users/zrthxn/repos", "events_url": "https://api.github.com/users/zrthxn/events{/privacy}", "received_events_url": "https://api.github.com/users/zrthxn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=h1) Report\n> Merging [#6827](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6827/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6827 +/- ##\n==========================================\n- Coverage 80.02% 79.93% -0.10% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22877 22851 -26 \n- Misses 5709 5735 +26 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=footer). Last update [22933e6...ceb655f](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-19T21:11:58"
"2024-04-19T21:13:42"
null
NONE
null
While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>` I am loading the dataset like so, nothing out of the ordinary. This dataset needs a token to access it. ``` token="hf_myhftoken-sdhbdsjgkhbd" load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token) ``` I get the following error ![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc) Now you can see that the URL that it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue. I did not have this issue with the previous version of the datasets. Everything was fine for me yesterday and after the release 12 hours ago, this seems to have broken. Also, the dataset in question runs custom code and I checked and there have been no commits to the dataset on Huggingface in 6 months. ### Steps to reproduce the bug Since this happened with one particular dataset for me, I am listing steps to use that dataset. 1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access. 2. Create a token on your huggingface account with read access. 3. Run the following line, substituing `<your_token_here>` with your token. ``` load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>") ``` ### Expected behavior Be able to load the dataset in question. ### Environment info datasets == 2.19.0 python == 3.10 kernel == Linux 6.1.58+
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6827/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6826/comments
https://api.github.com/repos/huggingface/datasets/issues/6826/events
https://github.com/huggingface/datasets/pull/6826
2,252,445,242
PR_kwDODunzps5tJMZh
6,826
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-04-19T08:51:42"
"2024-04-19T09:05:25"
"2024-04-19T08:52:14"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6826/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6826", "html_url": "https://github.com/huggingface/datasets/pull/6826", "diff_url": "https://github.com/huggingface/datasets/pull/6826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6826.patch", "merged_at": "2024-04-19T08:52:13" }
https://api.github.com/repos/huggingface/datasets/issues/6825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6825/comments
https://api.github.com/repos/huggingface/datasets/issues/6825/events
https://github.com/huggingface/datasets/pull/6825
2,252,404,599
PR_kwDODunzps5tJEMw
6,825
Release: 2.19.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=h1) Report\n> Merging [#6825](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6825/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6825 +/- ##\n==========================================\n+ Coverage 80.02% 80.23% +0.20% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n+ Hits 22877 22936 +59 \n+ Misses 5709 5650 -59 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=footer). Last update [22933e6...747ed9e](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-04-19T08:29:02"
"2024-05-04T12:23:26"
"2024-04-19T08:44:57"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6825/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6825/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6825", "html_url": "https://github.com/huggingface/datasets/pull/6825", "diff_url": "https://github.com/huggingface/datasets/pull/6825.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6825.patch", "merged_at": "2024-04-19T08:44:57" }