Dataset schema (column types and value statistics):

| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.78B–2.32B |
| node_id | stringlengths | 18–19 |
| number | int64 | 6k–6.92k |
| title | stringlengths | 3–280 |
| user | dict | |
| labels | listlengths | 0–2 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–1 |
| milestone | dict | |
| comments | sequencelengths | 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 4 values |
| active_lock_reason | null | |
| body | stringlengths | 3–19.4k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
https://api.github.com/repos/huggingface/datasets/issues/6924/events
https://github.com/huggingface/datasets/issues/6924
2,320,531,015
I_kwDODunzps6KUH5H
6,924
Caching map result of DatasetDict.
{ "login": "MostHumble", "id": 56939432, "node_id": "MDQ6VXNlcjU2OTM5NDMy", "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MostHumble", "html_url": "https://github.com/MostHumble", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "repos_url": "https://api.github.com/users/MostHumble/repos", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I think you flipped model and tokenizer at the beginning. It should be\r\n```python\r\n\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\n\r\ntokenizer = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\nmodel = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\n\r\n```", "Pls reopen if there is another issue!", "Damn, this was embarrassing bug on my end. Thank you! 🍻" ]
"2024-05-28T09:07:41"
"2024-05-28T09:07:41"
null
NONE
null
Hi! I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc induces recomputation of the map; I'm not sure why, or whether this is expected behavior. Here it says that cached files are loaded sequentially: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006 It seems like I can pass in a fingerprint and load it directly: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125

**Environment Setup:**
- Python 3.11.9
- datasets 2.19.1 conda-forge
- Linux 6.1.83-1.el9.elrepo.x86_64

**MRE**

```python
# raw_datasets and tokenize_function are fixed (identical) across both calls
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=9,
    remove_columns=['text'],
    load_from_cache_file=True,
    desc="Running tokenizer on dataset line_by_line",
)

# Only num_proc changes, yet the map is recomputed instead of hitting the cache
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=5,
    remove_columns=['text'],
    load_from_cache_file=True,
    desc="Running tokenizer on dataset line_by_line",
)
```
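A minimal sketch of the fingerprint idea referenced above, assuming each split is mapped individually, since `new_fingerprint` is a parameter of `Dataset.map` but not of `DatasetDict.map`. The fingerprint string is an illustrative placeholder, and whether a pinned fingerprint actually survives a `num_proc` change is exactly what this issue is asking:

```python
# Hypothetical workaround sketch: pin the cache key per split so lookup
# does not depend on the automatically computed fingerprint.
tokenized_datasets = {
    split: ds.map(
        tokenize_function,
        batched=True,
        num_proc=5,
        remove_columns=["text"],
        load_from_cache_file=True,
        new_fingerprint=f"tokenize-line-by-line-{split}",  # assumed stable key
    )
    for split, ds in raw_datasets.items()
}
```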
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Export Parquet Tablet Audio-Set is null bytes in Arrow
{ "login": "anioji", "id": 140120605, "node_id": "U_kgDOCFoSHQ", "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anioji", "html_url": "https://github.com/anioji", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "organizations_url": "https://api.github.com/users/anioji/orgs", "repos_url": "https://api.github.com/users/anioji/repos", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "received_events_url": "https://api.github.com/users/anioji/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=h1) Report\n> Merging [#6923](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.90%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6923/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6923 +/- ##\n==========================================\n+ Coverage 77.81% 79.72% +1.90% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23002 +550 \n+ Misses 6401 5851 -550 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-58.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.72% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| ... 
and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=footer). Last update [4ebb52a...eaef0cb](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-27T14:27:57"
"2024-05-27T14:27:57"
null
NONE
null
### Describe the bug

When exporting the processed audio inside the table with the `dataset.to_parquet` function, the resulting PyArrow object is `{bytes: null, path: "Some/Path"}`. At the same time, the same dataset uploaded to the Hub has byte arrays.

![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e)
![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021)

### Steps to reproduce the bug

1. Get a dataset from audio and cast it.
2. Export the dataset locally and push it to the Hub.
3. Compare the two and note the difference between the uploaded dataset and the one saved locally.

```py
from datasets import Dataset, Audio

df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))
df.to_parquet("./datasets.parquet")
df.push_to_hub(repo_id="************", token="**********************")
```

You can use this "try replicate case" for this: [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip)

### Expected behavior

Two Parquet tables identical in content. (Isn't that obvious?)

### Environment info

Python 3.11+ (I tried it in 3.12 and got the same result)
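The asymmetry here is plausibly that `push_to_hub` embeds external audio files into bytes (its `embed_external_files` parameter defaults to `True`) while `to_parquet` writes the table storage as-is. A sketch of one possible workaround, assuming the internal helper `datasets.table.embed_table_storage` behaves as in recent releases (it is not public API and may change):

```python
from datasets import Dataset, Audio
from datasets.table import embed_table_storage  # internal helper, assumption

df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))

# Embed the raw audio bytes into the Arrow storage before writing Parquet,
# mirroring what push_to_hub does when embed_external_files=True.
df = df.with_format("arrow")
df = df.map(embed_table_storage, batched=True)
df = df.with_format(None)  # back to the default python format

df.to_parquet("./datasets.parquet")
```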
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6922/comments
https://api.github.com/repos/huggingface/datasets/issues/6922/events
https://github.com/huggingface/datasets/pull/6922
2,318,602,059
PR_kwDODunzps5wolTm
6,922
Remove torchaudio remnants from code
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same issue too! Please some guidelines ?", "Same" ]
"2024-05-27T08:45:07"
"2024-05-27T09:08:19"
"2024-05-27T08:59:21"
MEMBER
null
Remove torchaudio remnants from code.

Follow-up on:
- #5573
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6922/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6922", "html_url": "https://github.com/huggingface/datasets/pull/6922", "diff_url": "https://github.com/huggingface/datasets/pull/6922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6922.patch", "merged_at": "2024-05-27T08:59:21" }
https://api.github.com/repos/huggingface/datasets/issues/6921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
https://api.github.com/repos/huggingface/datasets/issues/6921/events
https://github.com/huggingface/datasets/pull/6921
2,318,394,398
PR_kwDODunzps5wn4Dz
6,921
Support fsspec 2024.5.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2024-05-27T07:00:59"
"2024-05-27T08:07:16"
"2024-05-27T08:01:08"
MEMBER
null
Support fsspec 2024.5.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6921", "html_url": "https://github.com/huggingface/datasets/pull/6921", "diff_url": "https://github.com/huggingface/datasets/pull/6921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6921.patch", "merged_at": "2024-05-27T08:01:08" }
https://api.github.com/repos/huggingface/datasets/issues/6920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6920/comments
https://api.github.com/repos/huggingface/datasets/issues/6920/events
https://github.com/huggingface/datasets/pull/6920
2,317,648,021
PR_kwDODunzps5wlchX
6,920
[WebDataset] Add `.pth` support for torch tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Ok, I know it was my fault. I didn't add the argument `--use-external-format` (gpt2-xl is more than 2GB)\r\nActually I had to open the convert_graph_to_onnx.py file and read each argument's description\r\nThanks again, I'm closing the issue now." ]
"2024-05-26T11:12:07"
"2024-05-27T09:11:17"
"2024-05-27T09:04:54"
MEMBER
null
In this PR I add support for `.pth`, but with `weights_only=True` to disallow the use of pickle.
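For illustration only (not the PR's actual code), decoding a `.pth` payload with `weights_only=True` might look like the sketch below; with that flag, `torch.load` restricts unpickling to tensor data rather than arbitrary Python objects:

```python
import io

import torch


def decode_pth(data: bytes):
    # weights_only=True blocks arbitrary pickled objects, so a malicious
    # .pth inside a WebDataset shard cannot execute code at load time.
    return torch.load(io.BytesIO(data), weights_only=True)
```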
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6920/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6920", "html_url": "https://github.com/huggingface/datasets/pull/6920", "diff_url": "https://github.com/huggingface/datasets/pull/6920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6920.patch", "merged_at": "2024-05-27T09:04:54" }
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
https://api.github.com/repos/huggingface/datasets/issues/6919/events
https://github.com/huggingface/datasets/issues/6919
2,315,618,993
I_kwDODunzps6KBYqx
6,919
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "login": "juanqui", "id": 67964, "node_id": "MDQ6VXNlcjY3OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juanqui", "html_url": "https://github.com/juanqui", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "organizations_url": "https://api.github.com/users/juanqui/orgs", "repos_url": "https://api.github.com/users/juanqui/repos", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "received_events_url": "https://api.github.com/users/juanqui/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=h1) Report\n> Merging [#6919](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6919/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6919 +/- ##\n==========================================\n- Coverage 80.02% 79.82% -0.21% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22876 22818 -58 \n- Misses 5710 5768 +58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-60.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=footer). Last update [dfa10a4...252c784](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-24T14:59:45"
"2024-05-24T14:59:45"
null
NONE
null
### Describe the bug

I wrote a notebook to load an existing dataset, process it, and upload it as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with:

```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11)

 47 |       - 4
 48 |       - 4
 49 |       - 8
 50 |     - !!binary |
----------------^
 51 |       TwAAAA==
 52 |   '1': !!python/object/apply:nump ...
```

My dataset has a `train` and `validation` dataset. These are the features:

```
{'c1': Value(dtype='string', id=None),
 'c2': Value(dtype='string', id=None),
 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}],
 'c4': Value(dtype='string', id=None),
 'c5': Value(dtype='string', id=None),
 'c6': Value(dtype='string', id=None),
 'c7': Value(dtype='string', id=None),
 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None),
 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```

This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with:

```
ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
```

### Steps to reproduce the bug

1. Start with any token classification dataset.
2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`.
3. Cast the label column from `Sequence` to `Sequence(ClassLabel)` with:
```
labels = ['O', 'B-TEST', 'I-TEST']
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```
4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")`

### Expected behavior

I expected `push_to_hub` to successfully push my dataset to the hub without error.

### Environment info

Python 3.11.9
datasets==2.19.1
transformers==4.41.1
PyYAML==6.0.1
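The `!!python/object/apply:nump...` tag in the error hints that numpy objects, rather than plain Python types, ended up in the dataset metadata. One hedged guess at a workaround is to coerce the label names to built-in `str` before casting; `labels` below stands in for whatever iterable the reporter used:

```python
from datasets import ClassLabel, Sequence

# Assumption: numpy scalars in `labels` serialize into the README YAML as
# python-specific tags, while plain strings serialize cleanly.
names = [str(label) for label in labels]
ds = ds.cast_column("labels", Sequence(ClassLabel(names=names)))
```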
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
https://api.github.com/repos/huggingface/datasets/issues/6918/events
https://github.com/huggingface/datasets/issues/6918
2,315,322,738
I_kwDODunzps6KAQVy
6,918
NonMatchingSplitsSizesError when using data_dir
{ "login": "srehaag", "id": 86664538, "node_id": "MDQ6VXNlcjg2NjY0NTM4", "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srehaag", "html_url": "https://github.com/srehaag", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "organizations_url": "https://api.github.com/users/srehaag/orgs", "repos_url": "https://api.github.com/users/srehaag/repos", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "received_events_url": "https://api.github.com/users/srehaag/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The `AlbertTokenizer` in `transformers` is a SentencePiece based tokenizer, so it cannot load `vocab.txt`. You could try loading it in `BertTokenizer`, as it seems to be a wordpiece tokenizer vocabulary.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-24T12:43:39"
"2024-05-28T12:41:22"
null
NONE
null
### Describe the bug

Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories, whereas the recorded split is calculated based on the data in the directory specified using the data_dir argument. This is recent behavior: until the past few weeks, loading with the data_dir argument worked without any issue.

### Steps to reproduce the bug

Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp

The dataset contains two directories "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table.

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")
```

Generates:

```
---------------------------------------------------------------------------
NonMatchingSplitsSizesError               Traceback (most recent call last)
Cell In[3], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")

File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2606     return builder_instance.as_streaming_dataset(split=split)
   2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
   2610     download_config=download_config,
   2611     download_mode=download_mode,
   2612     verification_mode=verification_mode,
   2613     num_proc=num_proc,
   2614     storage_options=storage_options,
   2615 )
   2617 # Build dataset for splits
   2618 keep_in_memory = (
   2619     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   2620 )

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
   1025 if num_proc is not None:
   1026     prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
   1028     dl_manager=dl_manager,
   1029     verification_mode=verification_mode,
   1030     **prepare_split_kwargs,
   1031     **download_and_prepare_kwargs,
   1032 )
   1033 # Sync info
   1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
   1137 dl_manager.manage_extracted_files()
   1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1140     verify_splits(self.info.splits, split_dict)
   1142 # Update the info object with the splits.
   1143 self.info.splits = split_dict

File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits)
     95 bad_splits = [
     96     {"expected": expected_splits[name], "recorded": recorded_splits[name]}
     97     for name in expected_splits
     98     if expected_splits[name].num_examples != recorded_splits[name].num_examples
     99 ]
    100 if len(bad_splits) > 0:
--> 101     raise NonMatchingSplitsSizesError(str(bad_splits))
    102 logger.info("All the splits matched successfully.")

NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}]
```

__________

By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message:

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp")
```

### Expected behavior

Should load the 5 x 2 table from data1/train.parquet without error message.

### Environment info

Used Codespaces to simplify environment (see details below), but bug is present across various configurations.

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
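Until the expected/recorded mismatch itself is fixed, one possible interim workaround (an assumption, not an endorsed fix) is to relax verification so `verify_splits` is skipped:

```python
from datasets import load_dataset

# "no_checks" disables split-size verification, so the stale expected
# split sizes are never compared against the data_dir subset.
dataset = load_dataset("srehaag/hf-bug-temp", data_dir="data1",
                       verification_mode="no_checks")
```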
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
https://api.github.com/repos/huggingface/datasets/issues/6917/events
https://github.com/huggingface/datasets/issues/6917
2,314,683,663
I_kwDODunzps6J90UP
6,917
WinError 32 The process cannot access the file during load_dataset
{ "login": "elwe-2808", "id": 56682168, "node_id": "MDQ6VXNlcjU2NjgyMTY4", "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elwe-2808", "html_url": "https://github.com/elwe-2808", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "repos_url": "https://api.github.com/users/elwe-2808/repos", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@mfuntowicz - Since T5 relies on google's sentencepiece tokenizer for now, can we do anything against it before our own sentencepiece tokenizer is implemented? ", "Verified that this is a problem with the original T5 sentencepience tokenizer. Opened an issue with the Google's T5 repository. https://github.com/google-research/text-to-text-transfer-transformer/issues/390", "Closing this issue , quoting from T5 github issue\r\n> > { is OOV because we intentionally removed any pages with { or } from C4 to avoid pre-training on anything other than natural language. So, it gets encoded to ??. SentencePiece has a byte fallback feature but it was not available when we trained our sentencepiece model." ]
"2024-05-24T07:54:51"
"2024-05-24T07:54:51"
null
NONE
null
### Describe the bug

When I try to load opus_books from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation)):

```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```

I get an error:

`PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'`

<details><summary>Full stacktrace</summary>
<p>

```python
AttributeError                            Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1857 _time = time.time()
-> 1858 for _, table in generator:
   1859     if max_shard_size is not None and writer._num_bytes > max_shard_size:

File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files)
    58 def _generate_tables(self, files):
---> 59     schema = self.config.features.arrow_schema if self.config.features is not None else None
    60     if self.config.features is not None and self.config.columns is not None:

AttributeError: 'list' object has no attribute 'arrow_schema'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1881 num_shards = shard_id + 1
-> 1882 num_examples, num_bytes = writer.finalize()
   1883 writer.close()

File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)
    583 # If schema is known, infer features even if no examples were written
--> 584 if self.pa_writer is None and self.schema:
...
--> 627     os.unlink(fullname)
    628 except OSError:
    629     onerror(os.unlink, fullname, sys.exc_info())

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
```

</p>
</details>

### Steps to reproduce the bug

Just execute these lines:

```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```

### Expected behavior

I expect the dataset to be loaded without any errors.

### Environment info

| Package | Version |
|---------|---------|
| transformers | 4.37.2 |
| python | 3.9.19 |
| pytorch | 2.3.0 |
| datasets | 2.12.0 |
| arrow | 1.2.3 |

I am using Conda on Windows 11.
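Note the first frame: `self.config.features.arrow_schema` fails because a plain list has no `arrow_schema`, which suggests the `features=["id", "translation"]` argument (a list of column names, where a `datasets.Features` object is expected) is the trigger rather than Windows file locking itself. A sketch of the load without that argument; the schema is then inferred from the Parquet files:

```python
from datasets import load_dataset

# Omitting `features` avoids passing a list where a Features object is
# expected; the columns come from the dataset's own schema.
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr")
```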
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
https://api.github.com/repos/huggingface/datasets/issues/6916/events
https://github.com/huggingface/datasets/issues/6916
2,311,675,564
I_kwDODunzps6JyV6s
6,916
```push_to_hub()``` - Prevent Automatic Generation of Splits
{ "login": "jetlime", "id": 29337128, "node_id": "MDQ6VXNlcjI5MzM3MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jetlime", "html_url": "https://github.com/jetlime", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "organizations_url": "https://api.github.com/users/jetlime/orgs", "repos_url": "https://api.github.com/users/jetlime/repos", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "received_events_url": "https://api.github.com/users/jetlime/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I can confirm it was previously checking the model weights and re-downloading if the weights had been changed. Investigating.", "This is due to the CDN caching files, with a 24 hour delay. After 24 hours it should download your file, but if you want it now you can use the `use_cdn` flag and set it to `False`. You can see the documentation for this [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L573-L585).", "Thank you for the hint, @LysandreJik. So `from_pretrained(mname, use_cdn=False)`\r\n\r\nBut that might be tricky for end users who won't know that the code base has changed yet the model weights they get are out sync.\r\n\r\nIs there a way to signal CDN to invalidate the cache for some files? It could then be done from the upload util.\r\n\r\n\r\n\r\n", "FWIW, I wrote a one liner to force cache update for the 4 models I'm working at the moment.\r\n```\r\nPYTHONPATH=\"src\" python -c 'from transformers import AutoModel; [AutoModel.from_pretrained(\"stas/fsmt-wmt19-\"+p, use_cdn=False) for p in [\"en-ru\",\"ru-en\",\"en-de\",\"de-en\"]]'\r\n```\r\nI now have that in my script, so I don't need to think about it.", "@LysandreJik, unfortunately this doesn't solve the issue\r\n\r\n`AutoModel.from_pretrained(mname, use_cdn=False)`\r\n\r\nIndeed forces a download of the recently updated model - but then if this flag is no longer used in the application - it still downloads the CDN cached version and ends up using the wrong version.\r\n\r\nSo, basically, this results in 2 copies (different hashes) sitting in the cache dir. \r\n\r\nAnd normal usage w/o using `use_cdn=False` looks up the old version and not the new one. (so things like `run_eval.py` still use the old one)\r\n\r\nThanks.\r\n", "can you run `AutoModel.from_pretrained(mname, use_cdn=False)` in a debugger and check whether the downloaded url is a `https://cdn.huggingface.co` or a `https://s3.amazonaws.com/models.huggingface.co` url?", "I can do that, but I already checked that it downloads the updated model w/ `use_cdn=False`. But then if you run it again w/o `use_cdn=False` it ignores the new download and uses the old model again (if I delete the cached version, it redownloads the old cached version w/o `use_cdn=False` ).", "Oh yeah ok, I see. Can you `run_eval.py` on a local folder path then?", "> Can you `run_eval.py` on a local folder path then?\r\n\r\nYes. Except others can't as they don't have my local copy.\r\n\r\ne.g. @sshleifer wants to eval my PR https://github.com/huggingface/transformers/pull/6940, but now has to wait till tomorrow for CDN to expire (or hack around it).\r\n\r\nLast night I uploaded an experimental model, which proved to be invalid, thought I re-downloaded it OK as it was working OK and made a PR, except I was testing against the non-current cached version, which was a good one.", "Can we please re-open this ticket? It hasn't been resolved", "Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIn our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)", "> Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIt could be done. 
I have a feeling then there will be others.\r\n\r\nPerhaps an alternative solution would be to introduce an env var, that would transparently override cdn cache in any situation w/o needing to change every script? `TRANSFORMERS_USE_CDN=False`?\r\n\r\n> In our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)\r\n\r\nUnderstood!\r\n\r\nHow do you let others onto testing the model files? Putting them on dropbox or something and sharing the link?\r\n", "No, just S3 links!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "https://github.com/huggingface/transformers/pull/8324 should resolve this." ]
"2024-05-22T23:52:15"
"2024-05-23T00:07:53"
"2024-05-23T00:07:53"
NONE
null
### Describe the bug

I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and a training set. How can I prevent the split from happening?

### Steps to reproduce the bug

1. Have an unsplit dataset:

```python
Dataset({
    features: ['input', 'output', 'Attack', '__index_level_0__'],
    num_rows: 944685
})
```

2. Push it to Hugging Face:

```python
dataset.push_to_hub(dataset_name)
```

3. On the Hugging Face dataset repo, the dataset then appears to be split:

![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09)

4. Indeed, when loading the dataset from this repo, the dataset is split into a testing and a training set:

```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```

output:

```
IterableDatasetDict({
    train: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 2
    })
    test: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 1
    })
```

### Expected behavior

The dataset should not be split, as no split was requested.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
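For what it's worth, `Dataset.push_to_hub` uploads the data as a single split ("train" by default), so an extra "test" split on the Hub more likely comes from files left over from an earlier push than from automatic splitting. A sketch, assuming the goal is one explicit split:

```python
# Push everything as one explicit "train" split; any stale split files
# already present in the repo would need to be removed on the Hub itself.
dataset.push_to_hub("Jetlime/NF-CSE-CIC-IDS2018-v2", split="train")
```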
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6915/comments
https://api.github.com/repos/huggingface/datasets/issues/6915/events
https://github.com/huggingface/datasets/pull/6915
2,310,564,961
PR_kwDODunzps5wNIUh
6,915
Validate config name and data_files in packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=h1) Report\n> Merging [#6915](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.01%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6915/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6915 +/- ##\n==========================================\n+ Coverage 77.81% 79.83% +2.01% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23034 +582 \n+ Misses 6401 5819 -582 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... 
and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=footer). Last update [4ebb52a...481baa3](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I run a test with this change on my ubuntu 18.04 with a 2080Ti GPU, tensorflow-gpu 2.2.0:\r\n```\r\nfrom tensorflow.keras.layers import Input, Embedding, Bidirectional, GRU, Dense\r\nfrom tensorflow.keras.models import Model\r\nfrom transformers import TFDistilBertModel\r\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\r\npolicy = mixed_precision.Policy('mixed_float16')\r\nmixed_precision.set_policy(policy)\r\n\r\nbert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\ninputs = Input(shape=(None,), dtype='int32')\r\nbert_out = bert(inputs)[0]\r\noutput = Dense(9, activation='softmax', dtype='float32')(bert_out)\r\nmodel = Model(inputs, output)\r\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\r\nmodel.summary()\r\nx = [[5, 2, 3] * 3] * 100\r\ny = [[1, 2, 3] * 3] * 100\r\nmodel.fit(x=x, y=y, epochs=20, batch_size=16)\r\n```\r\nAnd get error info:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 8, in <module>\r\n bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_utils.py\", line 602, in from_pretrained\r\n model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 615, in call\r\n outputs = self.distilbert(inputs, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 508, in call\r\n tfmr_output = self.transformer(\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 401, in call\r\n layer_outputs = layer_module(hidden_state, attn_mask, head_mask[i], output_attentions, training=training)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 355, in call\r\n ffn_output = self.ffn(sa_output, training=training) # (bs, seq_length, dim)\r\n File 
\"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 304, in call\r\n x = self.activation(x)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py\", line 420, in call\r\n return self.activation(inputs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 79, in gelu\r\n cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 984, in binary_op_wrapper\r\n return func(x, y, name=name)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 1081, in _truediv_python3\r\n raise TypeError(\"x and y must have the same dtype, got %r != %r\" %\r\nTypeError: x and y must have the same dtype, got tf.float16 != tf.float32\r\n```\r\nI made a modification to L299:\r\n`self.activation = (\r\n tf.keras.layers.Activation(gelu, dtype='float32') if config.activation == \"gelu\" else tf.keras.activations.relu\r\n )`\r\nAnd then the model began to train, however the loss don't decrease and the accuracy is always 0:\r\n```\r\n7/7 [==============================] - 0s 28ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 29ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 30ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\n```\r\n\r\nI have trid this code in float32 precision, and it works. \r\n```\r\nEpoch 1/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.5418 - accuracy: 0.2800\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.2452 - accuracy: 0.3356\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 1.1438 - accuracy: 0.3267\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.1219 - accuracy: 0.3400\r\n```", "@xuxingya , the accuracy not improved during training is due to a line \r\n\r\n > scores = scores - 1e30 * (1.0 - mask)\r\n\r\nwhile `1e30` with `half precision` will cause `nan` values. I am still trying to figure out a way to deal with it.", "@xuxingya Would you mind to run the test on your side again, please? I tested it with your example, and it is fine now.", "@chiapas Yes, I run the test and now it's fine." ]
"2024-05-22T13:36:33"
"2024-05-22T15:02:04"
null
MEMBER
null
Validate the config attributes `name` and `data_files` in packaged modules by making the derived config classes call their parent `__post_init__` method.

Note that the parent `BuilderConfig` already validates its `name` and `data_files` attributes in its own `__post_init__` method:
https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/builder.py#L128-L137
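A minimal sketch of the pattern this PR describes — a derived packaged-module config delegating to the parent's validation (simplified classes for illustration, not the verbatim source):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class BuilderConfig:
    name: str = "default"
    data_files: Optional[Dict[str, List[str]]] = None

    def __post_init__(self):
        # Simplified stand-in for the parent's validation of `name`/`data_files`.
        if self.name is not None and "/" in self.name:
            raise ValueError(f"Bad config name: {self.name}")

@dataclass
class CsvConfig(BuilderConfig):
    sep: str = ","

    def __post_init__(self):
        super().__post_init__()  # the fix: run the parent's validation too
```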
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6915/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6915", "html_url": "https://github.com/huggingface/datasets/pull/6915", "diff_url": "https://github.com/huggingface/datasets/pull/6915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6915.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
https://api.github.com/repos/huggingface/datasets/issues/6914/events
https://github.com/huggingface/datasets/pull/6914
2,310,107,326
PR_kwDODunzps5wLi3e
6,914
Preserve JSON column order and support list of strings field
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=h1) Report\n> Merging [#6914](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6914/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6914 +/- ##\n==========================================\n+ Coverage 77.81% 79.03% +1.21% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 22804 +352 \n+ Misses 6401 6049 -352 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `30.15% <0.00%> (-65.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <0.00%> (+1.61%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=footer). Last update [4ebb52a...408286d](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-22T09:58:54"
"2024-05-22T12:50:31"
null
MEMBER
null
Preserve column order when loading from a JSON file with a list of dicts (or with a field containing a list of dicts). Additionally, support JSON files with a list-of-strings field.

Fix #6913.
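For illustration, the kind of input the list-of-strings part of this change targets — a hypothetical `data.json` whose `texts` key (a name assumed here for the example) holds a plain list of strings:

```python
import json

from datasets import load_dataset

# A JSON file whose "texts" key holds a plain list of strings.
with open("data.json", "w") as f:
    json.dump({"texts": ["a", "b", "c"]}, f)

# With this fix, pointing the JSON builder at that field is expected to load
# instead of erroring out.
ds = load_dataset("json", data_files="data.json", field="texts")
```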
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914", "html_url": "https://github.com/huggingface/datasets/pull/6914", "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
https://api.github.com/repos/huggingface/datasets/issues/6913/events
https://github.com/huggingface/datasets/issues/6913
2,309,605,889
I_kwDODunzps6JqcoB
6,913
Column order is nondeterministic when loading from JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Yes, this isn't an issue, this is the intended behavior. It's the standard behavior with Sphinx/ReadTheDocs. You can see a similar example with the [PyTorch docs](https://pytorch.org/docs/stable/tensors.html)." ]
"2024-05-22T05:30:14"
"2024-05-22T05:31:10"
null
MEMBER
null
As reported by @meg-huggingface, the order of the JSON object keys is not preserved when loading a dataset from a JSON file with a list of objects.

For example, when loading a JSON file with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],

the resulting dataset may have columns:
- [ID, Topic, Language], or
- [Topic, Language, ID], or
- [Topic, ID, Language],...

This issue is caused by the use of a Python set (which does not preserve insertion order):
https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168

introduced in
- #5772
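A small illustration of the root cause, plus an order-preserving way to deduplicate keys (a sketch of the idea, not the exact patch applied in #6914):

```python
keys = ["ID", "Language", "Topic"]

# A set discards insertion order, so iterating over it yields the columns
# in an arbitrary order:
print(set(keys))                   # e.g. {'Topic', 'ID', 'Language'}

# dict keys preserve insertion order (Python 3.7+), so deduplicating with
# dict.fromkeys keeps the original column order:
print(list(dict.fromkeys(keys)))   # ['ID', 'Language', 'Topic']
```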
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
https://api.github.com/repos/huggingface/datasets/issues/6912/events
https://github.com/huggingface/datasets/issues/6912
2,309,365,961
I_kwDODunzps6JpiDJ
6,912
Add MedImg for streaming
{ "login": "lhallee", "id": 72926928, "node_id": "MDQ6VXNlcjcyOTI2OTI4", "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhallee", "html_url": "https://github.com/lhallee", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "organizations_url": "https://api.github.com/users/lhallee/orgs", "repos_url": "https://api.github.com/users/lhallee/repos", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "received_events_url": "https://api.github.com/users/lhallee/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi, are you sure your issue comes from the tokenizer? If you encode your text using `encode_plus` and `batch_encode_plus`, do you see a difference in the tokens generated?", "I only use encode_plus and batch_encode_plus and call model inference. I do not think the model inference is the problem as you see in the function calls. so I think it is coming from encode_plus and batch_encode_plus. Regarding your question, I see that that batch_encode_plus add ones at the end of the list \" 1, 1, 1, 1, 1, 1]\". and I thought this is this difference may be a reason for the problem.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-22T00:55:30"
"2024-05-22T19:19:58"
null
NONE
null
### Feature request

Host the MedImg dataset (similar to ImageNet, but for biomedical images).

### Motivation

There is a clear need for biomedical image foundation models and large-scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.

### Your contribution

MedImg can be found [here](https://www.cuilab.cn/medimg/#).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6911/comments
https://api.github.com/repos/huggingface/datasets/issues/6911/events
https://github.com/huggingface/datasets/pull/6911
2,308,152,711
PR_kwDODunzps5wE2ah
6,911
Remove dead code for non-dict data_files from packaged modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=h1) Report\n> Merging [#6911](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6911/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6911 +/- ##\n==========================================\n+ Coverage 77.81% 80.06% +2.25% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23102 +650 \n+ Misses 6401 5751 -650 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=footer). Last update [4ebb52a...87055d8](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-21T12:10:24"
"2024-05-23T08:05:58"
"2024-05-23T07:59:57"
MEMBER
null
Remove dead code for non-dict `data_files` from packaged modules.

Since the merge of:
- #2986

the builders' variable `self.config.data_files` is always a dict, which makes the `isinstance` check against `(str, list, tuple)` dead code.
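The shape of the dead branch, sketched as a standalone function (simplified — `normalize` is a stand-in name, not the actual method in the source):

```python
from typing import Dict, List, Union

def normalize(data_files: Union[str, List[str], Dict[str, List[str]]]) -> dict:
    # The branch this PR removes: since #2986, the `data_files` reaching the
    # packaged modules is always a dict, so this normalization never runs.
    if isinstance(data_files, (str, list, tuple)):
        files = [data_files] if isinstance(data_files, str) else list(data_files)
        data_files = {"train": files}
    return data_files

print(normalize({"train": ["a.csv"]}))  # dicts always pass through unchanged
```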
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6911/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6911", "html_url": "https://github.com/huggingface/datasets/pull/6911", "diff_url": "https://github.com/huggingface/datasets/pull/6911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6911.patch", "merged_at": "2024-05-23T07:59:57" }
https://api.github.com/repos/huggingface/datasets/issues/6910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
https://api.github.com/repos/huggingface/datasets/issues/6910/events
https://github.com/huggingface/datasets/pull/6910
2,307,570,084
PR_kwDODunzps5wC2An
6,910
Fix wrong type hints in data_files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-21T07:41:09"
"2024-05-23T06:04:05"
"2024-05-23T05:58:05"
MEMBER
null
Fix wrong type hints in `data_files` introduced in:
- #6493
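For context, a hedged sketch of the kind of hint a `data_files` parameter needs — a single path, a list of paths, or a split-to-paths mapping (illustrative only; the exact signatures fixed here are in the PR diff):

```python
from typing import Dict, List, Optional, Union

# Roughly: a path, a list of paths, or a split-name -> path(s) mapping.
DataFilesHint = Optional[Union[str, List[str], Dict[str, Union[str, List[str]]]]]

def load(data_files: DataFilesHint = None) -> None:
    ...
```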
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6910", "html_url": "https://github.com/huggingface/datasets/pull/6910", "diff_url": "https://github.com/huggingface/datasets/pull/6910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6910.patch", "merged_at": "2024-05-23T05:58:05" }
https://api.github.com/repos/huggingface/datasets/issues/6909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6909/comments
https://api.github.com/repos/huggingface/datasets/issues/6909/events
https://github.com/huggingface/datasets/pull/6909
2,307,508,120
PR_kwDODunzps5wCoiE
6,909
Update requests >=2.32.1 to fix vulnerability
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I personally wouldn't like having a pre-commit hook change all my commits without me being able to see the end result.\r\nOn my setup, I have a pre-push hook that aborts a push if make quality fails. I think if we had an install script, we could handle both options?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi! bring back this because I think in suggest pre-commit instead of `make ...`\r\n\r\nWith the pre-commit, we can see the results/modifications, like by example:\r\n\r\n`git add .`\r\n`git commit -m \"any\"` **this will run the pre-commit**\r\n- if everything it's ok at the pre-commit pipeline, the commit will be created\r\n- else if he modifies something (like black or style hook) he will not create the commit and change the files\r\n - when this occurs, we can see with git diff what the pre-commit change, or can just use the `--show-diff-on-failure` flag when running pre-commit.\r\n\r\nI think that doesn't need everybody use pre-commit, can use both option (the actual format with running manually `make ...` and also using pre-commit) – but maybe don't make sense because will duplicate things? \r\n\r\nA little setup for pre-commit, i have tested here:\r\n\r\nadd `.pre-commit-config.yaml` - \r\n```yml\r\nrepos:\r\n- repo: https://github.com/psf/black\r\n rev: 22.1.0\r\n hooks:\r\n - id: black\r\n- repo: https://github.com/pycqa/isort\r\n rev: 5.10.1\r\n hooks:\r\n - id: isort\r\n name: isort (python)\r\n- repo: https://github.com/PyCQA/flake8\r\n rev: 4.0.1\r\n hooks:\r\n - id: flake8\r\n- repo: local\r\n hooks:\r\n - id: autogenerate_code\r\n name: autogenerate_code\r\n entry: python setup.py deps_table_update\r\n language: python\r\n types: [python]\r\n pass_filenames: false\r\n - id: extra_style_checks\r\n name: extra_style_checks\r\n entry: make extra_style_checks\r\n language: system\r\n```\r\nNote:\r\n - The hooks _autogenerate_code_ and _extra_style_checks_, can be call using the make command or running the python.\r\n\r\nInstall pre-commit:\r\n`pre-commit install`\r\n\r\nModify src/transformers/activations.py:\r\n```diff\r\n@@ -31,7 +31,8 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n+ math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n```console\r\n$ git add -u\r\n$ git commit -m \"test pre-commit pipeline\"\r\n\r\nblack....................................................................Failed\r\n- hook id: black\r\n- files were modified by this hook\r\n\r\nreformatted src/transformers/activations.py\r\n\r\nAll done! 
✨ 🍰 ✨\r\n1 file reformatted.\r\n\r\nisort (python)...........................................................Passed\r\nflake8...................................................................Passed\r\nautogenerate_code........................................................Passed\r\nextra_style_checks.......................................................Passed\r\n\r\n$ git status\r\nOn branch master\r\nYour branch is up to date with 'origin/master'.\r\n\r\nChanges to be committed:\r\n (use \"git restore --staged <file>...\" to unstage)\r\n modified: src/transformers/activations.py\r\n\r\nChanges not staged for commit:\r\n (use \"git add <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n modified: src/transformers/activations.py\r\n\r\n$ git diff\r\n--- a/src/transformers/activations.py\r\n+++ b/src/transformers/activations.py\r\n@@ -31,8 +31,7 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n \r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n- math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n\r\n\r\nto show git diff automatically after the pre-commit can add:\r\n```yml\r\n- repo: local\r\n hooks:\r\n - id: git-diff\r\n name: git diff\r\n entry: git diff --exit-code\r\n language: system\r\n pass_filenames: false\r\n always_run: true\r\n```\r\n", "Even though I originally created this thread 1.5 years later I now agree with @sgugger, that I don't want format changes done while pushing - I need to see what has been changed since sometimes the autoformatter messes things up badly and I need to rewrite things to make the end result readable.\r\n\r\nIf this can be done as an option and not a requirement then I'm not against it, but there needs to be a way to validate/reformat files before git is involved.\r\n\r\nBTW, `precommit` can be run manually as well and not via git, which doesn't require `pre-commit install`:\r\n\r\n```\r\npre-commit run --all-files\r\n```\r\n\r\nAnd we have 2 ways to reformat files: `fixup` (fast - only modified files) - `style` (slow)", "yes use pre-commit don't make sense if does not want to always run the pipeline...\r\n\r\nAbout the `fixup` and `style`, i think can be done equal... by default pre-commit will run just in modified files (files at the commit) and if wants to run for all files can do as you shows.\r\nFor me, by default, i think makes sense always just run at modified files. And if the autoformatter messes things we can see, and if we prefer not to use some hook (like the autoformatter that have messed up something), by example run again with `SKIP=black ...`\r\n\r\nAnd the pre-commit tool will not let the commit be created if something fails, if the dev wants “force” the failed hook will need to add the `SKIP=hook ...` before the commit command", "(i personally agree with @sgugger that local hooks are best left as user-level tooling)" ]
"2024-05-21T07:11:20"
"2024-05-21T07:45:58"
"2024-05-21T07:38:25"
MEMBER
null
Update `requests` to >=2.32.1 to fix a security vulnerability.
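A small sanity-check sketch for environments that pin dependencies manually (assumes the `packaging` helper is installed; the exact pin change lives in the PR diff):

```python
from importlib.metadata import version

from packaging.version import Version

# Verify the installed requests is at or above the patched release.
assert Version(version("requests")) >= Version("2.32.1"), "update requests"
```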
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6909/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6909", "html_url": "https://github.com/huggingface/datasets/pull/6909", "diff_url": "https://github.com/huggingface/datasets/pull/6909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6909.patch", "merged_at": "2024-05-21T07:38:25" }
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
https://api.github.com/repos/huggingface/datasets/issues/6908/events
https://github.com/huggingface/datasets/issues/6908
2,304,958,116
I_kwDODunzps6JYt6k
6,908
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "login": "guch8017", "id": 38173059, "node_id": "MDQ6VXNlcjM4MTczMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guch8017", "html_url": "https://github.com/guch8017", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "organizations_url": "https://api.github.com/users/guch8017/orgs", "repos_url": "https://api.github.com/users/guch8017/repos", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "received_events_url": "https://api.github.com/users/guch8017/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=h1) Report\n> Merging [#6908](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.71%`.\n> The diff coverage is `87.71%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6908/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6908 +/- ##\n==========================================\n+ Coverage 80.23% 80.95% +0.71% \n==========================================\n Files 161 164 +3 \n Lines 30119 30925 +806 \n==========================================\n+ Hits 24167 25035 +868 \n+ Misses 5952 5890 -62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `26.98% <20.00%> (-0.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <86.76%> (ø)` | |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `97.67% <97.67%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.97% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.87% <100.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=footer). Last update [0f360d3...8c684cc](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome! The model seems quite complex so I didn't really understand all the functionality. \r\nA couple of things from my side:\r\n\r\n1) IMO, it's super useful to have hard coded integration tests in the test file which makes the model a lot easier to maintain (every change can quickly be checked by making sure the model stays mathematically equivalent).\r\n\r\n2) I guess a couple of comments and assert statements would be nice to make the code a bit easier to understand\r\n\r\n3) Personally, I don't like single letter variables. Search replace commands don't work on such variables and it is very difficult to understand what they mean. ", "Thanks for all the comments. I think I replied/addressed all of them except the fast small integration tests, which are going to take a bit more work (starting on this now). Let me know if I missed anything since there are a lot of comments!", "All checkpoints uploaded so I updated the incomplete lists. Also added mention of the model in all indexes, the model summary and the big table of pretrained models (sorry about the diff on that file, Funnel Transformer is one character too long and required to add an extra space on every line).\r\n\r\nShould be good to merge at the beginning of next week!", "@sgugger although you've named the models \"`funnel-base`\", \"`funnel-medium`\" so on so forth, the paper talks about all this in a different format, could a docstring be added saying `funnel-base` is `B4-4-4H768` and same for the rest. If someone wants to replicate the papers' results that would be great.\r\n\r\nedit: my bad, its there in the comments next to the model name, but still would be better in a docstring too. Sorry!\r\n" ]
"2024-05-20T02:43:59"
"2024-05-24T10:58:09"
"2024-05-24T10:58:09"
NONE
null
### Describe the bug

After updating the datasets library to version 2.16+ (tested on 2.16, 2.19.0 and 2.19.1), loading the stas/c4-en-10k dataset with the following code

```python
from datasets import load_dataset

dataset = load_dataset('stas/c4-en-10k')
```

raises a UnicodeDecodeError:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
    raise e1 from None
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```

I found that `fs.open` loads a gzip-compressed file that is then parsed as plain text with a UTF-8 decoder. Decompressing the bytes first recovers the expected content:

```python
import gzip

from huggingface_hub import HfFileSystem

fs = HfFileSystem('https://huggingface.co')
f = fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb")
data = f.read()  # gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = gzip.decompress(data)  # what we want: b'# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```

### Steps to reproduce the bug

1. Install datasets between version 2.16 and 2.19.
2. Use the `datasets.load_dataset` method to load the `stas/c4-en-10k` dataset.

### Expected behavior

The dataset loads normally.

### Environment info

Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19
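A self-contained sketch of the decoding mismatch described above — gzip bytes decoded as UTF-8 fail exactly like the traceback (illustrative payload, not the real file):

```python
import gzip

payload = gzip.compress(b"# coding=utf-8\n# a dataset script")

try:
    payload.decode("utf-8")  # what the loader effectively does
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

print(gzip.decompress(payload).decode("utf-8"))  # correct handling
```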
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
https://api.github.com/repos/huggingface/datasets/issues/6907/events
https://github.com/huggingface/datasets/issues/6907
2,303,855,833
I_kwDODunzps6JUgzZ
6,907
Support the deserialization of JSON Lines files composed of lists
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Results for 1):\r\n\r\n```\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 2.575 \r\nType: multiple - Script: True 500 512 3.898 \r\nType: multiple - Script: True 2500 128 13.173 \r\nType: multiple - Script: True 2500 512 18.263 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: False 500 128 3.733 \r\nType: multiple - Script: False 500 512 3.857 \r\nType: multiple - Script: False 2500 128 19.101 \r\nType: multiple - Script: False 2500 512 19.356 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nFor the smaller sequence length 128 we can see a significant speed-up (~30%) - for the longer sequence length 512, the speed-up is much smaller (and only for the bigger list of inputs).", "Results for 2)\r\n\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 0.819 \r\n Type: batched - Script: True 512 512 3.769 \r\n Type: batched - Script: True 4096 128 6.705 \r\n Type: batched - Script: True 4096 512 26.549 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Script: False 512 128 0.837 \r\nType: batched - Script: False 512 512 3.88 \r\nType: batched - Script: False 4096 128 6.75 \r\nType: batched - Script: False 4096 512 27.162 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nHere no clear speed gains can be seen. ", "I'm not sure I understand all the interactions in the benchmarking framework, but I think in line 9 (non-script model) we should be returning torch.jit.trace(model, sample_input), not the untraced model. And the sample input would have be max_length for it to work. That's were most of the gain comes from.\r\nThen the comparison is between using torch.jit.trace() and torch.jit.script(). Or maybe I'm missing some code that does that elsewhere? \r\n\r\n", "Okey, yeah that makes sense! 
I changed the benchmarking script accordingly and have the following results now: \r\n\r\n1)\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 1.793 \r\nType: multiple - Script: True 500 512 3.628 \r\nType: multiple - Script: True 2500 128 8.774 \r\nType: multiple - Script: True 2500 512 19.471 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Trace: True 500 128 1.83 \r\nType: multiple - Trace: True 500 512 3.783 \r\nType: multiple - Trace: True 2500 128 9.083 \r\nType: multiple - Trace: True 2500 512 20.569 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nand \r\n\r\n2) \r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 1.043 \r\n Type: batched - Script: True 512 512 4.913 \r\n Type: batched - Script: True 4096 128 8.499 \r\n Type: batched - Script: True 4096 512 34.187 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Trace: True 512 128 1.046 \r\nType: batched - Trace: True 512 512 4.916 \r\nType: batched - Trace: True 4096 128 8.042 \r\nType: batched - Trace: True 4096 512 30.874 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\n=> So my understanding is now that `torch.trace(...)` is much more efficient for dynamic input shapes than not using torch.jit at all, but I also don't see how `torch.script(...)` is better than `torch.trace(...)`. If our models are compatible with `torch.trace(...)`, why do we need to have a model that is compatible with `torch.script(...)`? It is definitely more convenient to just call `torch.trace(model)` without having to provide any `input_ids`, but I'm not 100% sure whether it's worth a huge refactoring. \r\n\r\nalso cc @sgugger @LysandreJik ", "We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.", "> We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.\r\n\r\nWas `torch.script()` much faster than `torch.trace()` in your experiments?", "In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. 
This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n\r\nWhat that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n\r\nUpon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n\r\nThere are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n```\r\nTracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n```", "> In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n> \r\n> What that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n> \r\n> Upon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n> \r\n> There are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n> \r\n> ```\r\n> TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> ```\r\n\r\n@sgugger - what are your thoughts on this? ", "I think adding the scriptable layers seems cleaner to make sure everything works right with scripting/tracing. 
Not the approach in this PR but the other linked in a comment (@sbrody18 I don't know if you saw my PR to rebase on master for this branch). It ends up with most changes being helpful to read the code (type annotations and asserts) and a few extra classes for the scriptable layers but not much added code.", "@sgugger I agree - I think the extra benefit of the type and None-checking is really helpful to prevent bugs and makes the code better.\r\nI saw your PR late Friday and didn't have time to look into it. Will try to do so by end of day.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
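For readers skimming this thread, a minimal runnable sketch of the trace-vs-script distinction discussed in the comments above (the module and tensors are illustrative, not taken from the benchmark script):

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow: tracing records only the branch the
        # example input takes, while scripting compiles both branches.
        if x.sum() > 0:
            return x * 2
        return x + 1

m = Gate()
traced = torch.jit.trace(m, torch.ones(3))   # emits a TracerWarning for the branch
scripted = torch.jit.script(m)

neg = -torch.ones(3)
print(traced(neg))    # tensor([-2., -2., -2.]) -- frozen into the x * 2 path
print(scripted(neg))  # tensor([0., 0., 0.])    -- takes the correct x + 1 path
```

This is exactly the "trace might miss a logic branch" point made above: both compiled modules run, but only the scripted one preserves the data-dependent branch.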
"2024-05-18T05:07:23"
"2024-05-18T08:53:28"
null
NONE
null
### Feature request I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields. Essentially, a line in my json lines file used to look like this: ```json {"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""} ``` And now it looks like this: ```json ["","","","","","","",""] ``` This saves 65 bytes per document and allows me to serialise and deserialise documents very quickly via `msgspec`. After making this change, I found that `datasets` was incapable of deserialising my corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries. ### Motivation The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that: > In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script. I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format. ### Your contribution I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
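To make the described layout concrete, here is a minimal sketch (assuming `msgspec`'s `array_like` struct option; the sample values are invented) of decoding one array-per-line record back into named fields:

```python
import msgspec

class Doc(msgspec.Struct, array_like=True):
    # array_like=True makes msgspec encode/decode this struct as a JSON array,
    # matching the array-per-line corpus layout described above.
    version_id: str
    type: str
    jurisdiction: str
    source: str
    citation: str
    url: str
    when_scraped: str
    text: str

decoder = msgspec.json.Decoder(Doc)
line = b'["id-1","decision","federal","src","cit","https://example.com","2024-01-01","Some text"]'
doc = decoder.decode(line)  # positional values are mapped onto the declared fields
print(doc.citation, doc.text)
```

The column order is carried by the struct declaration rather than by per-line keys, which is where the per-document byte savings come from.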
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6906/comments
https://api.github.com/repos/huggingface/datasets/issues/6906/events
https://github.com/huggingface/datasets/issues/6906
2,303,679,119
I_kwDODunzps6JT1qP
6,906
irc_disentangle - Issue with splitting data
{ "login": "eor51355", "id": 114260604, "node_id": "U_kgDOBs96fA", "avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eor51355", "html_url": "https://github.com/eor51355", "followers_url": "https://api.github.com/users/eor51355/followers", "following_url": "https://api.github.com/users/eor51355/following{/other_user}", "gists_url": "https://api.github.com/users/eor51355/gists{/gist_id}", "starred_url": "https://api.github.com/users/eor51355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eor51355/subscriptions", "organizations_url": "https://api.github.com/users/eor51355/orgs", "repos_url": "https://api.github.com/users/eor51355/repos", "events_url": "https://api.github.com/users/eor51355/events{/privacy}", "received_events_url": "https://api.github.com/users/eor51355/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-05-17T23:19:37"
"2024-05-17T23:19:37"
null
NONE
null
### Describe the bug I am trying to access your database through Python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message: ValueError: Instruction "train" corresponds to no data! ### Steps to reproduce the bug import datasets ds = datasets.load_dataset('irc_disentangle') ds ### Expected behavior The data is supposed to load into ds and be accessible as such: ds['train'][1050], ds['train'][1055] ### Environment info I tried Python 3.12 and 3.10
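A first debugging step worth trying here (a guess at a stale-cache cause, not a confirmed fix):

```python
import datasets

# Force a fresh download in case a partial earlier download left an empty
# "train" split in the local cache; the dataset also ships a loading script,
# hence trust_remote_code.
ds = datasets.load_dataset(
    "irc_disentangle",
    download_mode="force_redownload",
    trust_remote_code=True,
)
print(ds["train"][1050])
```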
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6905/comments
https://api.github.com/repos/huggingface/datasets/issues/6905/events
https://github.com/huggingface/datasets/issues/6905
2,303,098,587
I_kwDODunzps6JRn7b
6,905
Extraction protocol for arrow files is not defined
{ "login": "radulescupetru", "id": 26553095, "node_id": "MDQ6VXNlcjI2NTUzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/radulescupetru", "html_url": "https://github.com/radulescupetru", "followers_url": "https://api.github.com/users/radulescupetru/followers", "following_url": "https://api.github.com/users/radulescupetru/following{/other_user}", "gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}", "starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions", "organizations_url": "https://api.github.com/users/radulescupetru/orgs", "repos_url": "https://api.github.com/users/radulescupetru/repos", "events_url": "https://api.github.com/users/radulescupetru/events{/privacy}", "received_events_url": "https://api.github.com/users/radulescupetru/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=h1) Report\n> Merging [#6905](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc?el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6905/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6905 +/- ##\n==========================================\n- Coverage 79.81% 79.69% -0.13% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n- Hits 23029 22994 -35 \n- Misses 5824 5859 +35 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-4.07%)` | :arrow_down: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=footer). Last update [8f2723c...0037bd4](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thx for fixing this!" ]
"2024-05-17T16:01:41"
"2024-05-17T16:01:41"
null
NONE
null
### Describe the bug Passing files with the `.arrow` extension into the data_files argument, at least when `streaming=True`, is very slow. ### Steps to reproduce the bug Basically, it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820). The method first checks some base known extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at this predefined list, I don't see `arrow` in there either, so in the end it returns None: ``` MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = { bytes.fromhex("504B0304"): "zip", bytes.fromhex("504B0506"): "zip", # empty archive bytes.fromhex("504B0708"): "zip", # spanned archive bytes.fromhex("425A68"): "bz2", bytes.fromhex("1F8B"): "gzip", bytes.fromhex("FD377A585A00"): "xz", bytes.fromhex("04224D18"): "lz4", bytes.fromhex("28B52FFD"): "zstd", } ``` ### Expected behavior My expectation is that `arrow` would be in the known lists, so it would return None without going through the magic-number method. ### Environment info datasets 2.19.0
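A sketch of the requested behaviour (the constant and function names loosely mirror `datasets/utils/file_utils.py` but are simplified and hypothetical here):

```python
# If ".arrow" were in the set of known uncompressed extensions, the protocol
# lookup could return early instead of fetching magic bytes for every file.
KNOWN_UNCOMPRESSED_EXTENSIONS = {"txt", "csv", "json", "jsonl", "parquet", "arrow"}

def get_extraction_protocol(urlpath: str) -> str | None:
    extension = urlpath.rsplit(".", 1)[-1].lower()
    if extension in KNOWN_UNCOMPRESSED_EXTENSIONS:
        return None  # plain file: nothing to extract, and no remote read needed
    # ...otherwise fall back to reading the first bytes of the file and matching
    # them against MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL, which costs one remote
    # read per file and is slow on large S3 listings.
    raise NotImplementedError("magic-number fallback elided in this sketch")

print(get_extraction_protocol("s3://bucket/data/shard-00000.arrow"))  # None -> fast path
```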
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6904/comments
https://api.github.com/repos/huggingface/datasets/issues/6904/events
https://github.com/huggingface/datasets/pull/6904
2,302,912,179
PR_kwDODunzps5vzRlD
6,904
Fix decoding multi part extension
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Didn't realize that `postprocess_next_token_scores` mutates its argument." ]
"2024-05-17T14:32:57"
"2024-05-17T14:52:56"
"2024-05-17T14:46:54"
MEMBER
null
E.g., a field named `url.txt` should be treated as text. I also included a small fix to support `.npz` correctly.
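Roughly, the decoding rule being fixed (an illustrative reduction, not the actual patch):

```python
# Only the final suffix should decide how a WebDataset field is decoded, so
# "url.txt" is plain text and "depth.npz" is a numpy archive, rather than
# both being treated as unknown "url"/"depth" modalities.
for field_name in ("url.txt", "depth.npz"):
    extension = field_name.rsplit(".", 1)[-1]
    print(field_name, "->", extension)  # url.txt -> txt, depth.npz -> npz
```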
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6904/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6904", "html_url": "https://github.com/huggingface/datasets/pull/6904", "diff_url": "https://github.com/huggingface/datasets/pull/6904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6904.patch", "merged_at": "2024-05-17T14:46:54" }
https://api.github.com/repos/huggingface/datasets/issues/6903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6903/comments
https://api.github.com/repos/huggingface/datasets/issues/6903/events
https://github.com/huggingface/datasets/issues/6903
2,300,436,053
I_kwDODunzps6JHd5V
6,903
Add the option of saving in parquet instead of arrow
{ "login": "arita37", "id": 18707623, "node_id": "MDQ6VXNlcjE4NzA3NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arita37", "html_url": "https://github.com/arita37", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "organizations_url": "https://api.github.com/users/arita37/orgs", "repos_url": "https://api.github.com/users/arita37/repos", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "received_events_url": "https://api.github.com/users/arita37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=h1) Report\n> Merging [#6903](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/485da7222f7f9ca9854db1a6df027b00d348d017?el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6903/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6903 +/- ##\n==========================================\n+ Coverage 79.30% 79.59% +0.29% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22882 22966 +84 \n+ Misses 5971 5887 -84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <ø> (+0.67%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <ø> (-34.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <ø> (ø)` | |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=footer). Last update [485da72...e8fd79c](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-16T13:35:51"
"2024-05-17T03:40:04"
null
NONE
null
### Feature request In `dataset.save_to_disk('/path/to/save/dataset')`, add the option to save in Parquet format: `dataset.save_to_disk('/path/to/save/dataset', format="parquet")`. ### Motivation Arrow is not used for production big data; only Parquet is. ### Your contribution I can do the testing!
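As a workaround available today, the existing `Dataset.to_parquet` API already writes Parquet (here `imdb` is only a stand-in dataset for illustration):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
ds.to_parquet("imdb-train.parquet")  # writes a single Parquet file via pyarrow

# Reload it later with the packaged parquet builder:
reloaded = load_dataset("parquet", data_files="imdb-train.parquet", split="train")
print(reloaded)
```

This does not preserve the extra metadata that `save_to_disk` stores alongside the Arrow files, which is presumably why the request is for a `format` option on `save_to_disk` itself.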
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6902/comments
https://api.github.com/repos/huggingface/datasets/issues/6902/events
https://github.com/huggingface/datasets/pull/6902
2,300,256,241
PR_kwDODunzps5vqLIv
6,902
Make CLI convert_to_parquet not raise error if no rights to create script branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting! The PR mentioned above should fix all of those." ]
"2024-05-16T12:21:27"
"2024-05-16T12:57:02"
"2024-05-16T12:51:05"
MEMBER
null
Make CLI convert_to_parquet not raise an error if there are no rights to create the "script" branch. Note that before this PR, the error was not critical because it was raised at the end of the script, once all the rest of the steps were already performed. Fix #6901. Related to: - #6809
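A minimal sketch of the behaviour this PR describes (not the literal diff; `dataset_id` and `token` are placeholders for the CLI's arguments):

```python
from huggingface_hub import create_branch
from huggingface_hub.utils import HfHubHTTPError

dataset_id = "org/dataset"  # placeholder
token = "hf_..."            # placeholder write token

try:
    create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
except HfHubHTTPError as err:
    # No write access on third-party repos: warn and continue instead of failing,
    # since the rest of the conversion has already succeeded at this point.
    print(f"Could not create the 'script' branch (skipping): {err}")
```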
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6902/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6902", "html_url": "https://github.com/huggingface/datasets/pull/6902", "diff_url": "https://github.com/huggingface/datasets/pull/6902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6902.patch", "merged_at": "2024-05-16T12:51:04" }
https://api.github.com/repos/huggingface/datasets/issues/6901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6901/comments
https://api.github.com/repos/huggingface/datasets/issues/6901/events
https://github.com/huggingface/datasets/issues/6901
2,300,167,465
I_kwDODunzps6JGcUp
6,901
HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I don't see anything blocking with this. Wdyt @sgugger @julien-c ?", "We can give a warning but then the rest of the method will fail. Are you thinking of aborting the save entirely for models that are not `PretrainedModel`s? Also, why are you not inheriting from `PretrainedModel` in your example? Is there something limiting?\r\n\r\nNote that Trainer is not supposed to be a generic training loop, but we can surely make it a bit more flexible.", "Yes, `Trainer` is not a general loop, but it works for custom models as I've tried. Majority of its parts are generalized. `PreTrainedModel` also inherits from `nn.Module`, so users can do that, although its quite common for users to inherit from `nn.Module` directly. I'm not sure how the method will fail ? We can just add a warning instead of raising a `ValueError`. The reason why I'm saying is that users would want to do more than just what `transformers` provide out of the box (for instance justing using `AutoModel` and not `SequenceClassification` models (I'm seeing a growing interest in using such models). I think `nlp` is heading towards that direction (making everything general). This works fine for all cases, I guess:\r\n```\r\nfrom types import MethodType\r\n\r\ndef _save(self, output_dir: Optional[str] = None):\r\n output_dir = output_dir if output_dir is not None else self.args.output_dir\r\n os.makedirs(output_dir, exist_ok=True)\r\n logger.info(\"Saving model checkpoint to %s\", output_dir)\r\n\r\n torch.save(\r\n {\"model_state_dict\": self.model.state_dict()},\r\n os.path.join(output_dir, \"pytorch_model.bin\"),\r\n )\r\n\r\n # Good practice: save your training arguments together with the trained model\r\n torch.save(self.args, os.path.join(output_dir, \"training_args.bin\"))\r\n\r\ntrainer._save = MethodType(_save, trainer)\r\n```\r\nWhere do you think the approach may not work ? After providing the warning, its upto users if they further want to make changes by overriding this method (they would know that `transformers` is not responsible anymore since its not a `PreTrainedModel`. Current method completely breaks the training due to `ValueError`.\r\nThis is optional, I felt that it would be useful to have. I'll open a PR if you approve.", "`save_pretrained` does more than the method you mention, but we could refactor the code inside to work with all models probably. I don't see any place it uses specific stuff from `PretrainedModel`. The thing we don't want is to add and maintain too generic code, but if it's easy enough I see no objection.\r\n\r\nYou didn't tell me why subclassing `PreTrainedModel` did not work however ;-) That is what I would expect a user building a custom model using transformers to do .", "The `PreTrainedModel` is a generic class amongst all models in `transformers`, all classes pertaining to it comply in terms of the methods it provides and can use functionalities such as `init_weights`, `prune_heads`. They might not work for custom models. For instance, some methods require `.config.` attribute which custom models may not directly have. I guess one can define their custom model to be exactly what `PreTrainedModel` requires them to be (haven't looked into that), but that would be asking users to read through what `PreTrainedModel` expects or maybe specifying in docs. 
It's totally up to you what you expect the users to do in case they use custom models.", "After some internal discussion with @julien-c we will lower the requirement from `PreTrainedModel` to some lower abstractclass/protocol so the user knows exactly what they have to implement for their model to work seamlessly with `Trainer`. I will work on this end of this week beginning of next. ", "Sounds good. I'll look forward to that part then." ]
"2024-05-16T11:40:22"
"2024-05-16T12:51:06"
"2024-05-16T12:51:06"
MEMBER
null
CLI convert_to_parquet cannot create "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status response.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status raise HfHubHTTPError(message, response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696) 403 Forbidden: Forbidden: cannot write to script. Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script. If you are trying to create or update content,make sure you have a token with the `write` role. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6900/comments
https://api.github.com/repos/huggingface/datasets/issues/6900/events
https://github.com/huggingface/datasets/issues/6900
2,298,489,733
I_kwDODunzps6JACuF
6,900
[WebDataset] KeyError with user-defined `Features` when a field is missing in an example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "DistilBERT can support sentence pair-like inputs but does not make use of token type IDs. It detects sentence pairs according to the special tokens. cc @VictorSanh ", "@Yusifu Did you find a solution for this problem? I'm also doing sentence-pair classification (NLI) with Distilbert.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-15T17:48:34"
"2024-05-15T17:48:49"
null
MEMBER
null
reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1 ``` File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} ```
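The KeyError comes from indexing a field that is absent from a given example. One possible shape of a fix — a hedged sketch, not necessarily what was shipped; `field_names` stands in for whichever media fields the builder iterates over:

```python
field_names = ["jpg", "txt"]                              # illustrative fields
example = {"__key__": "000001", "jpg": b"<image bytes>"}  # "txt" is missing here

for field_name in field_names:
    if field_name not in example:
        example[field_name] = None  # keep schema alignment instead of raising KeyError
        continue
    example[field_name] = {
        "path": example["__key__"] + "." + field_name,
        "bytes": example[field_name],
    }
print(example)
```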
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6899/comments
https://api.github.com/repos/huggingface/datasets/issues/6899/events
https://github.com/huggingface/datasets/issues/6899
2,298,059,597
I_kwDODunzps6I-ZtN
6,899
List of dictionary features get standardized
{ "login": "sohamparikh94", "id": 11831521, "node_id": "MDQ6VXNlcjExODMxNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohamparikh94", "html_url": "https://github.com/sohamparikh94", "followers_url": "https://api.github.com/users/sohamparikh94/followers", "following_url": "https://api.github.com/users/sohamparikh94/following{/other_user}", "gists_url": "https://api.github.com/users/sohamparikh94/gists{/gist_id}", "starred_url": "https://api.github.com/users/sohamparikh94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sohamparikh94/subscriptions", "organizations_url": "https://api.github.com/users/sohamparikh94/orgs", "repos_url": "https://api.github.com/users/sohamparikh94/repos", "events_url": "https://api.github.com/users/sohamparikh94/events{/privacy}", "received_events_url": "https://api.github.com/users/sohamparikh94/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hey @wulaoshi - I don't fully understand your question. Could you maybe post such a higher level question on the forum at `discuss.huggingface.co` ? :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2024-05-15T14:11:35"
"2024-05-15T14:11:35"
null
NONE
null
### Describe the bug Hi, I'm trying to create an HF dataset from a list using Dataset.from_list. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with a None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature? ### Steps to reproduce the bug ``` from datasets import Dataset # Define a function to generate a sample with a "feature_1" feature def generate_sample(): # Generate random sample data sample_data = { "text": "Sample text", "feature_1": [] } # Add feature_1 with random keys for this sample feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys sample_data["feature_1"].extend(feature_1) return sample_data # Generate multiple samples num_samples = 10 samples = [generate_sample() for _ in range(num_samples)] # Create a Hugging Face Dataset dataset = Dataset.from_list(samples) dataset[0] ``` ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}``` ### Expected behavior ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}``` ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
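Arrow needs a single schema per column, so dicts under one feature are unioned into one struct type. A hedged workaround sketch is to store each dict as a JSON string and decode at read time, which sidesteps the key union entirely:

```python
import json
from datasets import Dataset

samples = [
    {
        "text": "Sample text",
        # Each dict is serialized independently, so Arrow only sees strings
        # and never unions the keys into one struct type.
        "feature_1": [json.dumps({"key1": "value1"}), json.dumps({"key2": "value2"})],
    }
]
ds = Dataset.from_list(samples)
print([json.loads(s) for s in ds[0]["feature_1"]])
# [{'key1': 'value1'}, {'key2': 'value2'}]
```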
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6898/comments
https://api.github.com/repos/huggingface/datasets/issues/6898/events
https://github.com/huggingface/datasets/pull/6898
2,294,432,108
PR_kwDODunzps5vWJ9v
6,898
Fix YAML error in README files appearing on GitHub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=h1) Report\n> Merging [#6898](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6898/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6898 +/- ##\n==========================================\n+ Coverage 79.61% 80.62% +1.00% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23241 +290 \n+ Misses 5875 5585 -290 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.10% <0.00%> (-3.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=footer). Last update [d822ab6...6b67e49](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-14T05:21:57"
"2024-05-16T14:36:57"
"2024-05-16T14:28:16"
MEMBER
null
Fix YAML error in README files appearing on GitHub. See error message: ![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05) Fix #6897.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6898/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6898", "html_url": "https://github.com/huggingface/datasets/pull/6898", "diff_url": "https://github.com/huggingface/datasets/pull/6898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6898.patch", "merged_at": "2024-05-16T14:28:16" }
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "login": "bghira", "id": 59658056, "node_id": "MDQ6VXNlcjU5NjU4MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bghira", "html_url": "https://github.com/bghira", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "organizations_url": "https://api.github.com/users/bghira/orgs", "repos_url": "https://api.github.com/users/bghira/repos", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "received_events_url": "https://api.github.com/users/bghira/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=h1) Report\n> Merging [#6897](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6897/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6897 +/- ##\n==========================================\n+ Coverage 79.61% 80.39% +0.77% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23174 +223 \n+ Misses 5875 5652 -223 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `57.29% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=footer). Last update [d822ab6...b6c59a1](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
"2024-05-13T17:33:59"
"2024-05-16T14:28:17"
"2024-05-16T14:28:17"
NONE
null
### Describe the bug There is a YAML error at the top of the page, and I don't think it's supposed to be there. ### Steps to reproduce the bug 1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md) 2. Observe a big red error at the top 3. The rest of the document remains functional ### Expected behavior I think the YAML block should be displayed or ignored. ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
null
null