url stringlengths 66 66 | repository_url stringclasses 1
value | labels_url stringlengths 80 80 | comments_url stringlengths 75 75 | events_url stringlengths 73 73 | html_url stringlengths 54 56 | id int64 2.03B 2.11B | node_id stringlengths 18 19 | number int64 27.9k 28.8k | title stringlengths 3 306 | user dict | labels list | state stringclasses 2
values | locked bool 1
class | assignee dict | assignees list | milestone null | comments int64 0 39 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4
values | active_lock_reason null | body stringlengths 19 42.4k ⌀ | reactions dict | timeline_url stringlengths 75 75 | performed_via_github_app null | state_reason stringclasses 3
values | draft bool 2
classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28506/comments | https://api.github.com/repos/huggingface/transformers/issues/28506/events | https://github.com/huggingface/transformers/pull/28506 | 2,081,718,248 | PR_kwDOCUB6oc5kEy0M | 28,506 | Use `weights_only` only if torch >= 1.13 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-15T10:24:24 | 2024-01-18T10:55:31 | 2024-01-18T10:55:30 | COLLABORATOR | null | # What does this PR do?
Fix https://github.com/huggingface/transformers/pull/27282#issuecomment-1887859328
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28506/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28506",
"html_url": "https://github.com/huggingface/transformers/pull/28506",
"diff_url": "https://github.com/huggingface/transformers/pull/28506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28506.patch",
"merged_at": "2024-01-18T10:55:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28505/comments | https://api.github.com/repos/huggingface/transformers/issues/28505/events | https://github.com/huggingface/transformers/issues/28505 | 2,081,624,681 | I_kwDOCUB6oc58ExJp | 28,505 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First... | closed | false | null | [] | null | 5 | 2024-01-15T09:36:10 | 2024-01-24T09:12:15 | 2024-01-24T09:12:15 | CONTRIBUTOR | null | ### Feature request
The auxiliary loss in Mixtral-MoE shouldn't **include the loss from padding tokens**.
### Motivation
I think it is better to change the function
[load_balancing_loss_func](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py#L77) by adding an additional parameter: `attention_mask` and change the implementation inside to remove the loss from padding tokens
### Your contribution
I would be happy to review the PR implemeting this feature ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28505/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28505/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28504/comments | https://api.github.com/repos/huggingface/transformers/issues/28504/events | https://github.com/huggingface/transformers/pull/28504 | 2,081,337,584 | PR_kwDOCUB6oc5kDgar | 28,504 | Allow to train dinov2 with different dtypes like bf16 | {
"login": "StarCycle",
"id": 33491471,
"node_id": "MDQ6VXNlcjMzNDkxNDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/33491471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StarCycle",
"html_url": "https://github.com/StarCycle",
"followers_url": "https://api.github.com/users/StarCycle/followers",
"following_url": "https://api.github.com/users/StarCycle/following{/other_user}",
"gists_url": "https://api.github.com/users/StarCycle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StarCycle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StarCycle/subscriptions",
"organizations_url": "https://api.github.com/users/StarCycle/orgs",
"repos_url": "https://api.github.com/users/StarCycle/repos",
"events_url": "https://api.github.com/users/StarCycle/events{/privacy}",
"received_events_url": "https://api.github.com/users/StarCycle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-15T06:28:43 | 2024-01-17T19:03:09 | 2024-01-17T19:03:08 | CONTRIBUTOR | null | I want to train dinov2 with bf16 but I get the following error in https://github.com/huggingface/transformers/blob/bc72b4e2cdcbc80d5f56731f35dbc9c18b4c8de6/src/transformers/models/dinov2/modeling_dinov2.py#L635:
```
RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same
```
Since the input dtype is torch.float32, the parameter dtype has to be torch.float32...
@LZHgrla and I checked the code of clip vision encoder and found there is an automatic dtype transformation (https://github.com/huggingface/transformers/blob/bc72b4e2cdcbc80d5f56731f35dbc9c18b4c8de6/src/transformers/models/clip/modeling_clip.py#L181-L182).
So I add similar automatic dtype transformation to modeling_dinov2.py.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28504/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28504/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28504",
"html_url": "https://github.com/huggingface/transformers/pull/28504",
"diff_url": "https://github.com/huggingface/transformers/pull/28504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28504.patch",
"merged_at": "2024-01-17T19:03:08"
} |
https://api.github.com/repos/huggingface/transformers/issues/28503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28503/comments | https://api.github.com/repos/huggingface/transformers/issues/28503/events | https://github.com/huggingface/transformers/pull/28503 | 2,081,147,575 | PR_kwDOCUB6oc5kC3Xh | 28,503 | Add sudachi_projection option to BertJapaneseTokenizer | {
"login": "hiroshi-matsuda-rit",
"id": 40782025,
"node_id": "MDQ6VXNlcjQwNzgyMDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/40782025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hiroshi-matsuda-rit",
"html_url": "https://github.com/hiroshi-matsuda-rit",
"followers_url": "https://api.github.com/users/hiroshi-matsuda-rit/followers",
"following_url": "https://api.github.com/users/hiroshi-matsuda-rit/following{/other_user}",
"gists_url": "https://api.github.com/users/hiroshi-matsuda-rit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hiroshi-matsuda-rit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hiroshi-matsuda-rit/subscriptions",
"organizations_url": "https://api.github.com/users/hiroshi-matsuda-rit/orgs",
"repos_url": "https://api.github.com/users/hiroshi-matsuda-rit/repos",
"events_url": "https://api.github.com/users/hiroshi-matsuda-rit/events{/privacy}",
"received_events_url": "https://api.github.com/users/hiroshi-matsuda-rit/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 12 | 2024-01-15T02:47:41 | 2024-01-26T17:30:08 | null | NONE | null | # What does this PR do?
A new feature of SudachiPy v0.6.8 allows normalization of words based on Japanese morphological analysis.
https://github.com/WorksApplications/sudachi.rs/issues/230
This morphology-based normalization functionality, named "projection" in SudachiPy, makes Japanese sub-tokenization more efficient and can improve transformer performance.
Very few changes are required to add `sudachi_projection` option to `BertJapaneseTokenizer`, and the models that do not specify `sudachi_projection` option can be used as before in environments using older versions of SudachiPy.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28503/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28503",
"html_url": "https://github.com/huggingface/transformers/pull/28503",
"diff_url": "https://github.com/huggingface/transformers/pull/28503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28503.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28502/comments | https://api.github.com/repos/huggingface/transformers/issues/28502/events | https://github.com/huggingface/transformers/issues/28502 | 2,081,119,761 | I_kwDOCUB6oc58C14R | 28,502 | Tokenizer should be serializable | {
"login": "hk6an6",
"id": 2327624,
"node_id": "MDQ6VXNlcjIzMjc2MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2327624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hk6an6",
"html_url": "https://github.com/hk6an6",
"followers_url": "https://api.github.com/users/hk6an6/followers",
"following_url": "https://api.github.com/users/hk6an6/following{/other_user}",
"gists_url": "https://api.github.com/users/hk6an6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hk6an6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hk6an6/subscriptions",
"organizations_url": "https://api.github.com/users/hk6an6/orgs",
"repos_url": "https://api.github.com/users/hk6an6/repos",
"events_url": "https://api.github.com/users/hk6an6/events{/privacy}",
"received_events_url": "https://api.github.com/users/hk6an6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-15T02:11:52 | 2024-01-17T06:17:46 | 2024-01-17T06:11:30 | NONE | null | ### System Info
- transformers.__version__: '4.36.1'
- platform: macOS 14.2.1 (23C71)
- Python version: Python 3.11.6
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-distilroberta-base-v1', device='mps')
model.max_seq_length = 384
model.save('my_path')
```
### Expected behavior
`transformers/tokenization_utils_base.py` fails to serialize the local variable `tokenizer_config`. and breaks `model.save`. `model.save` should succeed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28502/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28501/comments | https://api.github.com/repos/huggingface/transformers/issues/28501/events | https://github.com/huggingface/transformers/issues/28501 | 2,081,085,913 | I_kwDOCUB6oc58CtnZ | 28,501 | remote tokenizers trust remote code prompt doesn't not work as expected | {
"login": "mzbac",
"id": 7523197,
"node_id": "MDQ6VXNlcjc1MjMxOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7523197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzbac",
"html_url": "https://github.com/mzbac",
"followers_url": "https://api.github.com/users/mzbac/followers",
"following_url": "https://api.github.com/users/mzbac/following{/other_user}",
"gists_url": "https://api.github.com/users/mzbac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzbac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzbac/subscriptions",
"organizations_url": "https://api.github.com/users/mzbac/orgs",
"repos_url": "https://api.github.com/users/mzbac/repos",
"events_url": "https://api.github.com/users/mzbac/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzbac/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-15T01:33:26 | 2024-01-15T14:42:16 | 2024-01-15T14:42:16 | NONE | null | ### System Info
transformers: 4.36.2
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. use `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B')
2. The tokenizer failed to load and threw an error. The tokenizer class was not found, and it didn't prompt the user to allow trust for remote code.
3. Delete the `tokenizer_class` setting in [config.jso](https://huggingface.co/Qwen/Qwen-1_8B/blob/main/config.json#L30)n and [tokenizer_config.json](https://huggingface.co/Qwen/Qwen-1_8B/blob/main/tokenizer_config.json)
4. After that, when using `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B')`, it prompts the user to trust remote code. However, instead of asking once, it prompts the user to confirm three times.
### Expected behavior
The `AutoTokenizer.from_pretrained` function should prompt the user whether they want to enable trust remote code only once when the user did not pass the `trust_remote_code` parameter. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28501/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28500/comments | https://api.github.com/repos/huggingface/transformers/issues/28500/events | https://github.com/huggingface/transformers/pull/28500 | 2,080,999,703 | PR_kwDOCUB6oc5kCWrW | 28,500 | Log a warning when best model is not loaded | {
"login": "akwako",
"id": 31602350,
"node_id": "MDQ6VXNlcjMxNjAyMzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/31602350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akwako",
"html_url": "https://github.com/akwako",
"followers_url": "https://api.github.com/users/akwako/followers",
"following_url": "https://api.github.com/users/akwako/following{/other_user}",
"gists_url": "https://api.github.com/users/akwako/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akwako/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akwako/subscriptions",
"organizations_url": "https://api.github.com/users/akwako/orgs",
"repos_url": "https://api.github.com/users/akwako/repos",
"events_url": "https://api.github.com/users/akwako/events{/privacy}",
"received_events_url": "https://api.github.com/users/akwako/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-15T00:01:58 | 2024-01-22T03:31:34 | null | NONE | null | Specifying "load_best_model_at_end=True" in TrainingArguments should ensure that the best model is always loaded at end of training. However, this is not always the case: When best_model_checkpoint is None, the best model is not loaded, and the user may be unaware of this behavior.
Add a warning to the log to let the user know when the best model is not loaded at the end of training. Suggest that the user check the "save_strategy" TrainingArguments, as failing to to do so is one possible reason why the best model failed to load.
@muellerzr
@pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28500/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28500",
"html_url": "https://github.com/huggingface/transformers/pull/28500",
"diff_url": "https://github.com/huggingface/transformers/pull/28500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28500.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28499/comments | https://api.github.com/repos/huggingface/transformers/issues/28499/events | https://github.com/huggingface/transformers/issues/28499 | 2,080,927,400 | I_kwDOCUB6oc58CG6o | 28,499 | activation_checkpointing error when using --fsdp | {
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-14T22:39:17 | 2024-01-15T19:40:46 | null | NONE | null | ### System Info
transformers == 4.36.2
pytorch == 2.1.0
### Who can help?
When using deepspeed to enable activation checkpointing, everything goes well. However, when I switch to torchrun with the native pytorch fsdp integrated into the huggingface: https://huggingface.co/docs/transformers/main/main_classes/trainer#transformers.TrainingArguments.fsdp
I can't run the training process properly with the following errors:
```
File "/workspace/training_script.py", line 77, in train_model
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2744, in training_step
self.accelerator.backward(loss)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 1905, in backward
loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1065, in unpack_hook
args = ctx.get_args(ctx.saved_tensors)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 13:
saved metadata: {'shape': torch.Size([1, 3112, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([1, 9336, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
tensor at position 14:
saved metadata: {'shape': torch.Size([1, 3112, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([1, 9336, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
```
The model I used is Llama-2 and I didn't change its forward function and use Trainer to train it. I wonder if there is something wrong with activation_checkpointing (enabling it in the fsdp_config.json) feature used together with --fsdp.
Thank you
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Training Llama using Trainer with the following arguments:
--fsdp shard_grad_op --fsdp_config fsdp_config.json (where activation_checkpointing is set to true)
### Expected behavior
Properly running the training process with memory saved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28499/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28498/comments | https://api.github.com/repos/huggingface/transformers/issues/28498/events | https://github.com/huggingface/transformers/pull/28498 | 2,080,667,469 | PR_kwDOCUB6oc5kBSdl | 28,498 | add dataloader prefetch factor in training args and trainer | {
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-14T10:17:20 | 2024-01-25T13:12:42 | 2024-01-23T15:08:19 | CONTRIBUTOR | null | What does this PR do?
I added an option to the trainer to prefetch batches during data loading.
When training a model with heavy transformations and an iterable dataset, the dataloader might struggle to deliver fast enough for the GPU. I've found that prefetching batches helps to solve this issue.
The option is implemented in torch.utils.data.DataLoader but not in HF Trainer.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28498/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28498",
"html_url": "https://github.com/huggingface/transformers/pull/28498",
"diff_url": "https://github.com/huggingface/transformers/pull/28498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28498.patch",
"merged_at": "2024-01-23T15:08:18"
} |
https://api.github.com/repos/huggingface/transformers/issues/28497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28497/comments | https://api.github.com/repos/huggingface/transformers/issues/28497/events | https://github.com/huggingface/transformers/pull/28497 | 2,080,667,462 | PR_kwDOCUB6oc5kBSdf | 28,497 | Improving Training Performance and Scalability Documentation | {
"login": "HamzaFB",
"id": 24733081,
"node_id": "MDQ6VXNlcjI0NzMzMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/24733081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamzaFB",
"html_url": "https://github.com/HamzaFB",
"followers_url": "https://api.github.com/users/HamzaFB/followers",
"following_url": "https://api.github.com/users/HamzaFB/following{/other_user}",
"gists_url": "https://api.github.com/users/HamzaFB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamzaFB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamzaFB/subscriptions",
"organizations_url": "https://api.github.com/users/HamzaFB/orgs",
"repos_url": "https://api.github.com/users/HamzaFB/repos",
"events_url": "https://api.github.com/users/HamzaFB/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamzaFB/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-14T10:17:19 | 2024-01-16T10:30:27 | 2024-01-16T10:30:27 | CONTRIBUTOR | null | This PR improves the docs.
A strategy for improving Memory Performance for Large Models (Billions of parameters) is PEFT.
Actual documentation does not mention it.
This PR adds PEFT and provides an example as to why this reduces memory needs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28497/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28497",
"html_url": "https://github.com/huggingface/transformers/pull/28497",
"diff_url": "https://github.com/huggingface/transformers/pull/28497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28497.patch",
"merged_at": "2024-01-16T10:30:27"
} |
https://api.github.com/repos/huggingface/transformers/issues/28496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28496/comments | https://api.github.com/repos/huggingface/transformers/issues/28496/events | https://github.com/huggingface/transformers/issues/28496 | 2,080,646,687 | I_kwDOCUB6oc58BCYf | 28,496 | No name 'SiLUActivation' in module 'transformers.activations' | {
"login": "qxpBlog",
"id": 96739096,
"node_id": "U_kgDOBcQfGA",
"avatar_url": "https://avatars.githubusercontent.com/u/96739096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qxpBlog",
"html_url": "https://github.com/qxpBlog",
"followers_url": "https://api.github.com/users/qxpBlog/followers",
"following_url": "https://api.github.com/users/qxpBlog/following{/other_user}",
"gists_url": "https://api.github.com/users/qxpBlog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qxpBlog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qxpBlog/subscriptions",
"organizations_url": "https://api.github.com/users/qxpBlog/orgs",
"repos_url": "https://api.github.com/users/qxpBlog/repos",
"events_url": "https://api.github.com/users/qxpBlog/events{/privacy}",
"received_events_url": "https://api.github.com/users/qxpBlog/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-14T09:10:56 | 2024-01-15T15:42:28 | null | NONE | null | ### System Info
No name 'SiLUActivation' in module 'transformers.activations'
why i meet this problem , my version of transformers is 4.37.0.dev0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers.activations import SiLUActivation
### Expected behavior
get the module SiLUActivation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28496/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28495/comments | https://api.github.com/repos/huggingface/transformers/issues/28495/events | https://github.com/huggingface/transformers/pull/28495 | 2,080,589,832 | PR_kwDOCUB6oc5kBDN2 | 28,495 | improve dev setup comments and hints | {
"login": "4imothy",
"id": 40186632,
"node_id": "MDQ6VXNlcjQwMTg2NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/40186632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4imothy",
"html_url": "https://github.com/4imothy",
"followers_url": "https://api.github.com/users/4imothy/followers",
"following_url": "https://api.github.com/users/4imothy/following{/other_user}",
"gists_url": "https://api.github.com/users/4imothy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4imothy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4imothy/subscriptions",
"organizations_url": "https://api.github.com/users/4imothy/orgs",
"repos_url": "https://api.github.com/users/4imothy/repos",
"events_url": "https://api.github.com/users/4imothy/events{/privacy}",
"received_events_url": "https://api.github.com/users/4imothy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-14T05:24:20 | 2024-01-15T18:36:40 | 2024-01-15T18:36:40 | CONTRIBUTOR | null | # What does this PR do?
Changes 'pip install -e .[dev]' -> \`pip install -e '.[dev]'\` in multiple comments and hints.
New command runs on both *zsh* and *bash*, previously it did not work on *zsh*.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28495/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28495",
"html_url": "https://github.com/huggingface/transformers/pull/28495",
"diff_url": "https://github.com/huggingface/transformers/pull/28495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28495.patch",
"merged_at": "2024-01-15T18:36:40"
} |
https://api.github.com/repos/huggingface/transformers/issues/28494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28494/comments | https://api.github.com/repos/huggingface/transformers/issues/28494/events | https://github.com/huggingface/transformers/pull/28494 | 2,080,434,260 | PR_kwDOCUB6oc5kAlMw | 28,494 | Generate: consolidate output classes | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-13T18:54:34 | 2024-01-15T17:04:12 | 2024-01-15T17:04:09 | MEMBER | null | # What does this PR do?
Does some cleanup that's been on my mind for a while 🧹
We had a bunch of classes that were a copy of each other, named after each internal generation method. This PR consolidates them. As a result, the documentation becomes more concise and with less risk of suffering from incomplete updates 🤗
Full retrocompatibility is kept (and tested)! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28494/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28494",
"html_url": "https://github.com/huggingface/transformers/pull/28494",
"diff_url": "https://github.com/huggingface/transformers/pull/28494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28494.patch",
"merged_at": "2024-01-15T17:04:08"
} |
https://api.github.com/repos/huggingface/transformers/issues/28493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28493/comments | https://api.github.com/repos/huggingface/transformers/issues/28493/events | https://github.com/huggingface/transformers/pull/28493 | 2,080,413,740 | PR_kwDOCUB6oc5kAhZ4 | 28,493 | Generate: fix candidate device placement | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-13T17:52:11 | 2024-01-15T09:30:48 | 2024-01-13T20:31:25 | MEMBER | null | # What does this PR do?
#27775 was merged, and the branch was not synced with #27995 (already on `main`) -- the two branches together result in CI failures. Fortunately, the fix is simple :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28493/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28493",
"html_url": "https://github.com/huggingface/transformers/pull/28493",
"diff_url": "https://github.com/huggingface/transformers/pull/28493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28493.patch",
"merged_at": "2024-01-13T20:31:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28492/comments | https://api.github.com/repos/huggingface/transformers/issues/28492/events | https://github.com/huggingface/transformers/pull/28492 | 2,080,118,727 | PR_kwDOCUB6oc5j_lMO | 28,492 | Fixing Issue #17488. Add changes to make the error thrown consistent in both decode and encode functions of Tokenizer | {
"login": "prasatee",
"id": 142558246,
"node_id": "U_kgDOCH9EJg",
"avatar_url": "https://avatars.githubusercontent.com/u/142558246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasatee",
"html_url": "https://github.com/prasatee",
"followers_url": "https://api.github.com/users/prasatee/followers",
"following_url": "https://api.github.com/users/prasatee/following{/other_user}",
"gists_url": "https://api.github.com/users/prasatee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prasatee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasatee/subscriptions",
"organizations_url": "https://api.github.com/users/prasatee/orgs",
"repos_url": "https://api.github.com/users/prasatee/repos",
"events_url": "https://api.github.com/users/prasatee/events{/privacy}",
"received_events_url": "https://api.github.com/users/prasatee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-13T06:08:26 | 2024-01-30T00:28:44 | null | CONTRIBUTOR | null | **Fixing Issue #17488. Add changes to make the error thrown consistent in both decode and encode functions of Tokenizer**
# What does this PR do?
This PR is with regards to fixing the issue to make both the encode and decode function return the same error when an unexpected argument is passed to the function. Below is the Issue ID and the Issue Title
"_batch_encode_plus() got an unexpected keyword argument 'is_pretokenized' using BertTokenizerFast #17488"
https://github.com/huggingface/transformers/issues/17488
Fixes # 17488
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28492/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28492",
"html_url": "https://github.com/huggingface/transformers/pull/28492",
"diff_url": "https://github.com/huggingface/transformers/pull/28492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28492.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28491/comments | https://api.github.com/repos/huggingface/transformers/issues/28491/events | https://github.com/huggingface/transformers/issues/28491 | 2,080,104,945 | I_kwDOCUB6oc57--Hx | 28,491 | Inconsistent in batch generation results | {
"login": "qlwang25",
"id": 38132016,
"node_id": "MDQ6VXNlcjM4MTMyMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/38132016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qlwang25",
"html_url": "https://github.com/qlwang25",
"followers_url": "https://api.github.com/users/qlwang25/followers",
"following_url": "https://api.github.com/users/qlwang25/following{/other_user}",
"gists_url": "https://api.github.com/users/qlwang25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qlwang25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qlwang25/subscriptions",
"organizations_url": "https://api.github.com/users/qlwang25/orgs",
"repos_url": "https://api.github.com/users/qlwang25/repos",
"events_url": "https://api.github.com/users/qlwang25/events{/privacy}",
"received_events_url": "https://api.github.com/users/qlwang25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-13T05:33:54 | 2024-01-14T12:39:07 | 2024-01-14T12:34:58 | NONE | null | ### System Info
load model code
```
model_path = "../../../pre-trained_models/h2oai-llama2-7b-chat"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
config = LlamaConfig.from_pretrained(model_path)
config.max_length = 512
with init_empty_weights():
model = LlamaForCausalLM._from_config(config, torch_dtype=torch.float16)
model.tie_weights()
model = load_checkpoint_and_dispatch(model, model_path, device_map="auto", no_split_module_classes=["LlamaDecoderLayer"], dtype=torch.float16)
model = model.eval()
model.generation_config = GenerationConfig.from_pretrained(pretrained_model_name=model_path, config_file_name='generation_config.json')
```
generate code
```
prompt1 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt1 += "Review: It is always reliable , never bugged and responds well ." + "\n"
prompt1 += "Label:[[responds, positive]]" + "\n"
prompt1 += "Review: The machine is slow to boot up and occasionally crashes completely ." + "\n"
prompt1 += "Label:[[boot up, negative]]" + "\n"
prompt1 += "Review: Enabling the battery timer is useless ." + "\n"
prompt1 += "Label:"
prompt2 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt2 += "Review: It rarely works and when it does it's incredibly slow ." + "\n"
prompt2 += "Label:[[works, negative]]" + "\n"
prompt2 += "Review: The machine is slow to boot up and occasionally crashes completely ." + "\n"
prompt2 += "Label:[[boot up, negative]]" + "\n"
prompt2 += "Review: Boot time is super fast , around anywhere from 35 seconds to 1 minute ." + "\n"
prompt2 += "Label:"
prompt3 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt3 += "Review: It is always reliable , never bugged and responds well ." + "\n"
prompt3 += "Label:[[responds, positive]]" + "\n"
prompt3 += "Review: It rarely works and when it does it's incredibly slow ." + "\n"
prompt3 += "Label:[[works, negative]]" + "\n"
prompt3 += "Review: Boot time is super fast , around anywhere from 35 seconds to 1 minute ." + "\n"
prompt3 += "Label:"
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer([prompt1, prompt2, prompt3], padding="longest", return_tensors="pt")
padding_len = inputs["input_ids"].size(1)
outputs = model.generate(**inputs, max_length=padding_len + 80, do_sample=False, num_beams=1)
for output in outputs:
pred = tokenizer.decode(output[padding_len:], skip_special_tokens=True)
pred = pred.split("\n")[0]
print(pred)
```
---
When I use ```[prompt1, prompt2, prompt3]``` as input, the result is:
```
Љ [[battery timer, negative]]
Љ [[boot time, positive]]
[[fast, positive]]
```
When I use ```[prompt3, prompt2, prompt1]``` as input, the result is:
```
[[fast, positive]]
. [[boot time, positive]]
[[useless, negative]]
```
Again, when I use ```[prompt3, prompt2, prompt1]``` as input, the result is: (the second result is empty)
```
[[fast, positive]]
Љ [[battery timer, negative]]
```

**Problem**
(1) Why does the prompt produce different results with different inputs (or the same input)?
(2) Why is this character (Љ) generated?
(3) Why do batch generation (batch_size=3) and individual generation (batch_size=1) have different results
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
please see the code above
### Expected behavior
I expect the results by the batch generated to not be affected by the prompt order.
And, each sample should be the same as the results generated separately (batch_size) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28491/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28490/comments | https://api.github.com/repos/huggingface/transformers/issues/28490/events | https://github.com/huggingface/transformers/issues/28490 | 2,080,102,592 | I_kwDOCUB6oc57-9jA | 28,490 | [AutoGPTQ] The notebook tutorial of AutoGPTQ is not working. | {
"login": "DjangoPeng",
"id": 16943353,
"node_id": "MDQ6VXNlcjE2OTQzMzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/16943353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DjangoPeng",
"html_url": "https://github.com/DjangoPeng",
"followers_url": "https://api.github.com/users/DjangoPeng/followers",
"following_url": "https://api.github.com/users/DjangoPeng/following{/other_user}",
"gists_url": "https://api.github.com/users/DjangoPeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DjangoPeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DjangoPeng/subscriptions",
"organizations_url": "https://api.github.com/users/DjangoPeng/orgs",
"repos_url": "https://api.github.com/users/DjangoPeng/repos",
"events_url": "https://api.github.com/users/DjangoPeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/DjangoPeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-13T05:28:58 | 2024-01-16T08:39:40 | null | NONE | null | ### System Info
System Info:
- `transformers` version: 4.37.0.dev0
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.23
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.0
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU): 2.1.2+cu121 (True)
### Who can help?
@SunMarc and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The official notebook introduced by [AutoGPTQ docs](https://huggingface.co/docs/transformers/quantization#autogptq) is not working after upgrading Transformers and dependencies.
I estimate that this should be an incompatible issue caused by the update of `BuilderConfig`. It can be easily reproduced in Google Colab [here](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing).
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-3-f66cf2cc3929>](https://localhost:8080/#) in <cell line: 14>()
12
13 tokenizer = AutoTokenizer.from_pretrained(model_id)
---> 14 quant_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config, device_map='auto')
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
590 builder_config = self.builder_configs.get(config_name)
591 if builder_config is None and self.BUILDER_CONFIGS:
--> 592 raise ValueError(
593 f"BuilderConfig '{config_name}' not found. Available: {list(self.builder_configs.keys())}"
594 )
ValueError: BuilderConfig 'allenai--c4' not found. Available: ['en', 'en.noblocklist', 'en.noclean', 'realnewslike', 'multilingual', 'af', 'am', 'ar', 'az', 'be', 'bg', 'bg-Latn', 'bn', 'ca', 'ceb', 'co', 'cs', 'cy', 'da', 'de', 'el', 'el-Latn', 'en-multi', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'haw', 'hi', 'hi-Latn', 'hmn', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'iw', 'ja', 'ja-Latn', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'lv', 'mg', 'mi', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'ne', 'nl', 'no', 'ny', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'ru-Latn', 'sd', 'si', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'st', 'su', 'sv', 'sw', 'ta', 'te', 'tg', 'th', 'tr', 'uk', 'und', 'ur', 'uz', 'vi', 'xh', 'yi', 'yo', 'zh', 'zh-Latn', 'zu']
```
### Expected behavior
Fix and identify the root cause. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28490/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28489/comments | https://api.github.com/repos/huggingface/transformers/issues/28489/events | https://github.com/huggingface/transformers/pull/28489 | 2,080,052,597 | PR_kwDOCUB6oc5j_WtM | 28,489 | Fixed minor typos | {
"login": "rishit5",
"id": 24509842,
"node_id": "MDQ6VXNlcjI0NTA5ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24509842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishit5",
"html_url": "https://github.com/rishit5",
"followers_url": "https://api.github.com/users/rishit5/followers",
"following_url": "https://api.github.com/users/rishit5/following{/other_user}",
"gists_url": "https://api.github.com/users/rishit5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishit5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishit5/subscriptions",
"organizations_url": "https://api.github.com/users/rishit5/orgs",
"repos_url": "https://api.github.com/users/rishit5/repos",
"events_url": "https://api.github.com/users/rishit5/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishit5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-13T03:54:37 | 2024-01-15T16:45:15 | 2024-01-15T16:45:15 | CONTRIBUTOR | null | # What does this PR do?
Fixed typos in readme files.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Typos fixed in:
1. .circleci/TROUBLESHOOT.md
2. .github/workflows/TROUBLESHOOT.md
3. docs/README.md
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28489/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28489",
"html_url": "https://github.com/huggingface/transformers/pull/28489",
"diff_url": "https://github.com/huggingface/transformers/pull/28489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28489.patch",
"merged_at": "2024-01-15T16:45:15"
} |
https://api.github.com/repos/huggingface/transformers/issues/28488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28488/comments | https://api.github.com/repos/huggingface/transformers/issues/28488/events | https://github.com/huggingface/transformers/issues/28488 | 2,079,974,478 | I_kwDOCUB6oc57-eRO | 28,488 | fine tuning the updated Phi-2 with flash-attn-2 produces very high loss > 2 | {
"login": "abacaj",
"id": 7272343,
"node_id": "MDQ6VXNlcjcyNzIzNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abacaj",
"html_url": "https://github.com/abacaj",
"followers_url": "https://api.github.com/users/abacaj/followers",
"following_url": "https://api.github.com/users/abacaj/following{/other_user}",
"gists_url": "https://api.github.com/users/abacaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abacaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abacaj/subscriptions",
"organizations_url": "https://api.github.com/users/abacaj/orgs",
"repos_url": "https://api.github.com/users/abacaj/repos",
"events_url": "https://api.github.com/users/abacaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/abacaj/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 20 | 2024-01-13T01:48:39 | 2024-02-01T00:35:37 | null | NONE | null | ### System Info
The updated code of phi-2 produces a high loss, I have tried fp16, bf16, deepspeed and fsdp the result is the same -> loss starts at 2 and keeps going higher. Setting `use_flash_attention_2=False` fixes this or using the old phi-2 modeling file.
torch==2.1.2
flash-attn==2.4.2
transformers==4.37.0.dev0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Fine-tune the updated phi-2 model using transformers trainer
### Expected behavior
Loss go down | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28488/timeline | null | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28487/comments | https://api.github.com/repos/huggingface/transformers/issues/28487/events | https://github.com/huggingface/transformers/issues/28487 | 2,079,721,297 | I_kwDOCUB6oc579gdR | 28,487 | add support for custom pipeline | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | 4 | 2024-01-12T21:06:53 | 2024-01-15T18:07:32 | 2024-01-15T18:07:32 | CONTRIBUTOR | null | ### Feature request
is it possible to add a support for custom pipelines.
something like this
```
| - config.json
| - custom_config.py
| - custom_architecture.py
| - custom_pipeline.py
```
### Motivation
i was following this documentation https://huggingface.co/docs/transformers/add_new_pipeline and i can't make it workout
it's also pointing out at the following custom pipeline `sgugger/finetuned-bert-mrpc` which i checked all the previous commits and nothing seems to workout.
also it should be a good idea since adding support to all the models directly to the transformers library will bloat the library so i hope this might be taken into consideration.
### Your contribution
i will help out if possible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28487/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28486/comments | https://api.github.com/repos/huggingface/transformers/issues/28486/events | https://github.com/huggingface/transformers/pull/28486 | 2,079,348,302 | PR_kwDOCUB6oc5j884U | 28,486 | [ASR Pipe] Update init to set model type and subsequently call parent init method | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-12T17:30:37 | 2024-01-18T16:12:01 | 2024-01-18T16:11:50 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28162 by overriding the init method of the ASR pipeline class. We first set the model type, then call the parent init method.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28486/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28486",
"html_url": "https://github.com/huggingface/transformers/pull/28486",
"diff_url": "https://github.com/huggingface/transformers/pull/28486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28486.patch",
"merged_at": "2024-01-18T16:11:50"
} |
https://api.github.com/repos/huggingface/transformers/issues/28485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28485/comments | https://api.github.com/repos/huggingface/transformers/issues/28485/events | https://github.com/huggingface/transformers/pull/28485 | 2,079,326,176 | PR_kwDOCUB6oc5j838y | 28,485 | [Whisper Tok] Move token ids to CPU when computing offsets | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-12T17:18:26 | 2024-01-18T16:12:18 | 2024-01-18T16:12:15 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28097 by moving token ids in pytorch on GPU to the CPU before converting to numpy.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28485/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28485",
"html_url": "https://github.com/huggingface/transformers/pull/28485",
"diff_url": "https://github.com/huggingface/transformers/pull/28485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28485.patch",
"merged_at": "2024-01-18T16:12:15"
} |
https://api.github.com/repos/huggingface/transformers/issues/28484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28484/comments | https://api.github.com/repos/huggingface/transformers/issues/28484/events | https://github.com/huggingface/transformers/pull/28484 | 2,079,242,765 | PR_kwDOCUB6oc5j8lU4 | 28,484 | Dataloader prefetch batch | {
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-12T16:34:00 | 2024-01-14T10:18:18 | 2024-01-14T10:15:49 | CONTRIBUTOR | null | # What does this PR do?
I added an option to the trainer to prefetch batches during data loading.
When training a model with heavy transformations and an iterable dataset, the dataloader might struggle to deliver fast enough for the GPU. I've found that prefetching batches helps to solve this issue.
The option is implemented in `torch.utils.data.DataLoader` but not in HF Trainer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28484/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28484",
"html_url": "https://github.com/huggingface/transformers/pull/28484",
"diff_url": "https://github.com/huggingface/transformers/pull/28484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28484.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28483/comments | https://api.github.com/repos/huggingface/transformers/issues/28483/events | https://github.com/huggingface/transformers/pull/28483 | 2,079,209,690 | PR_kwDOCUB6oc5j8d9P | 28,483 | TF: purge `TFTrainer` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-12T16:16:41 | 2024-01-12T16:56:37 | 2024-01-12T16:56:35 | MEMBER | null | # What does this PR do?
Removes `TFTrainer` and all its traces. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28483/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28483/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28483",
"html_url": "https://github.com/huggingface/transformers/pull/28483",
"diff_url": "https://github.com/huggingface/transformers/pull/28483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28483.patch",
"merged_at": "2024-01-12T16:56:34"
} |
https://api.github.com/repos/huggingface/transformers/issues/28482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28482/comments | https://api.github.com/repos/huggingface/transformers/issues/28482/events | https://github.com/huggingface/transformers/pull/28482 | 2,079,041,827 | PR_kwDOCUB6oc5j74gO | 28,482 | Don't set `finetuned_from` if it is a local path | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-12T15:00:07 | 2024-01-15T10:38:21 | 2024-01-15T10:38:21 | COLLABORATOR | null | # What does this PR do?
Fix `base_model` issue in #28286 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28482/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28482",
"html_url": "https://github.com/huggingface/transformers/pull/28482",
"diff_url": "https://github.com/huggingface/transformers/pull/28482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28482.patch",
"merged_at": "2024-01-15T10:38:21"
} |
https://api.github.com/repos/huggingface/transformers/issues/28481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28481/comments | https://api.github.com/repos/huggingface/transformers/issues/28481/events | https://github.com/huggingface/transformers/pull/28481 | 2,078,977,415 | PR_kwDOCUB6oc5j7qDm | 28,481 | Fix/speecht5 bug | {
"login": "NimaYaqmuri",
"id": 62163525,
"node_id": "MDQ6VXNlcjYyMTYzNTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/62163525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NimaYaqmuri",
"html_url": "https://github.com/NimaYaqmuri",
"followers_url": "https://api.github.com/users/NimaYaqmuri/followers",
"following_url": "https://api.github.com/users/NimaYaqmuri/following{/other_user}",
"gists_url": "https://api.github.com/users/NimaYaqmuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NimaYaqmuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NimaYaqmuri/subscriptions",
"organizations_url": "https://api.github.com/users/NimaYaqmuri/orgs",
"repos_url": "https://api.github.com/users/NimaYaqmuri/repos",
"events_url": "https://api.github.com/users/NimaYaqmuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/NimaYaqmuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-12T14:33:19 | 2024-01-16T15:26:15 | 2024-01-16T14:14:29 | CONTRIBUTOR | null | # What does this PR do?
Fixes a Critical Issue in SpeechT5 Speech Decoder Prenet and Enhances Test Suite
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ylacombe
@sanchit-gandhi
@Spycsh
Key Changes:
------------
* **Critical Bug Fix in Speech Decoder Prenet**: I discovered that the `repeat` operation in the speech decoder prenet's forward method was mistakenly duplicating the `speaker_embeddings` tensor. This erroneous behavior, likely an oversight in previous contributions, resulted in incorrect tensor dimensions for concatenation, leading to raised errors and halting the training process
* **Refined Testing Approach**: Alongside this fix, I have updated the SpeechT5ForTextToSpeechIntegrationTests. These updates include:
* **Adaptability to Variability in Sequence Lengths**: Modifications to handle variability due to dropout in the speech decoder pre-net, ensuring test reliability against random variations.
* **Dynamic Dimension Checks**: Replacement of hardcoded dimensions with dynamic checks based on the model's configuration and seed settings, ensuring test validity across various scenarios.
* **New and Improved Test Cases**: Introduction of new test cases for validation of spectrogram and waveform shapes, addressing potential issues in speech generation and vocoder processing.
* **Correction of Misassumptions in Tests**: Adjustment of existing test cases where previous assumptions about output shapes led to inaccuracies. This includes considering varying batch sizes in tests, which were not adequately addressed before, possibly due to an oversight in considering the speaker embeddings' shape (initially 1x512) in batch scenarios. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28481/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28481/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28481",
"html_url": "https://github.com/huggingface/transformers/pull/28481",
"diff_url": "https://github.com/huggingface/transformers/pull/28481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28481.patch",
"merged_at": "2024-01-16T14:14:29"
} |
https://api.github.com/repos/huggingface/transformers/issues/28480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28480/comments | https://api.github.com/repos/huggingface/transformers/issues/28480/events | https://github.com/huggingface/transformers/pull/28480 | 2,078,963,133 | PR_kwDOCUB6oc5j7m67 | 28,480 | chore: Just fix some typo | {
"login": "hugo-syn",
"id": 61210734,
"node_id": "MDQ6VXNlcjYxMjEwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/61210734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugo-syn",
"html_url": "https://github.com/hugo-syn",
"followers_url": "https://api.github.com/users/hugo-syn/followers",
"following_url": "https://api.github.com/users/hugo-syn/following{/other_user}",
"gists_url": "https://api.github.com/users/hugo-syn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugo-syn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugo-syn/subscriptions",
"organizations_url": "https://api.github.com/users/hugo-syn/orgs",
"repos_url": "https://api.github.com/users/hugo-syn/repos",
"events_url": "https://api.github.com/users/hugo-syn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugo-syn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-12T14:27:16 | 2024-01-12T17:18:27 | 2024-01-12T17:18:26 | CONTRIBUTOR | null | # What does this PR do?
Just fix some typo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28480/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28480",
"html_url": "https://github.com/huggingface/transformers/pull/28480",
"diff_url": "https://github.com/huggingface/transformers/pull/28480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28480.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28479/comments | https://api.github.com/repos/huggingface/transformers/issues/28479/events | https://github.com/huggingface/transformers/pull/28479 | 2,078,880,947 | PR_kwDOCUB6oc5j7UzI | 28,479 | Improved type hinting for all attention parameters | {
"login": "nakranivaibhav",
"id": 67785830,
"node_id": "MDQ6VXNlcjY3Nzg1ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakranivaibhav",
"html_url": "https://github.com/nakranivaibhav",
"followers_url": "https://api.github.com/users/nakranivaibhav/followers",
"following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}",
"gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions",
"organizations_url": "https://api.github.com/users/nakranivaibhav/orgs",
"repos_url": "https://api.github.com/users/nakranivaibhav/repos",
"events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakranivaibhav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 17 | 2024-01-12T13:43:39 | 2024-01-25T11:59:45 | 2024-01-24T16:47:35 | CONTRIBUTOR | null | # What does this PR do?
The type hinting for all attention parameters has been changed to 'Optional[Tuple[torch.FloatTensor,...]] = None' to reflect a tuple of arbitrary size.
Fixes #28345
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @Rocketknight1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28479/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28479",
"html_url": "https://github.com/huggingface/transformers/pull/28479",
"diff_url": "https://github.com/huggingface/transformers/pull/28479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28479.patch",
"merged_at": "2024-01-24T16:47:35"
} |
https://api.github.com/repos/huggingface/transformers/issues/28478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28478/comments | https://api.github.com/repos/huggingface/transformers/issues/28478/events | https://github.com/huggingface/transformers/pull/28478 | 2,078,735,350 | PR_kwDOCUB6oc5j60kp | 28,478 | Generate: deprecate old public functions | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-12T12:24:31 | 2024-01-12T15:21:19 | 2024-01-12T15:21:15 | MEMBER | null | # What does this PR do?
Schedules for deprecation old public functions -- these functions are not used anywhere in the code base, and haven't been since I've been in charge of `generate`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28478/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28478",
"html_url": "https://github.com/huggingface/transformers/pull/28478",
"diff_url": "https://github.com/huggingface/transformers/pull/28478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28478.patch",
"merged_at": "2024-01-12T15:21:15"
} |
https://api.github.com/repos/huggingface/transformers/issues/28477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28477/comments | https://api.github.com/repos/huggingface/transformers/issues/28477/events | https://github.com/huggingface/transformers/pull/28477 | 2,078,710,190 | PR_kwDOCUB6oc5j6vBn | 28,477 | Generate: refuse to save bad generation config files | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-12T12:09:37 | 2024-01-12T16:01:55 | 2024-01-12T16:01:17 | MEMBER | null | # What does this PR do?
This PR converts a warning into an exception. This warning stated that it would be converted to an exception in v4.34 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28477/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28477/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28477",
"html_url": "https://github.com/huggingface/transformers/pull/28477",
"diff_url": "https://github.com/huggingface/transformers/pull/28477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28477.patch",
"merged_at": "2024-01-12T16:01:17"
} |
https://api.github.com/repos/huggingface/transformers/issues/28476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28476/comments | https://api.github.com/repos/huggingface/transformers/issues/28476/events | https://github.com/huggingface/transformers/issues/28476 | 2,078,647,827 | I_kwDOCUB6oc575aYT | 28,476 | How to avoid the peak RAM memory usage of a model when I want to load to GPU | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/followers",
"following_url": "https://api.github.com/users/JoanFM/following{/other_user}",
"gists_url": "https://api.github.com/users/JoanFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoanFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoanFM/subscriptions",
"organizations_url": "https://api.github.com/users/JoanFM/orgs",
"repos_url": "https://api.github.com/users/JoanFM/repos",
"events_url": "https://api.github.com/users/JoanFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoanFM/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-12T11:39:52 | 2024-01-12T15:55:07 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
I am using transformers to load a model into GPU, and I observed that before moving the model to GPU there is a peak of RAM usage that later gets unused. I assume the model is loaded into CPU before moving into GPU.
In GPU model takes around 4Gi and to load it I need more than 7Gi of RAM which seems weird.
Is there a way to load it direcly to the GPU without spending so much RAM?
I have tried with the `low_cpu_mem_usage` and `device_map` parameter to `cuda` and `auto` but no luck.
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
### Expected behavior
Not having such a memory peak | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28476/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28476/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28475/comments | https://api.github.com/repos/huggingface/transformers/issues/28475/events | https://github.com/huggingface/transformers/pull/28475 | 2,078,570,207 | PR_kwDOCUB6oc5j6P4D | 28,475 | Docs: add model paths | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-12T10:53:07 | 2024-01-12T15:25:48 | 2024-01-12T15:25:44 | MEMBER | null | # What does this PR do?
As reported by @sayakpaul: some models had placeholder paths. This PR corrects it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28475/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28475",
"html_url": "https://github.com/huggingface/transformers/pull/28475",
"diff_url": "https://github.com/huggingface/transformers/pull/28475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28475.patch",
"merged_at": "2024-01-12T15:25:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28474/comments | https://api.github.com/repos/huggingface/transformers/issues/28474/events | https://github.com/huggingface/transformers/pull/28474 | 2,078,491,706 | PR_kwDOCUB6oc5j5-0S | 28,474 | filter out callable attributes from tokenizer_config in save_pretrained | {
"login": "shuttie",
"id": 999061,
"node_id": "MDQ6VXNlcjk5OTA2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuttie",
"html_url": "https://github.com/shuttie",
"followers_url": "https://api.github.com/users/shuttie/followers",
"following_url": "https://api.github.com/users/shuttie/following{/other_user}",
"gists_url": "https://api.github.com/users/shuttie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuttie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuttie/subscriptions",
"organizations_url": "https://api.github.com/users/shuttie/orgs",
"repos_url": "https://api.github.com/users/shuttie/repos",
"events_url": "https://api.github.com/users/shuttie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuttie/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-12T10:07:29 | 2024-01-15T14:56:43 | null | NONE | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/28472
As discussed in the upstream bug report, `add_special_token` can be both kwargs parameter passed to the `from_pretrained` and a method in `SpecialTokensMixin.add_special_tokens`. Not sure that it's the best way for doing this, but this PR:
* ensures that no methods are passed into the tokenizer config
* so it can be safely serialized to json with `json.dumps`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28474/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28474",
"html_url": "https://github.com/huggingface/transformers/pull/28474",
"diff_url": "https://github.com/huggingface/transformers/pull/28474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28474.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28473/comments | https://api.github.com/repos/huggingface/transformers/issues/28473/events | https://github.com/huggingface/transformers/pull/28473 | 2,078,438,756 | PR_kwDOCUB6oc5j5zXV | 28,473 | feat: support indicating prefix token of chat template | {
"login": "congchan",
"id": 18083731,
"node_id": "MDQ6VXNlcjE4MDgzNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18083731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/congchan",
"html_url": "https://github.com/congchan",
"followers_url": "https://api.github.com/users/congchan/followers",
"following_url": "https://api.github.com/users/congchan/following{/other_user}",
"gists_url": "https://api.github.com/users/congchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/congchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/congchan/subscriptions",
"organizations_url": "https://api.github.com/users/congchan/orgs",
"repos_url": "https://api.github.com/users/congchan/repos",
"events_url": "https://api.github.com/users/congchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/congchan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-12T09:36:48 | 2024-01-15T15:01:32 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In chat language model training, sometimes we need to mask the input from real users, and train the model solely from assistant's outputs.
This PR add a special prefix token, which can be applied in `chat_template`, so that we can make use of this `prefix_token` to dynamically separate dialogs from `user` and `assistant`.
For example:
```
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```
The prefix_token could be `<|im_start|>assistant\n`, we can make use of this token:
- to set the model's `chat_template`, for example `{% if add_generation_prompt %}{{ prefix_token }}`
- To separate the dialogs from user's and model's turns, and mask the loss from user's turns, by access `tokenizer.prefix_token` and `tokenizer.eos_token`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28473/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28473",
"html_url": "https://github.com/huggingface/transformers/pull/28473",
"diff_url": "https://github.com/huggingface/transformers/pull/28473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28473.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28472/comments | https://api.github.com/repos/huggingface/transformers/issues/28472/events | https://github.com/huggingface/transformers/issues/28472 | 2,078,425,643 | I_kwDOCUB6oc574kIr | 28,472 | Tokenizer.save_pretrained fails when add_special_tokens=True|False | {
"login": "shuttie",
"id": 999061,
"node_id": "MDQ6VXNlcjk5OTA2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuttie",
"html_url": "https://github.com/shuttie",
"followers_url": "https://api.github.com/users/shuttie/followers",
"following_url": "https://api.github.com/users/shuttie/following{/other_user}",
"gists_url": "https://api.github.com/users/shuttie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuttie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuttie/subscriptions",
"organizations_url": "https://api.github.com/users/shuttie/orgs",
"repos_url": "https://api.github.com/users/shuttie/repos",
"events_url": "https://api.github.com/users/shuttie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuttie/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-12T09:28:35 | 2024-01-12T12:48:03 | null | NONE | null | ### System Info
transformers-4.34
python-3.11
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", add_special_tokens=True)
tok.save_pretrained("out")
```
The snippet:
* works well on `add_special_tokens=` being present, absent, True/False on 4.33 and below
* works well when `add_special_tokens=` is not added to the list of tokenizer parameters on 4.34+
* fails when `add_special_tokens=` is present in parameters (with both True/False values) on 4.34+ with the following error:
```
Traceback (most recent call last):
File "/home/shutty/private/code/savepbug/test.py", line 4, in <module>
tok.save_pretrained("tokenz")
File "/home/shutty/private/code/savepbug/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2435, in save_pretrained
out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + "\n"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 202, in encode
chunks = list(chunks)
^^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.11/json/encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type method is not JSON serializable
```
The issue happens on any tokenizer, not only on LLama one. I can confirm it failing the same way on `bert-base-uncased`
If you go to the `tokenization_utils_base` and dump the tokenizer_config just before the `json.dumps`, you may see that `add_special_tokens` surprizingly became a method, and not a bool:
```
{'clean_up_tokenization_spaces': False, 'unk_token': '<unk>', 'bos_token': '<s>', 'eos_token': '</s>', 'add_bos_token': True,
'add_eos_token': False, 'use_default_system_prompt': False, 'additional_special_tokens': [], 'legacy': True,
'model_max_length': 1000000000000000019884624838656, 'pad_token': None, 'sp_model_kwargs': {},
'spaces_between_special_tokens': False,
'add_special_tokens': <bound method SpecialTokensMixin.add_special_tokens of LlamaTokenizerFast(name_or_path='mistralai/Mistral-7B-v0.1', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True,
padding_side='left', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}>, 'added_tokens_decoder': {0: {'content': '<unk>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}, 1: {'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}, 2: {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}}, 'tokenizer_class': 'LlamaTokenizer'}
```
My feeling that the issue is related to the https://github.com/huggingface/transformers/pull/23909 PR which refactored a lot of tokenizer internals, so in the current version:
* `add_special_tokens` is a part of kwargs passed to the tokenizer
* there is also a method `SpecialTokensMixin.add_special_tokens` having the same name
* when everything is being joined together before `json.dumps`, the method is being serialized instead of the kwargs parameter.
### Expected behavior
Not crashing with `TypeError: Object of type method is not JSON serializable` as in was pre https://github.com/huggingface/transformers/pull/23909 in 4.33. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28472/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28472/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28471/comments | https://api.github.com/repos/huggingface/transformers/issues/28471/events | https://github.com/huggingface/transformers/pull/28471 | 2,078,363,551 | PR_kwDOCUB6oc5j5jQl | 28,471 | Fix torch.ones usage in xlnet | {
"login": "sungho-ham",
"id": 19978686,
"node_id": "MDQ6VXNlcjE5OTc4Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/19978686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sungho-ham",
"html_url": "https://github.com/sungho-ham",
"followers_url": "https://api.github.com/users/sungho-ham/followers",
"following_url": "https://api.github.com/users/sungho-ham/following{/other_user}",
"gists_url": "https://api.github.com/users/sungho-ham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sungho-ham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sungho-ham/subscriptions",
"organizations_url": "https://api.github.com/users/sungho-ham/orgs",
"repos_url": "https://api.github.com/users/sungho-ham/repos",
"events_url": "https://api.github.com/users/sungho-ham/events{/privacy}",
"received_events_url": "https://api.github.com/users/sungho-ham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-12T08:52:06 | 2024-01-12T14:31:01 | 2024-01-12T14:31:01 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When creating a causal attention mask in xlnet, the device parameter of torch.ones can be interpreted as one of the dimensions. Because of this, the code throws an error in torch 1.13.1. I have modified to specify the name of the device parameter.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28471/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28471",
"html_url": "https://github.com/huggingface/transformers/pull/28471",
"diff_url": "https://github.com/huggingface/transformers/pull/28471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28471.patch",
"merged_at": "2024-01-12T14:31:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/28470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28470/comments | https://api.github.com/repos/huggingface/transformers/issues/28470/events | https://github.com/huggingface/transformers/issues/28470 | 2,078,272,669 | I_kwDOCUB6oc573-yd | 28,470 | Running a `forward` pass before `generate` with AWQ fused modules breaks it | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url"... | null | 1 | 2024-01-12T07:56:41 | 2024-01-12T12:51:55 | null | MEMBER | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MEGATRON_LM
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- megatron_lm_config: {'megatron_lm_gradient_clipping': 1.0, 'megatron_lm_pp_degree': 1, 'megatron_lm_recompute_activations': True, 'megatron_lm_sequence_parallelism': False, 'megatron_lm_tp_degree': 2, 'megatron_lm_use_distributed_optimizer': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AwqConfig, AutoTokenizer
awq_config = AwqConfig(do_fuse=True, fuse_max_seq_len=512)
model = AutoModelForCausalLM.from_pretrained(
"casperhansen/tinyllama-1b-awq",
quantization_config=awq_config,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("casperhansen/tinyllama-1b-awq")
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids.to("cuda")
model.forward(input_ids)
model.generate(input_ids, max_new_tokens=100)
```
### Expected behavior
code works if only generate is called but not if a forward pass precedes it.
looking at the traceback:
```
Traceback (most recent call last):
File "/workspace/llm-perf/test_.py", line 29, in <module>
model.generate(input_ids, max_new_tokens=100)
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 1718, in generate
return self.greedy_search(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2579, in greedy_search
outputs = self(
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward
outputs = self.model(
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1033, in forward
attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 372, in _prepare_4d_causal_attention_mask_for_sdpa
expanded_4d_mask = attn_mask_converter.to_4d(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 136, in to_4d
expanded_attn_mask = causal_4d_mask.masked_fill(expanded_attn_mask.bool(), torch.finfo(dtype).min)
RuntimeError: The size of tensor a (9) must match the size of tensor b (25) at non-singleton dimension 3
```
the problems seems to be related to the sdpa integration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28470/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28470/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28469/comments | https://api.github.com/repos/huggingface/transformers/issues/28469/events | https://github.com/huggingface/transformers/issues/28469 | 2,078,206,863 | I_kwDOCUB6oc573uuP | 28,469 | `dataloader_persistent_workers=True` causes fork-bomb due to repeated creation of `eval_dataloader` | {
"login": "naba89",
"id": 12119806,
"node_id": "MDQ6VXNlcjEyMTE5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/12119806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naba89",
"html_url": "https://github.com/naba89",
"followers_url": "https://api.github.com/users/naba89/followers",
"following_url": "https://api.github.com/users/naba89/following{/other_user}",
"gists_url": "https://api.github.com/users/naba89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naba89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naba89/subscriptions",
"organizations_url": "https://api.github.com/users/naba89/orgs",
"repos_url": "https://api.github.com/users/naba89/repos",
"events_url": "https://api.github.com/users/naba89/events{/privacy}",
"received_events_url": "https://api.github.com/users/naba89/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-12T07:08:37 | 2024-01-12T07:08:37 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: does not matter
- Using distributed or parallel set-up in script?: does not matter
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
from dataclasses import dataclass
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset
from transformers import TrainingArguments, Trainer
from transformers.modeling_outputs import BaseModelOutput
# Dummy Dataset
class DummyDataset(Dataset):
def __init__(self, size=100):
self.size = size
self.data = torch.rand(size, 10) # Random data
self.labels = torch.randint(0, 2, (size,)) # Binary labels
def __len__(self):
return self.size
def __getitem__(self, idx):
return {'input_ids': self.data[idx], 'labels': self.labels[idx]}
@dataclass
class DummyModelOutput(BaseModelOutput):
loss: torch.Tensor = None
logits: torch.Tensor = None
# Dummy Model
class DummyModel(torch.nn.Module):
def __init__(self):
super(DummyModel, self).__init__()
self.linear = torch.nn.Linear(10, 2)
def forward(self, input_ids, labels=None) -> DummyModelOutput:
outputs = self.linear(input_ids)
loss = F.cross_entropy(outputs, labels)
return DummyModelOutput(loss=loss, logits=outputs)
if __name__ == '__main__':
# using wandb, because it logs system metrics periodically
os.environ["WANDB_PROJECT"] = "dummy_project"
# Create dataset and model instances
dataset = DummyDataset(size=1000)
model = DummyModel()
persistent_workers = False # set to True to enable persistent workers
# Training arguments
training_args = TrainingArguments(
output_dir="./test_trainer",
run_name=f'dataloader_peristent_workers={persistent_workers}',
num_train_epochs=20,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
dataloader_num_workers=8,
dataloader_persistent_workers=persistent_workers,
logging_strategy="no",
evaluation_strategy="epoch",
)
# Initialize the custom trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
eval_dataset=dataset,
)
# Train the model
trainer.train()
```
### Expected behavior
Since the [get_eval_loader](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3065C16-L3065C16) is called on every evaluate call, with `dataloader_persistent_workers=True` the previous worker processes are not killed and leads to a fork-bomb and exhausts system resources and causes instability/crash.
As you can see in the below plots generated with the reproduction script (in the wandb system metrics section),
- persistent data loader workers cause speedup (mainly because the training loader does not recreate all processes at every epoch), but evaluation loaders cause the fork-bomb.
- without persistent data loader workers, speed is slow, but the number of processes is constant.

Having the persistent dataloader option is good. Still, it is necessary to fix the eval loader logic, create it once, and reuse it since the eval datasets won't change in the middle of training.
This option was added in #27058 and #27189
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28469/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28468/comments | https://api.github.com/repos/huggingface/transformers/issues/28468/events | https://github.com/huggingface/transformers/issues/28468 | 2,078,022,384 | I_kwDOCUB6oc573Brw | 28,468 | Train LLaMA 2 with PEFT(LoRA) + Deepspeed Zero3 on v100 * 8, raise assert param.ds_status == ZeroParamStatus.AVAILABLE | {
"login": "ZetangForward",
"id": 123983104,
"node_id": "U_kgDOB2PVAA",
"avatar_url": "https://avatars.githubusercontent.com/u/123983104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZetangForward",
"html_url": "https://github.com/ZetangForward",
"followers_url": "https://api.github.com/users/ZetangForward/followers",
"following_url": "https://api.github.com/users/ZetangForward/following{/other_user}",
"gists_url": "https://api.github.com/users/ZetangForward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZetangForward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZetangForward/subscriptions",
"organizations_url": "https://api.github.com/users/ZetangForward/orgs",
"repos_url": "https://api.github.com/users/ZetangForward/repos",
"events_url": "https://api.github.com/users/ZetangForward/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZetangForward/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 8 | 2024-01-12T04:00:59 | 2024-01-17T10:53:30 | null | NONE | null | ### System Info
Huggingface Version == 4.31.0
## Environment
Deepspeed Zero3 Config:
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 0,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 0,
"stage3_max_reuse_distance": 0,
"stage3_gather_16bit_weights_on_model_save": true
},
"fp16": {
"enabled": true,
"auto_cast": false,
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 2000,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": [
0.9,
0.999
],
"eps": 1e-8,
"weight_decay": 0
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"gradient_accumulation_steps": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Launch Config:
```
deepspeed --num_gpus 1 \
--num_nodes 1 \
train_vqllama_lora.py \
--model_name_or_path "/model/llama2" \
--data_path "/data/data.pkl" \
--output_dir ${OUTPUT_DIR} \
--num_train_epochs 100 \
--model_max_length 1024 \
--per_device_train_batch_size 68 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "steps" \
--eval_steps 5 \
--greater_is_better False \
--save_strategy "steps" \
--load_best_model_at_end True \
--save_steps 5 \
--save_total_limit 10 \
--learning_rate 3e-5 \
--warmup_steps 20 \
--logging_steps 5 \
--dataloader_num_workers 0 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--deepspeed configs/deepspeed/stage3_test.json \
--fp16 True \
--remove_unused_columns False;
```
LoRA Config:
```
myllama = CustomLLama.from_pretrained(
model_args.model_name_or_path,
config=llamaconfig,
cache_dir=training_args.cache_dir
)
config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="CAUSAL_LM",
modules_to_save=["custom_embedding", "custom_head", "wte", "lm_head"]
)
myllama = get_peft_model(myllama , config)
```
Then, I train `myllama` with Huggingface Trainer
## Errors
I come with this error
```
Parameter Offload: Total persistent parameters: 8663041 in 198 params
0%| | 0/280000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 230, in <module>
train()
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 224, in train
trainer.train()
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 105, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1833, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/peft_model.py", line 1073, in forward
return self.base_model(
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 103, in forward
return self.model.forward(*args, **kwargs)
File "/workspace/zecheng/modelzipper/projects/custom_llama/models/vqllama.py", line 84, in forward
svg_token_embeddings = self.vqvae_embedding(svg_token_ids) # Encode svg tokens
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/utils/other.py", line 219, in forward
return self.modules_to_save[self.active_adapter](*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1557, in _call_impl
args_result = hook(self, args)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 392, in _pre_forward_module_hook
self.pre_sub_module_forward_function(module)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 505, in pre_sub_module_forward_function
param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 310, in fetch_sub_module
assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionError: {'id': 423, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {1038}, 'ds_tensor.shape': torch.Size([0])}
```
Any help for this problem ? THX
### Who can help?
@muellerz @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. initiate the llama2 model from HF and add some extra modules for training, such as LoRA, one extra embedding / LM Head
2. use peft to wrap the model above
3. apply Deepspeed Zero3 (with my config) and HF Trainer to start training.
### Expected behavior
Seek for help and may solve some potential bugs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28468/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28467/comments | https://api.github.com/repos/huggingface/transformers/issues/28467/events | https://github.com/huggingface/transformers/issues/28467 | 2,078,016,506 | I_kwDOCUB6oc573AP6 | 28,467 | ImportError: cannot import name 'is_g2p_en_available' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py) | {
"login": "kli017",
"id": 14877573,
"node_id": "MDQ6VXNlcjE0ODc3NTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/14877573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kli017",
"html_url": "https://github.com/kli017",
"followers_url": "https://api.github.com/users/kli017/followers",
"following_url": "https://api.github.com/users/kli017/following{/other_user}",
"gists_url": "https://api.github.com/users/kli017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kli017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kli017/subscriptions",
"organizations_url": "https://api.github.com/users/kli017/orgs",
"repos_url": "https://api.github.com/users/kli017/repos",
"events_url": "https://api.github.com/users/kli017/events{/privacy}",
"received_events_url": "https://api.github.com/users/kli017/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-12T03:52:36 | 2024-01-27T21:04:24 | null | NONE | null | ### System Info
env: colab
python=3.10
trasformers=4.37.0.dev0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I was run the peft_bnb_whisper_large_v2_training.ipynb from peft project. Everything is good till I met the error when I run the code `import evaluate`. I also try transformers=4.27.4, 4.33,1 and 4.36.2 and get the same error.
### Expected behavior
Anyone can help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28467/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28466/comments | https://api.github.com/repos/huggingface/transformers/issues/28466/events | https://github.com/huggingface/transformers/issues/28466 | 2,077,979,494 | I_kwDOCUB6oc5723Nm | 28,466 | LlamaForCausalLM does not support Flash Attention 2.0 yet | {
"login": "Patrick-Ni",
"id": 59468866,
"node_id": "MDQ6VXNlcjU5NDY4ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/59468866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Patrick-Ni",
"html_url": "https://github.com/Patrick-Ni",
"followers_url": "https://api.github.com/users/Patrick-Ni/followers",
"following_url": "https://api.github.com/users/Patrick-Ni/following{/other_user}",
"gists_url": "https://api.github.com/users/Patrick-Ni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Patrick-Ni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Patrick-Ni/subscriptions",
"organizations_url": "https://api.github.com/users/Patrick-Ni/orgs",
"repos_url": "https://api.github.com/users/Patrick-Ni/repos",
"events_url": "https://api.github.com/users/Patrick-Ni/events{/privacy}",
"received_events_url": "https://api.github.com/users/Patrick-Ni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-12T03:01:44 | 2024-01-12T15:28:40 | 2024-01-12T15:28:40 | NONE | null | The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
Traceback (most recent call last):
File "/root/paddlejob/workspace/env_run/benchmark/generation/main.py", line 116, in <module>
main()
File "/root/paddlejob/workspace/env_run/benchmark/generation/main.py", line 91, in main
pipeline = load_model_and_tokenizer(model_home, args.model, args.use_pipeline)
File "/root/paddlejob/workspace/env_run/benchmark/generation/load_models_and_datasets.py", line 26, in load_model_and_tokenizer
model = AutoModelForCausalLM.from_pretrained(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3456, in from_pretrained
config = cls._autoset_attn_implementation(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1302, in _autoset_attn_implementation
cls._check_and_enable_flash_attn_2(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1382, in _check_and_enable_flash_attn_2
raise ValueError(
ValueError: LlamaForCausalLM does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28466/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28465/comments | https://api.github.com/repos/huggingface/transformers/issues/28465/events | https://github.com/huggingface/transformers/pull/28465 | 2,077,920,592 | PR_kwDOCUB6oc5j4EEy | 28,465 | Update README.md | {
"login": "kit1980",
"id": 420184,
"node_id": "MDQ6VXNlcjQyMDE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/420184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kit1980",
"html_url": "https://github.com/kit1980",
"followers_url": "https://api.github.com/users/kit1980/followers",
"following_url": "https://api.github.com/users/kit1980/following{/other_user}",
"gists_url": "https://api.github.com/users/kit1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kit1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kit1980/subscriptions",
"organizations_url": "https://api.github.com/users/kit1980/orgs",
"repos_url": "https://api.github.com/users/kit1980/repos",
"events_url": "https://api.github.com/users/kit1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/kit1980/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-12T01:51:35 | 2024-01-12T01:55:49 | 2024-01-12T01:54:58 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28465/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28465",
"html_url": "https://github.com/huggingface/transformers/pull/28465",
"diff_url": "https://github.com/huggingface/transformers/pull/28465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28465.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28463/comments | https://api.github.com/repos/huggingface/transformers/issues/28463/events | https://github.com/huggingface/transformers/issues/28463 | 2,077,796,716 | I_kwDOCUB6oc572Kls | 28,463 | Mixtral inference on multi gpu is broken with 4.37.0dev (995a7ce) | {
"login": "nepeee",
"id": 13850451,
"node_id": "MDQ6VXNlcjEzODUwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/13850451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nepeee",
"html_url": "https://github.com/nepeee",
"followers_url": "https://api.github.com/users/nepeee/followers",
"following_url": "https://api.github.com/users/nepeee/following{/other_user}",
"gists_url": "https://api.github.com/users/nepeee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nepeee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nepeee/subscriptions",
"organizations_url": "https://api.github.com/users/nepeee/orgs",
"repos_url": "https://api.github.com/users/nepeee/repos",
"events_url": "https://api.github.com/users/nepeee/events{/privacy}",
"received_events_url": "https://api.github.com/users/nepeee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-12T00:00:37 | 2024-01-23T14:30:04 | 2024-01-21T19:37:25 | NONE | null | ### System Info
Ubuntu 22.04 RTX 3090 + RTX 3080TI transformers 4.37.0dev (995a7ce)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GPTQConfig
prompt = 'SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation. \nUSER: how long will take to travel from Malmö to Stockholm by foot? \nASSISTANT: '
device = torch.device("cuda:0")
dmap = {
'model.embed_tokens':0,
'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4':0,
'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9':0,
'model.layers.10': 0, 'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 0, 'model.layers.14':0,
'model.layers.15': 0, 'model.layers.16': 0, 'model.layers.17': 0, 'model.layers.18': 0, 'model.layers.19':0,
'model.layers.20': 0, 'model.layers.21': 0, 'model.layers.22': 0, 'model.layers.23': 0, 'model.layers.24':0,
'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 1, 'model.layers.28': 1, 'model.layers.29':1,
'model.layers.30': 1, 'model.layers.31': 1,
'model.norm': 0,
'lm_head': 1,
}
quantization_config_loading = GPTQConfig(bits=3, use_exllama=False)
model_q = AutoModelForCausalLM.from_pretrained("TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ", device_map=dmap, quantization_config=quantization_config_loading, revision='gptq-3bit--1g-actorder_True')
tokenizer = AutoTokenizer.from_pretrained(model_id)
inp = tokenizer(prompt, return_tensors="pt").to(device)
res = model_q.generate(**inp, num_beams=1, min_new_tokens=60, max_new_tokens=60, do_sample=False)
predicted_text = tokenizer.decode(res[0])
print(predicted_text)
```
### Expected behavior
Works on my single 3090 with device_map="auto" but it produces errors with multi gpu in model parallel. It worked before with the device_map in the example.
Seen many errors like segfaults, device-side assert triggered and even full hang of the machine.
Most common one is:
idx, top_x = torch.where(expert_mask[expert_idx])
RuntimeError: CUDA error: device-side assert triggered
At layer 26 on the first token prediction
Both GPU-s are working with other models like mistral, i made this example because my lora training code had the same issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28463/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28462/comments | https://api.github.com/repos/huggingface/transformers/issues/28462/events | https://github.com/huggingface/transformers/issues/28462 | 2,077,558,874 | I_kwDOCUB6oc571Qha | 28,462 | Move layer_idx from a layer property to function argument. | {
"login": "siddartha-RE",
"id": 55106295,
"node_id": "MDQ6VXNlcjU1MTA2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/55106295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddartha-RE",
"html_url": "https://github.com/siddartha-RE",
"followers_url": "https://api.github.com/users/siddartha-RE/followers",
"following_url": "https://api.github.com/users/siddartha-RE/following{/other_user}",
"gists_url": "https://api.github.com/users/siddartha-RE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddartha-RE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddartha-RE/subscriptions",
"organizations_url": "https://api.github.com/users/siddartha-RE/orgs",
"repos_url": "https://api.github.com/users/siddartha-RE/repos",
"events_url": "https://api.github.com/users/siddartha-RE/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddartha-RE/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 1 | 2024-01-11T20:48:23 | 2024-01-12T15:25:07 | null | CONTRIBUTOR | null | ### Feature request
Currently the layer_idx is recorded in the attention module of each `LlamaDecoderLayer`. This has the unfortunate side effect that the layers cannot easily be moved around or reused within the layer list. It seems simple enough to pass in the layer index as part of loop over layers in the forward pass. That way the layers once again will be decouple from their position information.
Backward compatibility could be preserved by still accepting the argument in the constructor but defaulting it to None and then just ignoring it in favor of the passed forward argument.
### Motivation
The motivation is to allow for simple layer stacking (like we have been seeing with pass through merged models) at inference time without actually expanding the memory usage of the model.
### Your contribution
I am happy to send a PR. Seems simple enough. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28462/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28461/comments | https://api.github.com/repos/huggingface/transformers/issues/28461/events | https://github.com/huggingface/transformers/issues/28461 | 2,077,408,711 | I_kwDOCUB6oc570r3H | 28,461 | Pytorch can have its default dtype permanently set to the "wrong" value if there is an exception when loading a model | {
"login": "Taytay",
"id": 1330693,
"node_id": "MDQ6VXNlcjEzMzA2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taytay",
"html_url": "https://github.com/Taytay",
"followers_url": "https://api.github.com/users/Taytay/followers",
"following_url": "https://api.github.com/users/Taytay/following{/other_user}",
"gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taytay/subscriptions",
"organizations_url": "https://api.github.com/users/Taytay/orgs",
"repos_url": "https://api.github.com/users/Taytay/repos",
"events_url": "https://api.github.com/users/Taytay/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taytay/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-11T19:09:51 | 2024-01-15T16:25:19 | null | NONE | null | ### System Info
I just ran into the most head-scratching issue. My data collator was crashing because a tensor it made was in half precision (fp16). I couldn't figure out why, but then I realized my `torch.get_default_dtype()` was `torch.float16`!
Then I realized it's because my model code threw an exception in a previous run of a notebook cell.
And if you look at this code PreTrainedModel:_from_config : [def _from_config(cls, config, **kwargs):](https://github.com/huggingface/transformers/blob/995a7ce9a80b80062ccfe0b2d78857fb17351e27/src/transformers/modeling_utils.py#L1256-L1294)
You can see that it tries to set the dtype back to the original value, but doesn't do so in a `finally` block:
```python
# override default dtype if needed
dtype_orig = None
if torch_dtype is not None:
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
# do some stuff here....maybe throw an exception...
# restore default dtype if it was modified (assuming we get to this line)
if dtype_orig is not None:
torch.set_default_dtype(dtype_orig)
return model
```
This would of course leave my torch default dtype in whatever it was in when I was trying to load the model.
We could sprinkle some `finally` blocks around, or we could write a class like this:
```python
class temporily_set_default_torch_dtype:
def __init__(self, dtype):
self.new_dtype = dtype
if dtype is not None:
self.original_dtype = torch.get_default_dtype()
else:
# try to make this a no-op
self.original_dtype = None
def __enter__(self):
if self.new_dtype is not None:
torch.set_default_dtype(self.new_dtype)
def __exit__(self, exc_type, exc_val, exc_tb):
if self.original_dtype is not None:
torch.set_default_dtype(self.original_dtype)
```
And use it like so:
```python
torch.set_default_dtype(torch.float32)
print(f"default dtype is this before: {torch.get_default_dtype()}")
try:
with temporily_set_default_torch_dtype(torch.float16):
print(f"default dtype is now this inside: {torch.get_default_dtype()}")
raise ValueError("Throwing an exception to make sure it works")
except ValueError as e:
print("We caught the exception")
pass
print(f"default dtype is this after: {torch.get_default_dtype()}")
# prints:
# default dtype is this before: torch.float32
# default dtype is now this inside: torch.float16
# default dtype is this after: torch.float32
```
### Who can help?
Think @ArthurZucker and @younesbelkada are correct here?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1: Run a notebook cell that loads a model from_pretrained, in dtype=float16, and throws an exception while doing so.
2: Note that your torch.get_default_dtype() is still set to float16.
(This causes a real problem when things like the `DataCollatorForLanguageModeling` calls `torch_mask_tokens`, and then:
```python
# this will accidentally create a float16 tensor:
probability_matrix = torch.full(labels.shape, self.mlm_probability)
#...
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
```
An exception gets thrown when you try to call `bernoulli` on a cpu tensor at half precision:
`RuntimeError: "bernoulli_tensor_cpu_self_" not implemented for 'Half'`
### Expected behavior
My default torch dtype should not get "corrupted" even if the model loading code throws an exception | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28461/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28461/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28460/comments | https://api.github.com/repos/huggingface/transformers/issues/28460/events | https://github.com/huggingface/transformers/pull/28460 | 2,077,385,630 | PR_kwDOCUB6oc5j2OXG | 28,460 | Fix docstrings and update docstring checker error message | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-11T18:53:50 | 2024-01-12T17:54:12 | 2024-01-12T17:54:11 | MEMBER | null | While I was making fixes to the docstring checker, I found another issue - this one seems to be intermittent, and I'm not sure why it only fails tests sometimes. Still, it's definitely wrong, so this fix should hopefully avoid issues in future! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28460/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28460",
"html_url": "https://github.com/huggingface/transformers/pull/28460",
"diff_url": "https://github.com/huggingface/transformers/pull/28460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28460.patch",
"merged_at": "2024-01-12T17:54:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/28459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28459/comments | https://api.github.com/repos/huggingface/transformers/issues/28459/events | https://github.com/huggingface/transformers/issues/28459 | 2,077,372,971 | I_kwDOCUB6oc570jIr | 28,459 | `get_imports` failing to respect conditionals on imports | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 10 | 2024-01-11T18:45:02 | 2024-01-31T03:57:17 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-13.5.2-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
From `git blame`: @Wauplin @sgugger
From issue template (it's a LLM): @ArthurZucker @you
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the below snippet on a MacBook without an Nvidia GPU and `transformers==4.36.2` will throw an `ImportError` to `pip install flash_attn`. However, `flash_attn` isn't actually a requirement for this model, so something's off here.
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
```
Leads to:
```
File "/Users/user/code/project/venv/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 315, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/code/project/venv/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 180, in check_imports
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
python-BaseException
```
Investigating this, it seems https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/dynamic_module_utils.py#L154 is picking up `flash_attn` from https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/models/phi/modeling_phi.py#L50-L52. However, if you look at the file, it's within an `if` statement.
Therein is the bug, that `transformers.dynamic_module_utils.get_imports` is not respecting conditionals before imports.
Please see https://huggingface.co/microsoft/phi-1_5/discussions/72 for more info.
### Expected behavior
My goal is some way to avoid monkey patching `get_imports` to remove the extra inferred `flash_attn` dependency.
The most generalized solution is probably moving `get_imports` from regex searching the source to either use `inspect` (see [here](https://stackoverflow.com/a/47093697)) or some other AST walking method. I am pretty sure there is a simple fix here, it just involves moving away from a regex. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28459/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28458/comments | https://api.github.com/repos/huggingface/transformers/issues/28458/events | https://github.com/huggingface/transformers/pull/28458 | 2,077,343,501 | PR_kwDOCUB6oc5j2FPX | 28,458 | Mark two logger tests as flaky | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-11T18:25:45 | 2024-01-12T11:59:03 | 2024-01-12T11:58:59 | COLLABORATOR | null | # What does this PR do?
Two tests which capture and check the logger's output occasionally fail.
```
FAILED tests/test_modeling_utils.py::ModelUtilsTest::test_model_from_pretrained_with_different_pretrained_model_name - AssertionError: False is not true
FAILED tests/test_modeling_utils.py::ModelUtilsTest::test_unexpected_keys_warnings - AssertionError: "were not used when initializing ModelWithHead: ['added_key']" not found in ''
```
It looks like the logger isn't capturing the output. I have never been able to replicate the errors outside of circle CI, locally or on a VM.
The reason for the failure is unclear: there are other tests in the same module which utilise `CaptureLogger`. However, it's always these two tests which fail.
Example runs, where failing tests were unrelated to the PR:
* https://app.circleci.com/pipelines/github/huggingface/transformers/81004/workflows/4919e5c9-0ea2-457b-ad4f-65371f79e277/jobs/1038999
* https://app.circleci.com/pipelines/github/huggingface/transformers/82051/workflows/8674dab8-35ac-4336-8db2-24d90426554f/jobs/1054942
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28458/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28458",
"html_url": "https://github.com/huggingface/transformers/pull/28458",
"diff_url": "https://github.com/huggingface/transformers/pull/28458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28458.patch",
"merged_at": "2024-01-12T11:58:59"
} |
https://api.github.com/repos/huggingface/transformers/issues/28457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28457/comments | https://api.github.com/repos/huggingface/transformers/issues/28457/events | https://github.com/huggingface/transformers/pull/28457 | 2,077,277,222 | PR_kwDOCUB6oc5j12zh | 28,457 | Bump jinja2 from 2.11.3 to 3.1.3 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410... | closed | false | null | [] | null | 2 | 2024-01-11T17:45:03 | 2024-01-12T14:28:57 | 2024-01-12T14:28:56 | CONTRIBUTOR | null | Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.3 to 3.1.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p>
<blockquote>
<h2>3.1.3</h2>
<p>This is a fix release for the 3.1.x feature branch.</p>
<ul>
<li>Fix for <a href="https://github.com/pallets/jinja/security/advisories/GHSA-h5c8-rqwp-cp95">GHSA-h5c8-rqwp-cp95</a>. You are affected if you are using <code>xmlattr</code> and passing user input as attribute keys.</li>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-3">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/15?closed=1">https://github.com/pallets/jinja/milestone/15?closed=1</a></li>
</ul>
<h2>3.1.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/jinja/releases/tag/3.1.0">3.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-2">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/13?closed=1">https://github.com/pallets/jinja/milestone/13?closed=1</a></li>
</ul>
<h2>3.1.1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-1">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/12?closed=1">https://github.com/pallets/jinja/milestone/12?closed=1</a></li>
</ul>
<h2>3.1.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 3.1.x branch is now the supported bugfix branch, the 3.0.x branch has become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades. We also encourage upgrading to MarkupSafe 2.1.1, the latest version at this time.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-0">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/8?closed=1">https://github.com/pallets/jinja/milestone/8?closed=1</a></li>
<li>MarkupSafe changes: <a href="https://markupsafe.palletsprojects.com/en/2.1.x/changes/#version-2-1-1">https://markupsafe.palletsprojects.com/en/2.1.x/changes/#version-2-1-1</a></li>
</ul>
<h2>3.0.3</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-3">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-3</a></li>
</ul>
<h2>3.0.2</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-2">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-2</a></li>
</ul>
<h2>3.0.1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-1">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-1</a></li>
</ul>
<h2>3.0.0</h2>
<p>New major versions of all the core Pallets libraries, including Jinja 3.0, have been released! :tada:</p>
<ul>
<li>Read the announcement on our blog: <a href="https://palletsprojects.com/blog/flask-2-0-released/">https://palletsprojects.com/blog/flask-2-0-released/</a></li>
<li>Read the full list of changes: <a href="https://jinja.palletsprojects.com/changes/#version-3-0-0">https://jinja.palletsprojects.com/changes/#version-3-0-0</a></li>
<li>Retweet the announcement on Twitter: <a href="https://twitter.com/PalletsTeam/status/1392266507296514048">https://twitter.com/PalletsTeam/status/1392266507296514048</a></li>
<li>Follow our blog, Twitter, or GitHub to see future announcements.</li>
</ul>
<p>This represents a significant amount of work, and there are quite a few changes. Be sure to carefully read the changelog, and use tools such as pip-compile and Dependabot to pin your dependencies and control your updates.</p>
<h2>3.0.0rc2</h2>
<p>Fixes an issue with the deprecated <code>Markup</code> subclass, <a href="https://redirect.github.com/pallets/jinja/issues/1401">#1401</a>.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0">https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0</a></li>
</ul>
<h2>3.0.0rc1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0">https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/main/CHANGES.rst">jinja2's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.1.3</h2>
<p>Released 2024-01-10</p>
<ul>
<li>Fix compiler error when checking if required blocks in parent templates are
empty. :pr:<code>1858</code></li>
<li><code>xmlattr</code> filter does not allow keys with spaces. GHSA-h5c8-rqwp-cp95</li>
<li>Make error messages stemming from invalid nesting of <code>{% trans %}</code> blocks
more helpful. :pr:<code>1918</code></li>
</ul>
<h2>Version 3.1.2</h2>
<p>Released 2022-04-28</p>
<ul>
<li>Add parameters to <code>Environment.overlay</code> to match <code>__init__</code>.
:issue:<code>1645</code></li>
<li>Handle race condition in <code>FileSystemBytecodeCache</code>. :issue:<code>1654</code></li>
</ul>
<h2>Version 3.1.1</h2>
<p>Released 2022-03-25</p>
<ul>
<li>The template filename on Windows uses the primary path separator.
:issue:<code>1637</code></li>
</ul>
<h2>Version 3.1.0</h2>
<p>Released 2022-03-24</p>
<ul>
<li>
<p>Drop support for Python 3.6. :pr:<code>1534</code></p>
</li>
<li>
<p>Remove previously deprecated code. :pr:<code>1544</code></p>
<ul>
<li><code>WithExtension</code> and <code>AutoEscapeExtension</code> are built-in now.</li>
<li><code>contextfilter</code> and <code>contextfunction</code> are replaced by
<code>pass_context</code>. <code>evalcontextfilter</code> and
<code>evalcontextfunction</code> are replaced by <code>pass_eval_context</code>.
<code>environmentfilter</code> and <code>environmentfunction</code> are replaced
by <code>pass_environment</code>.</li>
<li><code>Markup</code> and <code>escape</code> should be imported from MarkupSafe.</li>
<li>Compiled templates from very old Jinja versions may need to be
recompiled.</li>
<li>Legacy resolve mode for <code>Context</code> subclasses is no longer
supported. Override <code>resolve_or_missing</code> instead of</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/jinja/commit/d9de4bb215fd1cc8092a410fb834c7c4060b1fc1"><code>d9de4bb</code></a> release version 3.1.3</li>
<li><a href="https://github.com/pallets/jinja/commit/50124e16561f17f6c1ec85a692f6551418971cdc"><code>50124e1</code></a> skip test pypi</li>
<li><a href="https://github.com/pallets/jinja/commit/9ea7222ef3f184480be0d0884e30ccfb4172b17b"><code>9ea7222</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/da703f7aae36b1e88baaa20de334d7ff6378fdde"><code>da703f7</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/bce174692547464512383ec40e0f8338b8811983"><code>bce1746</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/7277d8068be593deab3555c7c14f974ada373af1"><code>7277d80</code></a> update pre-commit hooks</li>
<li><a href="https://github.com/pallets/jinja/commit/5c8a10522421270f66376a24ec8e0d6812bc4b14"><code>5c8a105</code></a> Make nested-trans-block exceptions nicer (<a href="https://redirect.github.com/pallets/jinja/issues/1918">#1918</a>)</li>
<li><a href="https://github.com/pallets/jinja/commit/19a55db3b411343309f2faaffaedbb089e841895"><code>19a55db</code></a> Make nested-trans-block exceptions nicer</li>
<li><a href="https://github.com/pallets/jinja/commit/716795349a41d4983a9a4771f7d883c96ea17be7"><code>7167953</code></a> Merge pull request from GHSA-h5c8-rqwp-cp95</li>
<li><a href="https://github.com/pallets/jinja/commit/7dd3680e6eea0d77fde024763657aa4d884ddb23"><code>7dd3680</code></a> xmlattr filter disallows keys with spaces</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/jinja/compare/2.11.3...3.1.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28457/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28457",
"html_url": "https://github.com/huggingface/transformers/pull/28457",
"diff_url": "https://github.com/huggingface/transformers/pull/28457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28457.patch",
"merged_at": "2024-01-12T14:28:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/28456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28456/comments | https://api.github.com/repos/huggingface/transformers/issues/28456/events | https://github.com/huggingface/transformers/issues/28456 | 2,077,240,191 | I_kwDOCUB6oc570Ct_ | 28,456 | Very slow on conditional check of HF tokenizers | {
"login": "pseudotensor",
"id": 2249614,
"node_id": "MDQ6VXNlcjIyNDk2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pseudotensor",
"html_url": "https://github.com/pseudotensor",
"followers_url": "https://api.github.com/users/pseudotensor/followers",
"following_url": "https://api.github.com/users/pseudotensor/following{/other_user}",
"gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions",
"organizations_url": "https://api.github.com/users/pseudotensor/orgs",
"repos_url": "https://api.github.com/users/pseudotensor/repos",
"events_url": "https://api.github.com/users/pseudotensor/events{/privacy}",
"received_events_url": "https://api.github.com/users/pseudotensor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-11T17:31:55 | 2024-01-12T14:45:19 | null | NONE | null | ### System Info
transformers==4.36.2
Python 3.10
Ubuntu 20
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It's pretty pythonic to avoid "x is not None" if could have just done "if x". I know for numpy objects this isn't same thing, but I don't know why tokenizer would do alot of extra work here. Seems like bug.
```
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
t0 = time.time()
if tokenizer:
pass
print(time.time() - t0)
t0 = time.time()
if tokenizer is not None:
pass
print(time.time() - t0)
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-4096-llama2-7b-chat", trust_remote_code=True)
t0 = time.time()
if tokenizer:
pass
print(time.time() - t0)
t0 = time.time()
if tokenizer is not None:
pass
print(time.time() - t0)
```
```
0.0909724235534668
9.5367431640625e-07
0.0019714832305908203
2.384185791015625e-07
```
### Expected behavior
Not be 100000x slower if do:
```
if tokenizer:
pass
```
vs. faster:
```
if tokenizer is not None:
pass
```
Point is that I might check tokenizer many times, and actually tokenizing things is much faster than that check, which is bad. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28456/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28455/comments | https://api.github.com/repos/huggingface/transformers/issues/28455/events | https://github.com/huggingface/transformers/pull/28455 | 2,077,174,720 | PR_kwDOCUB6oc5j1f1e | 28,455 | Changed type hinting for attentions | {
"login": "nakranivaibhav",
"id": 67785830,
"node_id": "MDQ6VXNlcjY3Nzg1ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakranivaibhav",
"html_url": "https://github.com/nakranivaibhav",
"followers_url": "https://api.github.com/users/nakranivaibhav/followers",
"following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}",
"gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions",
"organizations_url": "https://api.github.com/users/nakranivaibhav/orgs",
"repos_url": "https://api.github.com/users/nakranivaibhav/repos",
"events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakranivaibhav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-11T17:00:36 | 2024-01-12T12:19:41 | 2024-01-12T12:19:37 | CONTRIBUTOR | null | Issue No: #28345
# What does this PR do?
Changed type hinting for attentions to 'attentions: Optional[tuple[torch.FloatTensor,...]] = None'
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28455/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28455",
"html_url": "https://github.com/huggingface/transformers/pull/28455",
"diff_url": "https://github.com/huggingface/transformers/pull/28455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28455.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28454/comments | https://api.github.com/repos/huggingface/transformers/issues/28454/events | https://github.com/huggingface/transformers/issues/28454 | 2,077,166,524 | I_kwDOCUB6oc57zwu8 | 28,454 | Generate with Logits Processor not working - even with no modifications to logits | {
"login": "SamSJackson",
"id": 86316114,
"node_id": "MDQ6VXNlcjg2MzE2MTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/86316114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamSJackson",
"html_url": "https://github.com/SamSJackson",
"followers_url": "https://api.github.com/users/SamSJackson/followers",
"following_url": "https://api.github.com/users/SamSJackson/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSJackson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamSJackson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSJackson/subscriptions",
"organizations_url": "https://api.github.com/users/SamSJackson/orgs",
"repos_url": "https://api.github.com/users/SamSJackson/repos",
"events_url": "https://api.github.com/users/SamSJackson/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamSJackson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-11T16:56:45 | 2024-01-12T14:45:40 | 2024-01-11T17:31:35 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Not that I am aware of.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Overview
I am using a custom logits processor but I have made a custom class here that does nothing to the logits but shows that the process is not working.
Similarly, I am working with mistral's 7B instruct model but problem persists with other models, such as GPT2 - not found a working model yet.
## Minimum code to reproduce:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList, LogitsProcessor
from data.code.implementation.newkirch.extended_watermark_processor import WatermarkLogitsProcessor
import torch
device = "cuda"
model_name = "gpt2"
# model_name = "mistralai/Mistral-7B-Instruct-v0.2" # Mistral AI model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
class CustomLogitsProcessor(LogitsProcessor):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
return scores
def generate_essay(prompt):
messages = [{
"role": "user",
"content": prompt
}]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
# Setting `pad_token_id` to `eos_token_id` for open-ended generation.
generated_ids = model.generate(
model_inputs,
max_new_tokens=7500,
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
logits_processor=LogitsProcessorList([CustomLogitsProcessor])
)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
text = decoded[0].split("[/INST]")[1]
return text
prompt = '''You are a student working on the following assignment.
Create an essay based on the following topic in no more than a 100 words.
Topic: Why are cats better than dogs?
'''
text = generate_essay(prompt)
print(text)
```
## Response/error:
```
Traceback (most recent call last):
File "C:\Users\Sam\Desktop\Level4-Proj\data\mistral_test.py", line 41, in <module>
text = generate_essay(prompt)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\Desktop\Level4-Proj\data\mistral_test.py", line 22, in generate_essay
generated_ids = model.generate(
^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\utils.py", line 1777, in generate
return self.sample(
^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\utils.py", line 2887, in sample
next_token_scores = logits_processor(input_ids, next_token_logits)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\logits_process.py", line 94, in __call__
raise ValueError(
ValueError: Make sure that all the required parameters: ['self', 'input_ids', 'scores'] for <class 'type'> are passed to the logits processor.
```
From my investigation, it looks like it is getting caught up [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py), on line 90.
I have verified that "scores" and "input_ids" are both present when the call is made.
Not sure if the `function_args` should be checking in `list(function_args.keys()[3:])` but this is where the error is happening.
# Expectations
I would expect the model to generate some text corresponding to the prompt.
In an effort to show that the logit processor is doing nothing, sampling is off - results should be deterministic.
The text produced should be very similar to the text produced without the logits processor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28454/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28453/comments | https://api.github.com/repos/huggingface/transformers/issues/28453/events | https://github.com/huggingface/transformers/issues/28453 | 2,077,008,882 | I_kwDOCUB6oc57zKPy | 28,453 | Loading safetensor version of mistralai/Mistral-7B-Instruct-v0.1(7b) in Triton Server results in cuda OOM | {
"login": "jimmymanianchira",
"id": 9268915,
"node_id": "MDQ6VXNlcjkyNjg5MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9268915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimmymanianchira",
"html_url": "https://github.com/jimmymanianchira",
"followers_url": "https://api.github.com/users/jimmymanianchira/followers",
"following_url": "https://api.github.com/users/jimmymanianchira/following{/other_user}",
"gists_url": "https://api.github.com/users/jimmymanianchira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimmymanianchira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimmymanianchira/subscriptions",
"organizations_url": "https://api.github.com/users/jimmymanianchira/orgs",
"repos_url": "https://api.github.com/users/jimmymanianchira/repos",
"events_url": "https://api.github.com/users/jimmymanianchira/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimmymanianchira/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-11T15:47:44 | 2024-01-12T09:36:54 | null | NONE | null | ### System Info
transformers==4.36
Triton Instance - nvcr.io/nvidia/tritonserver:23.09-pyt-python-py3
torch==2.1.2
### Who can help?
We are trying to load and serve [Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) using Triton Inference server.
We initially was loading the weights using .bin and it worked fine. But we later moved to the safe tensor version. We noticed that after calling the model couple of times, its getting OOM. I did it in 4 gpu and same thing is happening. It works for the 1st inference call and after that, we end up with OOM. Its very strange and .bin weights work normally
@ArthurZucker and @younesbelkada @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load Mistral 7b in Triton Inference Server using safe tensor weights
2. Send couple of inference calls
3. Will notice OOM
### Expected behavior
Loading weights with safetensors shouldn't cause OOM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28453/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28452/comments | https://api.github.com/repos/huggingface/transformers/issues/28452/events | https://github.com/huggingface/transformers/pull/28452 | 2,076,802,059 | PR_kwDOCUB6oc5j0Mjb | 28,452 | Fix docker file | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-11T14:22:35 | 2024-01-11T14:34:07 | 2024-01-11T14:34:06 | COLLABORATOR | null | # What does this PR do?
The change in #28400 and #28432 break the docker image build. This PR fixes 2 issues to we can build the image for CI. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28452/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28452",
"html_url": "https://github.com/huggingface/transformers/pull/28452",
"diff_url": "https://github.com/huggingface/transformers/pull/28452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28452.patch",
"merged_at": "2024-01-11T14:34:06"
} |
https://api.github.com/repos/huggingface/transformers/issues/28451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28451/comments | https://api.github.com/repos/huggingface/transformers/issues/28451/events | https://github.com/huggingface/transformers/pull/28451 | 2,076,714,857 | PR_kwDOCUB6oc5jz4zg | 28,451 | Fix broken link on page | {
"login": "keenranger",
"id": 18392918,
"node_id": "MDQ6VXNlcjE4MzkyOTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/18392918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keenranger",
"html_url": "https://github.com/keenranger",
"followers_url": "https://api.github.com/users/keenranger/followers",
"following_url": "https://api.github.com/users/keenranger/following{/other_user}",
"gists_url": "https://api.github.com/users/keenranger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keenranger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keenranger/subscriptions",
"organizations_url": "https://api.github.com/users/keenranger/orgs",
"repos_url": "https://api.github.com/users/keenranger/repos",
"events_url": "https://api.github.com/users/keenranger/events{/privacy}",
"received_events_url": "https://api.github.com/users/keenranger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-11T13:52:12 | 2024-01-11T17:26:13 | 2024-01-11T17:26:13 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes broken link in documents. see the `Hub` link in [page](https://huggingface.co/docs/transformers/main/en/add_new_pipeline)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28451/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28451",
"html_url": "https://github.com/huggingface/transformers/pull/28451",
"diff_url": "https://github.com/huggingface/transformers/pull/28451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28451.patch",
"merged_at": "2024-01-11T17:26:13"
} |
https://api.github.com/repos/huggingface/transformers/issues/28450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28450/comments | https://api.github.com/repos/huggingface/transformers/issues/28450/events | https://github.com/huggingface/transformers/pull/28450 | 2,076,610,815 | PR_kwDOCUB6oc5jzhVb | 28,450 | Fix docstring checker issues with PIL enums | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-11T13:09:54 | 2024-01-11T17:23:43 | 2024-01-11T17:23:42 | MEMBER | null | The docstring checker is called by `make repo-consistency` or `make fix-copies`, but it struggles with enums, and particularly struggles with the `PIL.Resampling` enum, as this moved in PIL 10 and we set some code in `image_utils` to always put it in the same place. This caused issues where, depending on the installed PIL version, the docstring checker would try to replace enum names like `Resampling.BICUBIC` with the enum int value for that entry.
After this fix, people should be able to upgrade to the latest version of `PIL` and run `make fixup` or `make fix-copies` without issues! The issue may persist on older versions of PIL, unfortunately, where the value is sometimes just a raw `int` rather than an `Enum`, but we can just ask users to upgrade if they encounter issues there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28450/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28450/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28450",
"html_url": "https://github.com/huggingface/transformers/pull/28450",
"diff_url": "https://github.com/huggingface/transformers/pull/28450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28450.patch",
"merged_at": "2024-01-11T17:23:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/28449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28449/comments | https://api.github.com/repos/huggingface/transformers/issues/28449/events | https://github.com/huggingface/transformers/issues/28449 | 2,076,606,399 | I_kwDOCUB6oc57xn-_ | 28,449 | Intel/dpt-swinv2: TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType' | {
"login": "kadirnar",
"id": 36204372,
"node_id": "MDQ6VXNlcjM2MjA0Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kadirnar",
"html_url": "https://github.com/kadirnar",
"followers_url": "https://api.github.com/users/kadirnar/followers",
"following_url": "https://api.github.com/users/kadirnar/following{/other_user}",
"gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions",
"organizations_url": "https://api.github.com/users/kadirnar/orgs",
"repos_url": "https://api.github.com/users/kadirnar/repos",
"events_url": "https://api.github.com/users/kadirnar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kadirnar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-11T13:07:51 | 2024-01-15T16:36:34 | 2024-01-15T16:36:34 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"]
```
Error Message:
```
image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size)
patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size)
num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0])
TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType'
```
### Expected behavior
I want to test dpt-brit-large-384 models.
Model Page: https://huggingface.co/Intel/dpt-swinv2-large-384 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28449/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28448/comments | https://api.github.com/repos/huggingface/transformers/issues/28448/events | https://github.com/huggingface/transformers/issues/28448 | 2,076,593,743 | I_kwDOCUB6oc57xk5P | 28,448 | Interested in YOLOv6 Addition? | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 2 | 2024-01-11T13:02:06 | 2024-01-27T08:37:44 | 2024-01-27T08:37:44 | CONTRIBUTOR | null | ### Model description
Hi, transformers team my question is very simple. Does team is interested in implementing [YOLOv6](https://github.com/meituan/YOLOv6/tree/main)?
I have finished making inference pipeline and working on training pipeline.
https://github.com/SangbumChoi/transformers/tree/yolov6
Currently, it might occur little bug and unpretty but it works. I will continue to make it regardless of whether it is goint to be officially implemented or not.
```
from transformers import Yolov6Model, Yolov6ForObjectDetection
from transformers import Yolov6Config
import io
import requests
from PIL import Image
import torch
import numpy
from transformers.image_transforms import center_to_corners_format
from transformers import Yolov6ImageProcessor
from torchvision.ops.boxes import batched_nms
object_model = Yolov6ForObjectDetection.from_pretrained("superb-ai/yolov6n").cuda()
object_model.eval()
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = Yolov6ImageProcessor()
inputs = image_processor(images=image, size={"shortest_edge": 640, "longest_edge": 640}, return_tensors="pt")
label = False
if label:
n_targets = 8
batch_size = 1
torch_device = 'cuda'
labels = []
for i in range(batch_size):
target = {}
target["class_labels"] = torch.ones(
size=(n_targets,), device=torch_device, dtype=torch.long
)
target["boxes"] = torch.rand(
n_targets, 4, device=torch_device, dtype=torch.float
)
labels.append(target)
inputs['labels'] = labels
inputs["pixel_values"] = inputs["pixel_values"].cuda()
outputs = object_model(**inputs)
out_logits, out_bbox = outputs.logits, outputs.pred_boxes
batch_size, num_queries, num_labels = out_logits.shape
prob = out_logits.sigmoid()
all_scores = prob.reshape(batch_size, -1).to(out_logits.device)
all_indexes = torch.arange(num_queries * num_labels)[None].repeat(batch_size, 1).to(out_logits.device)
all_boxes = torch.div(all_indexes, out_logits.shape[2], rounding_mode="floor")
all_labels = all_indexes % out_logits.shape[2]
boxes = center_to_corners_format(out_bbox)
boxes = torch.gather(boxes, 1, all_boxes.unsqueeze(-1).repeat(1, 1, 4))
nms_threshold = 0.7
threshold = 0.3
results = []
for b in range(batch_size):
box = boxes[b]
score = all_scores[b]
lbls = all_labels[b]
# apply NMS
keep_inds = batched_nms(box, score, lbls, nms_threshold)[:100]
score = score[keep_inds]
lbls = lbls[keep_inds]
box = box[keep_inds]
results.append(
{
"scores": score[score > threshold],
"labels": lbls[score > threshold],
"boxes": box[score > threshold],
}
)
import matplotlib.pyplot as plt
# colors for visualization
COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098], [0.929, 0.694, 0.125],
[0.494, 0.184, 0.556], [0.466, 0.674, 0.188], [0.301, 0.745, 0.933]]
def plot_results(pil_img, scores, labels, boxes):
plt.figure(figsize=(16,10))
plt.imshow(pil_img)
ax = plt.gca()
colors = COLORS * 100
for score, label, (xmin, ymin, xmax, ymax),c in zip(scores.tolist(), labels.tolist(), boxes.tolist(), colors):
ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
fill=False, color=c, linewidth=3))
text = f'{object_model.config.id2label[label]}: {score:0.2f}'
ax.text(xmin, ymin, text, fontsize=15,
bbox=dict(facecolor='yellow', alpha=0.5))
plt.axis('off')
plt.show()
# postprocess model outputs
width, height = image.size
result = results[0]
plot_results(image, result['scores'], result['labels'], result['boxes'])
```

### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/superb-ai/yolov6n | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28448/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28447/comments | https://api.github.com/repos/huggingface/transformers/issues/28447/events | https://github.com/huggingface/transformers/pull/28447 | 2,076,391,809 | PR_kwDOCUB6oc5jywC5 | 28,447 | symbolic_trace: add past_key_values, llama, sdpa support | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-11T11:43:29 | 2024-01-17T10:50:54 | 2024-01-17T10:50:54 | COLLABORATOR | null | This PR:
* Allows to use `transformers.utils.fx.symbolic_trace` with `past_key_values` inputs for some architectures (currently opt, llama)
* Adds llama support for symbolic_trace.
* Adds SDPA support for symbolic_trace.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28447/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28447/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28447",
"html_url": "https://github.com/huggingface/transformers/pull/28447",
"diff_url": "https://github.com/huggingface/transformers/pull/28447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28447.patch",
"merged_at": "2024-01-17T10:50:54"
} |
https://api.github.com/repos/huggingface/transformers/issues/28446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28446/comments | https://api.github.com/repos/huggingface/transformers/issues/28446/events | https://github.com/huggingface/transformers/issues/28446 | 2,076,195,029 | I_kwDOCUB6oc57wDjV | 28,446 | Failed to import transformers.models.transfo_xl.configuration_transfo_xl | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 17 | 2024-01-11T09:58:46 | 2024-01-26T17:17:41 | null | NONE | null | ### System Info
Colab Notebook
### Who can help?
@ArthurZucker @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForSequenceClassification.from_pretrained(
TEACHER_MODEL,
problem_type="multi_label_classification",
num_labels=len(unique_labels),
id2label=id2label,
label2id=label2id
)
```
ERROR:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
1352 self._objects = {} if extra_objects is None else extra_objects
-> 1353 self._name = name
1354 self._import_structure = import_structure
11 frames
[/usr/lib/python3.10/importlib/__init__.py](https://localhost:8080/#) in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'transformers.models.transfo_xl.configuration_transfo_xl'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[<ipython-input-24-49d540f006ea>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model = AutoModel.from_pretrained(
2 TEACHER_MODEL,
3 problem_type="multi_label_classification",
4 num_labels=len(unique_labels),
5 id2label=id2label,
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
541
542 has_remote_code = hasattr(config, "auto_map") and cls.__name__ in config.auto_map
--> 543 has_local_code = type(config) in cls._model_mapping.keys()
544 trust_remote_code = resolve_trust_remote_code(
545 trust_remote_code, pretrained_model_name_or_path, has_local_code, has_remote_code
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in keys(self)
755
756 def keys(self):
--> 757 mapping_keys = [
758 self._load_attr_from_module(key, name)
759 for key, name in self._config_mapping.items()
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in <listcomp>(.0)
756 def keys(self):
757 mapping_keys = [
--> 758 self._load_attr_from_module(key, name)
759 for key, name in self._config_mapping.items()
760 if key in self._model_mapping.keys()
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in _load_attr_from_module(self, model_type, attr)
752 if module_name not in self._modules:
753 self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 754 return getattribute_from_module(self._modules[module_name], attr)
755
756 def keys(self):
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in getattribute_from_module(module, attr)
696 if isinstance(attr, tuple):
697 return tuple(getattribute_from_module(module, a) for a in attr)
--> 698 if hasattr(module, attr):
699 return getattr(module, attr)
700 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in __getattr__(self, name)
1341 super().__init__(name)
1342 self._modules = set(import_structure.keys())
-> 1343 self._class_to_module = {}
1344 for key, values in import_structure.items():
1345 for value in values:
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
1353 self._name = name
1354 self._import_structure = import_structure
-> 1355
1356 # Needed for autocompletion in an IDE
1357 def __dir__(self):
RuntimeError: Failed to import transformers.models.transfo_xl.configuration_transfo_xl because of the following error (look up to see its traceback):
No module named 'transformers.models.transfo_xl.configuration_transfo_xl'
```
### Expected behavior
run smoothly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28446/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28445/comments | https://api.github.com/repos/huggingface/transformers/issues/28445/events | https://github.com/huggingface/transformers/pull/28445 | 2,076,151,831 | PR_kwDOCUB6oc5jx69l | 28,445 | When using npu to reproduce the training results, `torch.manual_seed` is also needed | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-11T09:36:11 | 2024-01-22T09:45:50 | null | CONTRIBUTOR | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28445/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28445",
"html_url": "https://github.com/huggingface/transformers/pull/28445",
"diff_url": "https://github.com/huggingface/transformers/pull/28445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28445.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28444/comments | https://api.github.com/repos/huggingface/transformers/issues/28444/events | https://github.com/huggingface/transformers/issues/28444 | 2,076,025,396 | I_kwDOCUB6oc57vaI0 | 28,444 | Attribute Error: 'GenerationConfig' object has no attribute 'lang_to_id' | {
"login": "kimbaang",
"id": 154123661,
"node_id": "U_kgDOCS-9jQ",
"avatar_url": "https://avatars.githubusercontent.com/u/154123661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kimbaang",
"html_url": "https://github.com/kimbaang",
"followers_url": "https://api.github.com/users/kimbaang/followers",
"following_url": "https://api.github.com/users/kimbaang/following{/other_user}",
"gists_url": "https://api.github.com/users/kimbaang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kimbaang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kimbaang/subscriptions",
"organizations_url": "https://api.github.com/users/kimbaang/orgs",
"repos_url": "https://api.github.com/users/kimbaang/repos",
"events_url": "https://api.github.com/users/kimbaang/events{/privacy}",
"received_events_url": "https://api.github.com/users/kimbaang/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-11T08:20:21 | 2024-01-11T11:14:12 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'nvme', 'offload_optimizer_nvme_path': '/nvme', 'offload_param_device': 'nvme', 'offload_param_nvme_path': '/nvme', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class GenerateModel(tf.Module):
def __init__(self, model):
super(GenerateModel, self).__init__()
self.model = model
@tf.function(
# shouldn't need static batch size, but throws exception without it (needs to be fixed)
input_signature=[
tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"),
],
)
def serving(self, input_features):
outputs = self.model.generate(
inputs=input_features,
task="transcribe",
language="<|ko|>",
max_new_tokens=450, # change as needed
return_dict_in_generate=True,
)
return {"sequences": outputs["sequences"]}
```
I have provided `task` and `language` param to the self.model.generate() function,
Below is just a normal script that converts `whisper-tiny` model into tflite version.
```python
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
generate_model = GenerateModel(model=model)
tf.saved_model.save(
generate_model,
args.from_huggingface_dir,
signatures={"serving_default": generate_model.serving},
)
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(args.from_huggingface_dir)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS, # enable TensorFlow ops.
]
# Perform full integer 8-bit quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
# Save the model
with open(args.to_tflite_path, "wb") as f:
f.write(tflite_model)
# Loading dataset
ds = datasets.load_from_disk(args.dataset)
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", predict_timestamps=True)
processor = WhisperProcessor(feature_extractor, tokenizer)
inputs = feature_extractor(
ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf"
)
input_features = inputs.input_features
labels = ds[0]["text"]
# loaded model... now with generate!
interpreter = tf.lite.Interpreter(args.to_tflite_path)
tflite_generate = interpreter.get_signature_runner()
generated_ids = tflite_generate(input_features=input_features)["sequences"]
print("label: ", labels)
print("prediction: ", generated_ids)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
But the following error has occurred:
```sh
outputs = self.model.generate(
File "/home/chris/anaconda3/envs/pt212-cuda122/lib/python3.11/site-packages/transformers/models/whisper/modeling_tf_whisper.py", line 1486, in generate *
if generation_config.language in generation_config.lang_to_id.keys():
AttributeError: 'GenerationConfig' object has no attribute 'lang_to_id'
```
### Expected behavior
I wanted to make sure the `task` and `language` parameters are passed to the function and according tokens are added prior to token generation so to generate korean transcripts successfully. Expected as in the order like below:
"<|startoftranscript|>": 50258 -> "<|ko|>": 50264 -> "<|transcribe|>": 50359 -> "<|notimestamps|>": 50363,
However 'lang_to_id' attribute doesn't seem to be included in the 'GenerationConfig' object and emits the error like above. Am I missing something here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28444/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28443/comments | https://api.github.com/repos/huggingface/transformers/issues/28443/events | https://github.com/huggingface/transformers/pull/28443 | 2,076,012,129 | PR_kwDOCUB6oc5jxcqZ | 28,443 | Adding [T5/MT5/UMT5]ForTokenClassification | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-11T08:11:29 | 2024-02-01T02:53:49 | 2024-02-01T02:53:49 | CONTRIBUTOR | null | # What does this PR do?
Adding [T5/MT5/UMT5]ForTokenClassification. See discussion [here](https://github.com/huggingface/transformers/pull/26683#issuecomment-1874899361).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28443/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28443",
"html_url": "https://github.com/huggingface/transformers/pull/28443",
"diff_url": "https://github.com/huggingface/transformers/pull/28443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28443.patch",
"merged_at": "2024-02-01T02:53:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28442/comments | https://api.github.com/repos/huggingface/transformers/issues/28442/events | https://github.com/huggingface/transformers/pull/28442 | 2,075,968,215 | PR_kwDOCUB6oc5jxTKh | 28,442 | Refine xpu device setting | {
"login": "zhuhong61",
"id": 95205772,
"node_id": "U_kgDOBay5jA",
"avatar_url": "https://avatars.githubusercontent.com/u/95205772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuhong61",
"html_url": "https://github.com/zhuhong61",
"followers_url": "https://api.github.com/users/zhuhong61/followers",
"following_url": "https://api.github.com/users/zhuhong61/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuhong61/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuhong61/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuhong61/subscriptions",
"organizations_url": "https://api.github.com/users/zhuhong61/orgs",
"repos_url": "https://api.github.com/users/zhuhong61/repos",
"events_url": "https://api.github.com/users/zhuhong61/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuhong61/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-11T07:41:45 | 2024-01-11T15:38:18 | null | NONE | null | We'd like to revise some xpu related logic in device settting.
1. For MULTI_XPU case, we should set the device according to local rank, instead of to xpu:0 only.
2. If user set deepspeed config, deepspeed path should be triggered. But current logic will trigger xpu DDP logic not deepspeed. We adjust the order and put deepspeed ahead of xpu.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28442/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28442",
"html_url": "https://github.com/huggingface/transformers/pull/28442",
"diff_url": "https://github.com/huggingface/transformers/pull/28442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28442.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28441/comments | https://api.github.com/repos/huggingface/transformers/issues/28441/events | https://github.com/huggingface/transformers/issues/28441 | 2,075,659,577 | I_kwDOCUB6oc57uA05 | 28,441 | Proposal for Adding a New Scheduler Strategy for Language Model Pretraining | {
"login": "gmftbyGMFTBY",
"id": 27548710,
"node_id": "MDQ6VXNlcjI3NTQ4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/27548710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmftbyGMFTBY",
"html_url": "https://github.com/gmftbyGMFTBY",
"followers_url": "https://api.github.com/users/gmftbyGMFTBY/followers",
"following_url": "https://api.github.com/users/gmftbyGMFTBY/following{/other_user}",
"gists_url": "https://api.github.com/users/gmftbyGMFTBY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmftbyGMFTBY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmftbyGMFTBY/subscriptions",
"organizations_url": "https://api.github.com/users/gmftbyGMFTBY/orgs",
"repos_url": "https://api.github.com/users/gmftbyGMFTBY/repos",
"events_url": "https://api.github.com/users/gmftbyGMFTBY/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmftbyGMFTBY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 4 | 2024-01-11T03:28:56 | 2024-01-12T14:47:52 | null | CONTRIBUTOR | null | ### Feature request
We try to propose the addition of a new and widely-adopted scheduler strategy for language model pretraining in the Transformers repository. Upon reviewing the current schedulers available in the [Transformers optimization module](https://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py#L338), it appears there is a notable absence of an out-of-the-box implementation for a specific type of scheduler. This particular scheduler is prevalent in recent pre-training models and features a warmup decay, but importantly, it also maintains a limited minimum learning rate post-maximum iteration steps.
This scheduling approach has seen extensive use in several prominent pre-trained large language models (LLMs), including:
1. TinyLLaMA: Implementation details can be found in their [pretraining script](https://github.com/jzhang38/TinyLlama/blob/main/pretrain/tinyllama.py#L375).
2. MindLLM: Described in their research paper, available at [arXiv:2310.15777](https://arxiv.org/pdf/2310.15777.pdf).
3. trlx: Utilized in the TRLx framework, as seen in their [GitHub repository](https://github.com/CarperAI/trlx/tree/main).
4. ...
The introduction of this scheduler into the Transformers library would not only complete the suite of existing scheduling strategies but also provide practitioners with a tool that's already proven its efficacy in recent LLM training methodologies. I believe its inclusion will be beneficial for the community, fostering more efficient and effective pretraining processes.
### Motivation
This issue aims to introduce a novel scheduler into the current Transformers library. The proposed scheduler combines the elements of warmup decay with a distinctive feature - the implementation of a constrained minimum learning rate beyond the maximum iteration steps.
### Your contribution
Yes, we could submit a PR as soon as possible if any huggingface members think this contribution is necessary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28441/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28441/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28440/comments | https://api.github.com/repos/huggingface/transformers/issues/28440/events | https://github.com/huggingface/transformers/issues/28440 | 2,074,813,346 | I_kwDOCUB6oc57qyOi | 28,440 | Adding mixtral attention_bias in style of llama modeling | {
"login": "Moreh-LeeJunhyeok",
"id": 99154015,
"node_id": "U_kgDOBej4Xw",
"avatar_url": "https://avatars.githubusercontent.com/u/99154015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moreh-LeeJunhyeok",
"html_url": "https://github.com/Moreh-LeeJunhyeok",
"followers_url": "https://api.github.com/users/Moreh-LeeJunhyeok/followers",
"following_url": "https://api.github.com/users/Moreh-LeeJunhyeok/following{/other_user}",
"gists_url": "https://api.github.com/users/Moreh-LeeJunhyeok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moreh-LeeJunhyeok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moreh-LeeJunhyeok/subscriptions",
"organizations_url": "https://api.github.com/users/Moreh-LeeJunhyeok/orgs",
"repos_url": "https://api.github.com/users/Moreh-LeeJunhyeok/repos",
"events_url": "https://api.github.com/users/Moreh-LeeJunhyeok/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moreh-LeeJunhyeok/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 2 | 2024-01-10T17:11:22 | 2024-01-17T21:22:31 | null | NONE | null | ### Feature request
### System Info
transformers version: 4.36.2
### Who can help?
don't have a clue about this
### Information
Refer to llama2 modeling code, I want to add attention bias option in mixtral model and configuration for flexibility of experiments.
If this changes seems appropriate, I will make a PR for it
### Expected behavior
After changes, attention bias option of model is added in config.
Can be controlled like example below(default config value is false)
```
from transformers import AutoConfig
config = AutoConfig.from_pretrained("variant_of_mixtral")
config.attention_bias = True
```
### Motivation
Refer to llama2 modeling code, I want to add attention bias option in mixtral model and configuration for flexibility of experiments.
### Your contribution
I have created a fix branch. I can make a PR of it
refer to [link](https://github.com/Moreh-LeeJunhyeok/transformers/tree/mixtral_add_attention_bias) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28440/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28439/comments | https://api.github.com/repos/huggingface/transformers/issues/28439/events | https://github.com/huggingface/transformers/pull/28439 | 2,074,806,467 | PR_kwDOCUB6oc5jtUbo | 28,439 | Task-specific pipeline init args | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-10T17:07:39 | 2024-01-30T16:55:01 | 2024-01-30T16:54:57 | COLLABORATOR | null | # What does this PR do?
Related to: https://github.com/huggingface/transformers/pull/28439#issue-2074806467
Adds a function to build the pipelines init arguments based on the processing objects it accepts: tokenizer, image processor, feature extractor.
This replaces `PIPELINE_INIT_ARGS` for the task-specific arguments so as to avoid e.g. `tokenizer` being listed as an input for some models when it's not correct.
Removes `task` as an input argument for the task-specific pipelines (the task is already specified).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28439/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28439",
"html_url": "https://github.com/huggingface/transformers/pull/28439",
"diff_url": "https://github.com/huggingface/transformers/pull/28439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28439.patch",
"merged_at": "2024-01-30T16:54:57"
} |
https://api.github.com/repos/huggingface/transformers/issues/28438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28438/comments | https://api.github.com/repos/huggingface/transformers/issues/28438/events | https://github.com/huggingface/transformers/issues/28438 | 2,074,777,327 | I_kwDOCUB6oc57qpbv | 28,438 | Multi-worker HF training using trainer API in torch-xla result in too many graph compilations after saving checkpoint (transformers>=4.35) | {
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-10T16:51:56 | 2024-01-24T20:37:05 | 2024-01-24T20:37:04 | CONTRIBUTOR | null | ### System Info
transformers>=4.35
Neuron SDK 2.15 with torch-neuronx 1.13
### Who can help?
@muellerz
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(Duplicate of https://github.com/aws-neuron/aws-neuron-sdk/issues/813)
I followed [PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html#torch-hf-bert-finetune) to fine-tune BERT. I ran run_2w.sh and see the following behavior where it runs normally until the first checkpoint is saved, but then starts doing compilation for every step (I changed save_steps option in run_2w.sh to 10 steps in order to trigger the issue faster):
```
[INFO|trainer.py:1712] 2024-01-09 17:04:08,045 >> ***** Running training *****
[INFO|trainer.py:1713] 2024-01-09 17:04:08,045 >> Num examples = 1,840
[INFO|trainer.py:1714] 2024-01-09 17:04:08,045 >> Num Epochs = 5
[INFO|trainer.py:1715] 2024-01-09 17:04:08,045 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1718] 2024-01-09 17:04:08,045 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1719] 2024-01-09 17:04:08,045 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1720] 2024-01-09 17:04:08,045 >> Total optimization steps = 1,150 [INFO|trainer.py:1721] 2024-01-09 17:04:08,045 >> Number of trainable parameters = 109,483,778
0%| | 0/1150 [00:00<?, ?it/s]2024-01-09 17:04:08.000173: 140637 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:08.000175: 140637 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1650
6334326618155050+abb26765/model.neff. Exiting with a successfully compiled graph.
0%| | 1/1150 [00:00<04:53, 3.92it/s]2024-01-09 17:04:09.000508: 140742 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache 2024-01-09 17:04:09.000603: 140742 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_2044
823947559839528+abb26765/model.neff. Exiting with a successfully compiled graph.
0%| | 2/1150 [00:02<29:23, 1.54s/it]2024-01-09 17:04:13.000328: 140780 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:13.000442: 140780 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_7850
734058944619683+abb26765/model.neff. Exiting with a successfully compiled graph.
1%| | 10/1150 [00:09<08:40, 2.19it/s][INFO|trainer.py:2859] 2024-01-09 17:04:17,051 >> Saving model checkpoint to /tmp/mrpc/tmp-checkpoint-10
(Done saving checkpoint, then compilation every step below)
2024-01-09 17:04:17.000789: 141260 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:17.000873: 141260 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_2523
922307180626946+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:20.000215: 141270 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:20.000216: 141270 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_6208
462474369064908+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:21.000202: 141279 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:21.000282: 141279 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1498
3430005009285767+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:23.000265: 141288 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:23.000266: 141288 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_3356
031905174227108+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:24.000025: 141297 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:24.000104: 141297 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_5950
234423484734321+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:26.000063: 141306 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:26.000064: 141306 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1050
0036830841255848+abb26765/model.neff. Exiting with a successfully compiled graph.
(Compilation repeated many times, and eventually run out of device memory in Neuron runtime)
```
This issue starts in transformers version 4.35.
### Expected behavior
We should see training completes normally with few torch-xla complations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28438/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28437/comments | https://api.github.com/repos/huggingface/transformers/issues/28437/events | https://github.com/huggingface/transformers/pull/28437 | 2,074,758,258 | PR_kwDOCUB6oc5jtJze | 28,437 | Fix load correct tokenizer in Mixtral model documentation | {
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-10T16:41:21 | 2024-01-13T02:40:36 | 2024-01-10T17:09:07 | CONTRIBUTOR | null | # What does this PR do?
There is an incorrect non existent tokenizer linked in the Mixtral documentation. https://huggingface.co/docs/transformers/main/en/model_doc/mixtral#usage-tips
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28437/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28437",
"html_url": "https://github.com/huggingface/transformers/pull/28437",
"diff_url": "https://github.com/huggingface/transformers/pull/28437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28437.patch",
"merged_at": "2024-01-10T17:09:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28436/comments | https://api.github.com/repos/huggingface/transformers/issues/28436/events | https://github.com/huggingface/transformers/pull/28436 | 2,074,723,389 | PR_kwDOCUB6oc5jtCKF | 28,436 | Add qwen2 | {
"login": "JustinLin610",
"id": 27664428,
"node_id": "MDQ6VXNlcjI3NjY0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/27664428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JustinLin610",
"html_url": "https://github.com/JustinLin610",
"followers_url": "https://api.github.com/users/JustinLin610/followers",
"following_url": "https://api.github.com/users/JustinLin610/following{/other_user}",
"gists_url": "https://api.github.com/users/JustinLin610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JustinLin610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JustinLin610/subscriptions",
"organizations_url": "https://api.github.com/users/JustinLin610/orgs",
"repos_url": "https://api.github.com/users/JustinLin610/repos",
"events_url": "https://api.github.com/users/JustinLin610/events{/privacy}",
"received_events_url": "https://api.github.com/users/JustinLin610/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-10T16:22:43 | 2024-01-17T15:02:23 | 2024-01-17T15:02:23 | CONTRIBUTOR | null | # Adding Qwen2
This PR adds the support of codes for the coming Qwen2 models. For information about Qwen, please visit https://github.com/QwenLM/Qwen. @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28436/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28436/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28436",
"html_url": "https://github.com/huggingface/transformers/pull/28436",
"diff_url": "https://github.com/huggingface/transformers/pull/28436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28436.patch",
"merged_at": "2024-01-17T15:02:22"
} |
https://api.github.com/repos/huggingface/transformers/issues/28435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28435/comments | https://api.github.com/repos/huggingface/transformers/issues/28435/events | https://github.com/huggingface/transformers/issues/28435 | 2,074,690,827 | I_kwDOCUB6oc57qUUL | 28,435 | Skip some weights for load_in_8bit and keep them as fp16/32? | {
"login": "gregor-ge",
"id": 7710563,
"node_id": "MDQ6VXNlcjc3MTA1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7710563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gregor-ge",
"html_url": "https://github.com/gregor-ge",
"followers_url": "https://api.github.com/users/gregor-ge/followers",
"following_url": "https://api.github.com/users/gregor-ge/following{/other_user}",
"gists_url": "https://api.github.com/users/gregor-ge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gregor-ge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gregor-ge/subscriptions",
"organizations_url": "https://api.github.com/users/gregor-ge/orgs",
"repos_url": "https://api.github.com/users/gregor-ge/repos",
"events_url": "https://api.github.com/users/gregor-ge/events{/privacy}",
"received_events_url": "https://api.github.com/users/gregor-ge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-10T16:04:54 | 2024-01-13T13:58:28 | 2024-01-13T13:58:28 | NONE | null | ### Feature request
Hello,
I am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.
### Motivation
My motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.
As far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.
### Your contribution
I can in theory help implement something like this but I don't know where and how in the code this should be done. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28435/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28434/comments | https://api.github.com/repos/huggingface/transformers/issues/28434/events | https://github.com/huggingface/transformers/issues/28434 | 2,074,395,530 | I_kwDOCUB6oc57pMOK | 28,434 | Llama2 inference in bfloat16 | {
"login": "JeevanBhoot",
"id": 64039772,
"node_id": "MDQ6VXNlcjY0MDM5Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/64039772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeevanBhoot",
"html_url": "https://github.com/JeevanBhoot",
"followers_url": "https://api.github.com/users/JeevanBhoot/followers",
"following_url": "https://api.github.com/users/JeevanBhoot/following{/other_user}",
"gists_url": "https://api.github.com/users/JeevanBhoot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeevanBhoot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeevanBhoot/subscriptions",
"organizations_url": "https://api.github.com/users/JeevanBhoot/orgs",
"repos_url": "https://api.github.com/users/JeevanBhoot/repos",
"events_url": "https://api.github.com/users/JeevanBhoot/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeevanBhoot/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2024-01-10T13:36:27 | 2024-01-10T14:30:24 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes - 1x RTX 4090
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to run both GPTQ and unquantized Llama2 models:
```python
gptq_config = GPTQConfig(bits=4, disable_exllama=True)
model_path = "TheBloke/Llama-2-7B-Chat-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=False, revision="main", quantization_config=gptq_config, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
input_ids = tokenizer("Tell me an interesting fact!", return_tensors="pt").input_ids.to("cuda").to(torch.bfloat16)
output = model.generate(input_ids)
```
and
```python
model_path = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=False, revision="main", torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
input_ids = tokenizer("Tell me an interesting fact!", return_tensors="pt").input_ids.to("cuda").to(torch.bfloat16)
output = model.generate(input_ids)
```
### Expected behavior
I am trying to run Llama2 inference in bfloat16 - I want all weights and computations to be in bfloat16. When I run the two snippets provided, I encounter the following error:
```
File "/home/jeevan/miniconda3/envs/llama2_env/lib/python3.10/site-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got CUDABFloat16Type instead (while checking arguments for embedding)
```
If I keep the input in int64, then this works fine i.e. changing just the model weights to bfloat16 works fine. But I want all computation to be performed in bfloat16. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28434/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28433/comments | https://api.github.com/repos/huggingface/transformers/issues/28433/events | https://github.com/huggingface/transformers/pull/28433 | 2,074,298,641 | PR_kwDOCUB6oc5jrky2 | 28,433 | Enable multi-label image classification in pipeline | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-10T12:43:38 | 2024-01-11T11:40:41 | 2024-01-11T10:29:39 | COLLABORATOR | null | # What does this PR do?
Enables multilabel image classification in the pipeline and explicity specifying the activation function applied to the model's logits - matching the logic for [text classification pipeline](https://github.com/huggingface/transformers/blob/ffd3710391c0700a3957f0cdf2c99bc5ae966c70/src/transformers/pipelines/text_classification.py#L195).
Fixes https://github.com/huggingface/huggingface.js/issues/429
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28433/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28433",
"html_url": "https://github.com/huggingface/transformers/pull/28433",
"diff_url": "https://github.com/huggingface/transformers/pull/28433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28433.patch",
"merged_at": "2024-01-11T10:29:39"
} |
https://api.github.com/repos/huggingface/transformers/issues/28432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28432/comments | https://api.github.com/repos/huggingface/transformers/issues/28432/events | https://github.com/huggingface/transformers/pull/28432 | 2,074,076,507 | PR_kwDOCUB6oc5jqz3A | 28,432 | CI: limit natten version | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-10T10:36:45 | 2024-01-10T12:39:09 | 2024-01-10T12:39:05 | MEMBER | null | # What does this PR do?
[natten](https://github.com/SHI-Labs/NATTEN/) has a new release (v0.15.0), breaking our CI. This PR limits its version to the latest working version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28432/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28432/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28432",
"html_url": "https://github.com/huggingface/transformers/pull/28432",
"diff_url": "https://github.com/huggingface/transformers/pull/28432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28432.patch",
"merged_at": "2024-01-10T12:39:05"
} |
https://api.github.com/repos/huggingface/transformers/issues/28431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28431/comments | https://api.github.com/repos/huggingface/transformers/issues/28431/events | https://github.com/huggingface/transformers/pull/28431 | 2,073,804,691 | PR_kwDOCUB6oc5jp4fY | 28,431 | Doc | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-10T07:59:03 | 2024-01-11T16:55:48 | 2024-01-11T16:55:48 | CONTRIBUTOR | null | Hi @amyeroberts
We need to update some lib versions for CPU training in the docs, would you please help review it? Thx! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28431/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28431",
"html_url": "https://github.com/huggingface/transformers/pull/28431",
"diff_url": "https://github.com/huggingface/transformers/pull/28431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28431.patch",
"merged_at": "2024-01-11T16:55:48"
} |
https://api.github.com/repos/huggingface/transformers/issues/28430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28430/comments | https://api.github.com/repos/huggingface/transformers/issues/28430/events | https://github.com/huggingface/transformers/pull/28430 | 2,073,787,119 | PR_kwDOCUB6oc5jp0rr | 28,430 | Fix number of models in README.md | {
"login": "prasatee",
"id": 142558246,
"node_id": "U_kgDOCH9EJg",
"avatar_url": "https://avatars.githubusercontent.com/u/142558246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasatee",
"html_url": "https://github.com/prasatee",
"followers_url": "https://api.github.com/users/prasatee/followers",
"following_url": "https://api.github.com/users/prasatee/following{/other_user}",
"gists_url": "https://api.github.com/users/prasatee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prasatee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasatee/subscriptions",
"organizations_url": "https://api.github.com/users/prasatee/orgs",
"repos_url": "https://api.github.com/users/prasatee/repos",
"events_url": "https://api.github.com/users/prasatee/events{/privacy}",
"received_events_url": "https://api.github.com/users/prasatee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-10T07:46:33 | 2024-01-10T11:11:09 | 2024-01-10T11:11:09 | CONTRIBUTOR | null | # What does this PR do?
This fixes a small typo in the README.md
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28430/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28430",
"html_url": "https://github.com/huggingface/transformers/pull/28430",
"diff_url": "https://github.com/huggingface/transformers/pull/28430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28430.patch",
"merged_at": "2024-01-10T11:11:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28429/comments | https://api.github.com/repos/huggingface/transformers/issues/28429/events | https://github.com/huggingface/transformers/pull/28429 | 2,073,673,605 | PR_kwDOCUB6oc5jpb-B | 28,429 | disable query_length diff on graph model | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2024-01-10T06:18:30 | 2024-01-17T10:51:04 | 2024-01-17T10:51:04 | CONTRIBUTOR | null | Hi @fxmarty
In generation tasks, the model will not use `AttentionMaskConverter._unmask_unattended` on the 1st token because no `past_key_values`, but will use it from the 2nd token. It will cause a different implementation while tracing, so we need to disable check `query_length` when using `jit.trace`.
cc @younesbelkada @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28429/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28429",
"html_url": "https://github.com/huggingface/transformers/pull/28429",
"diff_url": "https://github.com/huggingface/transformers/pull/28429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28429.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28428/comments | https://api.github.com/repos/huggingface/transformers/issues/28428/events | https://github.com/huggingface/transformers/issues/28428 | 2,073,602,733 | I_kwDOCUB6oc57mKqt | 28,428 | Huggingface endpoint not working | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-10T05:14:27 | 2024-01-10T11:09:34 | null | NONE | null | ### System Info
```
2024-01-10 05:12:28.914726: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-10 05:12:28.914812: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-10 05:12:28.917235: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-10 05:12:31.226361: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.36.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import requests
import json
def call_huggingface_api(text, params=None):
url = "https://d2q5h5r3a1pkorfp.us-east-1.aws.endpoints.huggingface.cloud"
endpoint = "/"
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
data = {
"inputs": text,
"parameters": params
}
json_data = json.dumps(data)
response = requests.post(url + endpoint, data=json_data, headers=headers)
if response.status_code == 200:
return response.json()
else:
print("Request failed with status code:", response.status_code)
return None
```
```
parameters = {
"top_k": None
}
result = call_huggingface_api(text, parameters)
print(result)
```
gives
```
Request failed with status code: 502
None
```
### Expected behavior
runs properly with result | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28428/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28427/comments | https://api.github.com/repos/huggingface/transformers/issues/28427/events | https://github.com/huggingface/transformers/issues/28427 | 2,073,519,255 | I_kwDOCUB6oc57l2SX | 28,427 | RagRetriever download too much data and won't stop | {
"login": "MohammadDara",
"id": 6161219,
"node_id": "MDQ6VXNlcjYxNjEyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6161219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammadDara",
"html_url": "https://github.com/MohammadDara",
"followers_url": "https://api.github.com/users/MohammadDara/followers",
"following_url": "https://api.github.com/users/MohammadDara/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammadDara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammadDara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammadDara/subscriptions",
"organizations_url": "https://api.github.com/users/MohammadDara/orgs",
"repos_url": "https://api.github.com/users/MohammadDara/repos",
"events_url": "https://api.github.com/users/MohammadDara/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammadDara/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-10T03:32:43 | 2024-01-12T10:36:44 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-14.1.1-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
import torch
import faiss
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact")
```
### Expected behavior
I expect RagRetriever to download data and finish it. but it will never stop. here is part of the result:
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.32G/1.32G [00:14<00:00, 88.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:14<00:00, 89.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 88.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 85.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:17<00:00, 75.9MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 83.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:14<00:00, 89.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 88.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 86.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 82.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 81.9MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 78.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 69.7MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 80.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 69.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [02:03<00:00, 10.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:26<00:00, 50.7MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:22<00:00, 59.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 66.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:20<00:00, 65.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 71.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 67.0MB/s]
It will continue downloading if you don't stop it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28427/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28425/comments | https://api.github.com/repos/huggingface/transformers/issues/28425/events | https://github.com/huggingface/transformers/issues/28425 | 2,073,426,512 | I_kwDOCUB6oc57lfpQ | 28,425 | GQA Llama 13B slower than Llama 13B without GQA | {
"login": "Adonai02",
"id": 70610799,
"node_id": "MDQ6VXNlcjcwNjEwNzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/70610799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Adonai02",
"html_url": "https://github.com/Adonai02",
"followers_url": "https://api.github.com/users/Adonai02/followers",
"following_url": "https://api.github.com/users/Adonai02/following{/other_user}",
"gists_url": "https://api.github.com/users/Adonai02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Adonai02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Adonai02/subscriptions",
"organizations_url": "https://api.github.com/users/Adonai02/orgs",
"repos_url": "https://api.github.com/users/Adonai02/repos",
"events_url": "https://api.github.com/users/Adonai02/events{/privacy}",
"received_events_url": "https://api.github.com/users/Adonai02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 2 | 2024-01-10T01:28:37 | 2024-01-13T18:01:31 | null | NONE | null | ### Feature request
It would be nice if when I choose different key_value_heads (key_value_heads < attention_heads) on config's model, automatically the attn weights were computed by mean pooling. Right now, if I do this, it gives me the next error.
key_value_heads = 4
<img width="916" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/05ae81c3-2ac6-4339-a805-02725ff9b538">
### Motivation
Make models faster, e.g Llama 2 13B, Llama 7B, Mistral 7B etc.
### Your contribution
I tried to do a simple implementation. But it gives me inconsistent results. GQA model is slower than No GQA model.
```
from transformers import LlamaConfig
from transformers.models.llama.modeling_llama import LlamaAttention, LlamaSdpaAttention
from copy import deepcopy
import torch
def split_attention_to_heads(input_tensor, num_splits):
# Get the shape of the input tensor
rows, cols = input_tensor.shape
# Check if the number of rows is divisible by the number of splits
if rows % num_splits != 0:
raise ValueError("Number of rows is not divisible by the number of splits")
# Calculate the number of rows in each split
# Use chunk to split the tensor along the rows
split_tensors = input_tensor.chunk(num_splits, dim=0)
return split_tensors
def average_heads(tensor_tuple, group_size, dtype):
# Initialize an empty list to store the averaged tensors
averaged_tensors = []
# Iterate through the tuple and average consecutive groups
for i in range(0, len(tensor_tuple), group_size):
# Take a group of tensors
tensor_group = tensor_tuple[i:i + group_size]
# Calculate the mean along dimension 0
averaged_tensor = torch.mean(torch.stack(tensor_group), dim=0, dtype=dtype)
# Append the averaged tensor to the list
averaged_tensors.append(averaged_tensor)
# Convert the list of averaged tensors to a tuple
averaged_tensors_tuple = tuple(averaged_tensors)
return averaged_tensors_tuple
def convert_wts_to_gqa(attention_module: torch.nn.Module , model_configuration: LlamaConfig):
attentions_wts = attention_module.state_dict().copy()
num_heads = model_configuration.num_attention_heads
gqa_groups = num_heads // model_configuration.num_key_value_heads
for name_wts in list(attentions_wts.keys()):
if ("k_proj" in name_wts) or ("v_proj" in name_wts):
tensor_to_convert = attentions_wts[name_wts].clone()
torch_dtype = tensor_to_convert.dtype
attn_heads = split_attention_to_heads(tensor_to_convert, num_splits=num_heads)
gqa_tensors_grouped = average_heads(attn_heads, gqa_groups, dtype=torch_dtype)
gqa_tensors_grouped = torch.cat(gqa_tensors_grouped)
attentions_wts[name_wts] = gqa_tensors_grouped
del tensor_to_convert
return attentions_wts
def convert_llama_to_gqa(module: torch.nn.Module, llama_config_from_hf: LlamaConfig, inplace: bool = False):
if isinstance(module, LlamaAttention):
wts_gqa = convert_wts_to_gqa(attention_module=module, model_configuration=llama_config_from_hf)
llama_atention_gqa = LlamaAttention(llama_config_from_hf, layer_idx=module.layer_idx)
llama_atention_gqa.half()
llama_atention_gqa.load_state_dict(wts_gqa)
return llama_atention_gqa
out = module if inplace else deepcopy(module)
for name, child in out.named_children():
out._modules[name] = convert_llama_to_gqa(child, llama_config_from_hf=llama_config_from_hf, inplace=True)
return out
from transformers import AutoConfig
configuration_llama = AutoConfig.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
configuration_llama.num_key_value_heads = 4
llama_gqa = convert_llama_to_gqa(llama, configuration_llama)
```
**Results**
GQA LLAMA
<img width="784" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/d1a1c250-5ed3-4c34-9041-620b6b57ef3c">
NO GQA LLAMA
<img width="782" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/b09b020a-94fa-450c-a239-3f7fa5339f7a">
I don't know if I'm misunderstanding something, please let me know if you can see something I can't
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28425/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28424/comments | https://api.github.com/repos/huggingface/transformers/issues/28424/events | https://github.com/huggingface/transformers/pull/28424 | 2,073,406,278 | PR_kwDOCUB6oc5joi7U | 28,424 | Solve: Inconsistent decoding with additional special tokens between slow and fast tokenizers. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-10T01:01:08 | 2024-01-31T10:22:14 | null | CONTRIBUTOR | null | With this pull request, I have endeavored to remedy a minor decoding issue. If any problems remain with my proposed solution, I welcome your thoughtful feedback and suggestions for improvement.
Fixes #28287
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28424/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28424",
"html_url": "https://github.com/huggingface/transformers/pull/28424",
"diff_url": "https://github.com/huggingface/transformers/pull/28424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28424.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28423/comments | https://api.github.com/repos/huggingface/transformers/issues/28423/events | https://github.com/huggingface/transformers/pull/28423 | 2,073,383,421 | PR_kwDOCUB6oc5joeCJ | 28,423 | Fix paths to AI Sweden Models reference and model loading | {
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-10T00:30:25 | 2024-01-17T22:08:03 | 2024-01-15T08:09:23 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the paths to the correct models of AI Sweden Models, as they migrated their models to a different account https://huggingface.co/AI-Sweden-Models.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
- text models: @ArthurZucker and @younesbelkada
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28423/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28423",
"html_url": "https://github.com/huggingface/transformers/pull/28423",
"diff_url": "https://github.com/huggingface/transformers/pull/28423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28423.patch",
"merged_at": "2024-01-15T08:09:23"
} |
https://api.github.com/repos/huggingface/transformers/issues/28422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28422/comments | https://api.github.com/repos/huggingface/transformers/issues/28422/events | https://github.com/huggingface/transformers/pull/28422 | 2,073,334,024 | PR_kwDOCUB6oc5joTUU | 28,422 | Set `cache_dir` for `evaluate.load()` in example scripts | {
"login": "aphedges",
"id": 14283972,
"node_id": "MDQ6VXNlcjE0MjgzOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14283972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aphedges",
"html_url": "https://github.com/aphedges",
"followers_url": "https://api.github.com/users/aphedges/followers",
"following_url": "https://api.github.com/users/aphedges/following{/other_user}",
"gists_url": "https://api.github.com/users/aphedges/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aphedges/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aphedges/subscriptions",
"organizations_url": "https://api.github.com/users/aphedges/orgs",
"repos_url": "https://api.github.com/users/aphedges/repos",
"events_url": "https://api.github.com/users/aphedges/events{/privacy}",
"received_events_url": "https://api.github.com/users/aphedges/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-09T23:31:42 | 2024-01-11T14:38:44 | 2024-01-11T14:38:44 | CONTRIBUTOR | null | # What does this PR do?
While using `run_clm.py`,[^1] I noticed that some files were being added to my global cache, not the local cache. I set the `cache_dir` parameter for the one call to `evaluate.load()`, which partially solved the problem. I figured that while I was fixing the one script upstream, I might as well fix the problem in all other example scripts that I could.
There are still some files being added to my global cache, but this appears to be a bug in `evaluate` itself. This commit at least moves some of the files into the local cache, which is better than before.
To create this PR, I made the following regex-based transformation: `evaluate\.load\((.*?)\)` -> `evaluate\.load\($1,
cache_dir=model_args.cache_dir\)`. After using that, I manually fixed all modified files with `ruff` serving as useful guidance. During the process, I removed one existing usage of the `cache_dir` parameter in a script that did not have a corresponding `--cache-dir` argument declared.
[^1]: I specifically used `pytorch/language-modeling/run_clm.py` from v4.34.1 of the library. For the original code, see the following URL: https://github.com/huggingface/transformers/tree/acc394c4f5e1283c19783581790b3dc3105a3697/examples/pytorch/language-modeling/run_clm.py.
## Who can review?
Maintained examples:
- PyTorch:
- text models: @ArthurZucker
- TensorFlow: @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28422/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28422",
"html_url": "https://github.com/huggingface/transformers/pull/28422",
"diff_url": "https://github.com/huggingface/transformers/pull/28422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28422.patch",
"merged_at": "2024-01-11T14:38:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28421/comments | https://api.github.com/repos/huggingface/transformers/issues/28421/events | https://github.com/huggingface/transformers/pull/28421 | 2,073,059,648 | PR_kwDOCUB6oc5jnXEa | 28,421 | Skip now failing test in the Trainer tests | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-09T19:53:23 | 2024-01-10T11:02:32 | 2024-01-10T11:02:31 | CONTRIBUTOR | null | # What does this PR do?
https://github.com/huggingface/accelerate/pull/2319 introduced a revert on the DataLoader sampling logic to *not* use a SeedableRandomSampler by default as users were taken aback by the performance differences, so we've set it to `False` by default. This test is now back to its old way, where it was failing for ages.
As noted in the `skip`, one of my next items to hit is a configuration for Accelerator that can be passed to the `TrainingArguments` that can customize this, but for now it's not the case.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28421/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28421",
"html_url": "https://github.com/huggingface/transformers/pull/28421",
"diff_url": "https://github.com/huggingface/transformers/pull/28421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28421.patch",
"merged_at": "2024-01-10T11:02:31"
} |
https://api.github.com/repos/huggingface/transformers/issues/28420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28420/comments | https://api.github.com/repos/huggingface/transformers/issues/28420/events | https://github.com/huggingface/transformers/pull/28420 | 2,072,866,425 | PR_kwDOCUB6oc5jmscR | 28,420 | Optionally preprocess segmentation maps for MobileViT | {
"login": "harisankar95",
"id": 58052269,
"node_id": "MDQ6VXNlcjU4MDUyMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58052269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harisankar95",
"html_url": "https://github.com/harisankar95",
"followers_url": "https://api.github.com/users/harisankar95/followers",
"following_url": "https://api.github.com/users/harisankar95/following{/other_user}",
"gists_url": "https://api.github.com/users/harisankar95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harisankar95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harisankar95/subscriptions",
"organizations_url": "https://api.github.com/users/harisankar95/orgs",
"repos_url": "https://api.github.com/users/harisankar95/repos",
"events_url": "https://api.github.com/users/harisankar95/events{/privacy}",
"received_events_url": "https://api.github.com/users/harisankar95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-09T17:46:42 | 2024-01-11T14:52:14 | 2024-01-11T14:52:14 | CONTRIBUTOR | null | # What does this PR do?
- Preprocessor can now accept segmentations maps as well and performs augmentations inline to input images.
- Tests added for preprocessing segmentation masks.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28420/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28420",
"html_url": "https://github.com/huggingface/transformers/pull/28420",
"diff_url": "https://github.com/huggingface/transformers/pull/28420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28420.patch",
"merged_at": "2024-01-11T14:52:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28419/comments | https://api.github.com/repos/huggingface/transformers/issues/28419/events | https://github.com/huggingface/transformers/pull/28419 | 2,072,771,476 | PR_kwDOCUB6oc5jmXeF | 28,419 | Correctly resolve trust_remote_code=None for AutoTokenizer | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-09T16:52:37 | 2024-01-11T15:12:10 | 2024-01-11T15:12:09 | MEMBER | null | If `trust_remote_code` is left at the default `None` in `AutoTokenizer.from_pretrained()`, an error is thrown if you try to load a repo that requires remote code, rather than the correct dialog box being displayed. This PR fixes that issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28419/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28419",
"html_url": "https://github.com/huggingface/transformers/pull/28419",
"diff_url": "https://github.com/huggingface/transformers/pull/28419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28419.patch",
"merged_at": "2024-01-11T15:12:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28418/comments | https://api.github.com/repos/huggingface/transformers/issues/28418/events | https://github.com/huggingface/transformers/pull/28418 | 2,072,765,682 | PR_kwDOCUB6oc5jmWNX | 28,418 | [i18n-fr] Translate accelerate tutorial to French | {
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-09T16:49:17 | 2024-01-09T19:03:50 | null | CONTRIBUTOR | null | # What does this PR do?
Translates the `accelerate.md` file of the documentation to French.
Part of #21456
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
French speaking contributors.
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28418/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28418",
"html_url": "https://github.com/huggingface/transformers/pull/28418",
"diff_url": "https://github.com/huggingface/transformers/pull/28418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28418.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28417/comments | https://api.github.com/repos/huggingface/transformers/issues/28417/events | https://github.com/huggingface/transformers/pull/28417 | 2,072,687,874 | PR_kwDOCUB6oc5jmE_k | 28,417 | Bump fonttools from 4.31.1 to 4.43.0 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410... | closed | false | null | [] | null | 0 | 2024-01-09T16:09:44 | 2024-01-10T10:22:45 | 2024-01-10T10:22:44 | CONTRIBUTOR | null | Bumps [fonttools](https://github.com/fonttools/fonttools) from 4.31.1 to 4.43.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/fonttools/fonttools/releases">fonttools's releases</a>.</em></p>
<blockquote>
<h2>4.43.0</h2>
<ul>
<li>[subset] Set up lxml <code>XMLParser(resolve_entities=False)</code> when parsing OT-SVG documents to prevent XML External Entity (XXE) attacks (9f61271dc): <a href="https://codeql.github.com/codeql-query-help/python/py-xxe/">https://codeql.github.com/codeql-query-help/python/py-xxe/</a></li>
<li>[varLib.iup] Added workaround for a Cython bug in <code>iup_delta_optimize</code> that was leading to IUP tolerance being incorrectly initialised, resulting in sub-optimal deltas (60126435d, <a href="https://redirect.github.com/cython/cython/issues/5732">cython/cython#5732</a>).</li>
<li>[varLib] Added new command-line entry point <code>fonttools varLib.avar</code> to add an <code>avar</code> table to an existing VF from axes mappings in a .designspace file (0a3360e52).</li>
<li>[instancer] Fixed bug whereby no longer used variation regions were not correctly pruned after VarData optimization (<a href="https://redirect.github.com/fonttools/fonttools/issues/3268">#3268</a>).</li>
<li>Added support for Python 3.12 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a>).</li>
</ul>
<h2>4.42.1</h2>
<ul>
<li>[t1Lib] Fixed several Type 1 issues (<a href="https://redirect.github.com/fonttools/fonttools/issues/3238">#3238</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3240">#3240</a>).</li>
<li>[otBase/packer] Allow sharing tables reached by different offset sizes (<a href="https://redirect.github.com/fonttools/fonttools/issues/3241">#3241</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3236">#3236</a>, 457f11c2).</li>
<li>[varLib/merger] Fix Cursive attachment merging error when all anchors are NULL (<a href="https://redirect.github.com/fonttools/fonttools/issues/3248">#3248</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3247">#3247</a>).</li>
<li>[ttLib] Fixed warning when calling <code>addMultilingualName</code> and <code>ttFont</code> parameter was not passed on to <code>findMultilingualName</code> (<a href="https://redirect.github.com/fonttools/fonttools/issues/3253">#3253</a>).</li>
</ul>
<h2>4.42.0</h2>
<ul>
<li>[varLib] Use sentinel value 0xFFFF to mark a glyph advance in hmtx/vmtx as non participating, allowing sparse masters to contain glyphs for variation purposes other than {H,V}VAR (<a href="https://redirect.github.com/fonttools/fonttools/issues/3235">#3235</a>).</li>
<li>[varLib/cff] Treat empty glyphs in non-default masters as missing, thus not participating in CFF2 delta computation, similarly to how varLib already treats them for gvar (<a href="https://redirect.github.com/fonttools/fonttools/issues/3234">#3234</a>).</li>
<li>Added varLib.avarPlanner script to deduce 'correct' avar v1 axis mappings based on glyph average weights (<a href="https://redirect.github.com/fonttools/fonttools/issues/3223">#3223</a>).</li>
</ul>
<h2>4.41.1</h2>
<ul>
<li>[subset] Fixed perf regression in v4.41.0 by making <code>NameRecordVisitor</code> only visit tables that do contain nameID references (<a href="https://redirect.github.com/fonttools/fonttools/issues/3213">#3213</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3214">#3214</a>).</li>
<li>[varLib.instancer] Support instancing fonts containing null ConditionSet offsets in FeatureVariationRecords (<a href="https://redirect.github.com/fonttools/fonttools/issues/3211">#3211</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3212">#3212</a>).</li>
<li>[statisticsPen] Report font glyph-average weight/width and font-wide slant.</li>
<li>[fontBuilder] Fixed head.created date incorrectly set to 0 instead of the current timestamp, regression introduced in v4.40.0 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3210">#3210</a>).</li>
<li>[varLib.merger] Support sparse <code>CursivePos</code> masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3209">#3209</a>).</li>
</ul>
<h2>4.41.0</h2>
<ul>
<li>[fontBuilder] Fixed bug in setupOS2 with default panose attribute incorrectly being set to a dict instead of a Panose object (<a href="https://redirect.github.com/fonttools/fonttools/issues/3201">#3201</a>).</li>
<li>[name] Added method to <code>removeUnusedNameRecords</code> in the user range (<a href="https://redirect.github.com/fonttools/fonttools/issues/3185">#3185</a>).</li>
<li>[varLib.instancer] Fixed issue with L4 instancing (moving default) (<a href="https://redirect.github.com/fonttools/fonttools/issues/3179">#3179</a>).</li>
<li>[cffLib] Use latin1 so we can roundtrip non-ASCII in {Full,Font,Family}Name (<a href="https://redirect.github.com/fonttools/fonttools/issues/3202">#3202</a>).</li>
<li>[designspaceLib] Mark <!-- raw HTML omitted --> as optional in docs (as it is in the code).</li>
<li>[glyf-1] Fixed drawPoints() bug whereby last cubic segment becomes quadratic (<a href="https://redirect.github.com/fonttools/fonttools/issues/3189">#3189</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3190">#3190</a>).</li>
<li>[fontBuilder] Propagate the 'hidden' flag to the fvar Axis instance (<a href="https://redirect.github.com/fonttools/fonttools/issues/3184">#3184</a>).</li>
<li>[fontBuilder] Update setupAvar() to also support avar 2, fixing <code>_add_avar()</code> call site (<a href="https://redirect.github.com/fonttools/fonttools/issues/3183">#3183</a>).</li>
<li>Added new <code>voltLib.voltToFea</code> submodule (originally Tiro Typeworks' "Volto") for converting VOLT OpenType Layout sources to FEA format (<a href="https://redirect.github.com/fonttools/fonttools/issues/3164">#3164</a>).</li>
</ul>
<h2>4.40.0</h2>
<ul>
<li>Published native binary wheels to PyPI for all the python minor versions and platform and architectures currently supported that would benefit from this. They will include precompiled Cython-accelerated modules (e.g. cu2qu) without requiring to compile them from source. The pure-python wheel and source distribution will continue to be published as always (pip will automatically chose them when no binary wheel is available for the given platform, e.g. pypy). Use <code>pip install --no-binary=fonttools fonttools</code> to expliclity request pip to install from the pure-python source.</li>
<li>[designspaceLib|varLib] Add initial support for specifying axis mappings and build <code>avar2</code> table from those (<a href="https://redirect.github.com/fonttools/fonttools/issues/3123">#3123</a>).</li>
<li>[feaLib] Support variable ligature caret position (<a href="https://redirect.github.com/fonttools/fonttools/issues/3130">#3130</a>).</li>
<li>[varLib|glyf] Added option to --drop-implied-oncurves; test for impliable oncurve points either before or after rounding (<a href="https://redirect.github.com/fonttools/fonttools/issues/3146">#3146</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3147">#3147</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3155">#3155</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3156">#3156</a>).</li>
<li>[TTGlyphPointPen] Don't error with empty contours, simply ignore them (<a href="https://redirect.github.com/fonttools/fonttools/issues/3145">#3145</a>).</li>
<li>[sfnt] Fixed str vs bytes remnant of py3 transition in code dealing with de/compiling WOFF metadata (<a href="https://redirect.github.com/fonttools/fonttools/issues/3129">#3129</a>).</li>
<li>[instancer-solver] Fixed bug when moving default instance with sparse masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3139">#3139</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3140">#3140</a>).</li>
<li>[feaLib] Simplify variable scalars that don’t vary (<a href="https://redirect.github.com/fonttools/fonttools/issues/3132">#3132</a>).</li>
<li>[pens] Added filter pen that explicitly emits closing line when lastPt != movePt (<a href="https://redirect.github.com/fonttools/fonttools/issues/3100">#3100</a>).</li>
<li>[varStore] Improve optimize algorithm and better document the algorithm (<a href="https://redirect.github.com/fonttools/fonttools/issues/3124">#3124</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3127">#3127</a>).<br />
Added <code>quantization</code> option (<a href="https://redirect.github.com/fonttools/fonttools/issues/3126">#3126</a>).</li>
<li>Added CI workflow config file for building native binary wheels (<a href="https://redirect.github.com/fonttools/fonttools/issues/3121">#3121</a>).</li>
<li>[fontBuilder] Added glyphDataFormat=0 option; raise error when glyphs contain cubic outlines but glyphDataFormat was not explicitly set to 1 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3113">#3113</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3119">#3119</a>).</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/fonttools/fonttools/blob/main/NEWS.rst">fonttools's changelog</a>.</em></p>
<blockquote>
<h2>4.43.0 (released 2023-09-29)</h2>
<ul>
<li>[subset] Set up lxml <code>XMLParser(resolve_entities=False)</code> when parsing OT-SVG documents
to prevent XML External Entity (XXE) attacks (9f61271dc):
<a href="https://codeql.github.com/codeql-query-help/python/py-xxe/">https://codeql.github.com/codeql-query-help/python/py-xxe/</a></li>
<li>[varLib.iup] Added workaround for a Cython bug in <code>iup_delta_optimize</code> that was
leading to IUP tolerance being incorrectly initialised, resulting in sub-optimal deltas
(60126435d, <a href="https://redirect.github.com/cython/cython/issues/5732">cython/cython#5732</a>).</li>
<li>[varLib] Added new command-line entry point <code>fonttools varLib.avar</code> to add an
<code>avar</code> table to an existing VF from axes mappings in a .designspace file (0a3360e52).</li>
<li>[instancer] Fixed bug whereby no longer used variation regions were not correctly pruned
after VarData optimization (<a href="https://redirect.github.com/fonttools/fonttools/issues/3268">#3268</a>).</li>
<li>Added support for Python 3.12 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a>).</li>
</ul>
<h2>4.42.1 (released 2023-08-20)</h2>
<ul>
<li>[t1Lib] Fixed several Type 1 issues (<a href="https://redirect.github.com/fonttools/fonttools/issues/3238">#3238</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3240">#3240</a>).</li>
<li>[otBase/packer] Allow sharing tables reached by different offset sizes (<a href="https://redirect.github.com/fonttools/fonttools/issues/3241">#3241</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3236">#3236</a>).</li>
<li>[varLib/merger] Fix Cursive attachment merging error when all anchors are NULL (<a href="https://redirect.github.com/fonttools/fonttools/issues/3248">#3248</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3247">#3247</a>).</li>
<li>[ttLib] Fixed warning when calling <code>addMultilingualName</code> and <code>ttFont</code> parameter was not
passed on to <code>findMultilingualName</code> (<a href="https://redirect.github.com/fonttools/fonttools/issues/3253">#3253</a>).</li>
</ul>
<h2>4.42.0 (released 2023-08-02)</h2>
<ul>
<li>[varLib] Use sentinel value 0xFFFF to mark a glyph advance in hmtx/vmtx as non
participating, allowing sparse masters to contain glyphs for variation purposes other
than {H,V}VAR (<a href="https://redirect.github.com/fonttools/fonttools/issues/3235">#3235</a>).</li>
<li>[varLib/cff] Treat empty glyphs in non-default masters as missing, thus not participating
in CFF2 delta computation, similarly to how varLib already treats them for gvar (<a href="https://redirect.github.com/fonttools/fonttools/issues/3234">#3234</a>).</li>
<li>Added varLib.avarPlanner script to deduce 'correct' avar v1 axis mappings based on
glyph average weights (<a href="https://redirect.github.com/fonttools/fonttools/issues/3223">#3223</a>).</li>
</ul>
<h2>4.41.1 (released 2023-07-21)</h2>
<ul>
<li>[subset] Fixed perf regression in v4.41.0 by making <code>NameRecordVisitor</code> only visit
tables that do contain nameID references (<a href="https://redirect.github.com/fonttools/fonttools/issues/3213">#3213</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3214">#3214</a>).</li>
<li>[varLib.instancer] Support instancing fonts containing null ConditionSet offsets in
FeatureVariationRecords (<a href="https://redirect.github.com/fonttools/fonttools/issues/3211">#3211</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3212">#3212</a>).</li>
<li>[statisticsPen] Report font glyph-average weight/width and font-wide slant.</li>
<li>[fontBuilder] Fixed head.created date incorrectly set to 0 instead of the current
timestamp, regression introduced in v4.40.0 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3210">#3210</a>).</li>
<li>[varLib.merger] Support sparse <code>CursivePos</code> masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3209">#3209</a>).</li>
</ul>
<h2>4.41.0 (released 2023-07-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/fonttools/fonttools/commit/145460e77f772767608e677737f2d00147152620"><code>145460e</code></a> Release 4.43.0</li>
<li><a href="https://github.com/fonttools/fonttools/commit/64f3fd83d901f2da882cca5efc38ebdfd2718ab7"><code>64f3fd8</code></a> Update changelog [skip ci]</li>
<li><a href="https://github.com/fonttools/fonttools/commit/7aea49e88cf997b3e0bdfd7f6330a16578c9ce5a"><code>7aea49e</code></a> Merge pull request <a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a> from hugovk/main</li>
<li><a href="https://github.com/fonttools/fonttools/commit/4470c4401d628f273d79bf4bd0df42f1217fcc53"><code>4470c44</code></a> Bump requirements.txt to support Python 3.12</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0c87cbad6e21c0f2511cdfc70ad7e1a572e84017"><code>0c87cba</code></a> Bump scipy for Python 3.12 support</li>
<li><a href="https://github.com/fonttools/fonttools/commit/eda6fa5cfbdfaf1d54cf391ed9c86b72288882a2"><code>eda6fa5</code></a> Add support for Python 3.12</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0e033b0e5cd771f520bbf7346dedb7751677bd24"><code>0e033b0</code></a> Bump reportlab from 3.6.12 to 3.6.13 in /Doc</li>
<li><a href="https://github.com/fonttools/fonttools/commit/60126435dff31b489a9ea1a8dcc260101e5b1c20"><code>6012643</code></a> [iup] Work around cython bug</li>
<li><a href="https://github.com/fonttools/fonttools/commit/b14268a23c5a0dd644d2479064e4018a6b084b23"><code>b14268a</code></a> [iup] Remove copy/pasta</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0a3360e52727cdefce2e9b28286b074faf99033c"><code>0a3360e</code></a> [varLib.avar] New module to compile avar from .designspace file</li>
<li>Additional commits viewable in <a href="https://github.com/fonttools/fonttools/compare/4.31.1...4.43.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28417/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28417",
"html_url": "https://github.com/huggingface/transformers/pull/28417",
"diff_url": "https://github.com/huggingface/transformers/pull/28417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28417.patch",
"merged_at": "2024-01-10T10:22:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28416/comments | https://api.github.com/repos/huggingface/transformers/issues/28416/events | https://github.com/huggingface/transformers/issues/28416 | 2,072,663,931 | I_kwDOCUB6oc57ild7 | 28,416 | Loading Phi 1.5 model from the hub gives warning that model is uninitialized | {
"login": "gabeorlanski",
"id": 18234433,
"node_id": "MDQ6VXNlcjE4MjM0NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/18234433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabeorlanski",
"html_url": "https://github.com/gabeorlanski",
"followers_url": "https://api.github.com/users/gabeorlanski/followers",
"following_url": "https://api.github.com/users/gabeorlanski/following{/other_user}",
"gists_url": "https://api.github.com/users/gabeorlanski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabeorlanski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabeorlanski/subscriptions",
"organizations_url": "https://api.github.com/users/gabeorlanski/orgs",
"repos_url": "https://api.github.com/users/gabeorlanski/repos",
"events_url": "https://api.github.com/users/gabeorlanski/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabeorlanski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 11 | 2024-01-09T15:56:47 | 2024-01-12T15:18:50 | 2024-01-12T15:18:50 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```Python
from transformers import PhiForCausalLM
model = PhiForCausalLM.from_pretrained("microsoft/phi-1.5")
```
This happens for all phi models and has only started happening recently. It worked fine a few days ago. I have tried this both in my conda environment, a fresh one, and using the official huggingface docker image. It happened in all three.
Additionally if i run:
```Python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code =True)
```
The same error occurs.
If I run:
```Python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", use_safetensors= True, trust_remote_code =True)
```
The model hangs and never loads.
### Expected behavior
The model should load with the initialized weights from the hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28416/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28415/comments | https://api.github.com/repos/huggingface/transformers/issues/28415/events | https://github.com/huggingface/transformers/issues/28415 | 2,072,579,042 | I_kwDOCUB6oc57iQvi | 28,415 | Can not load model after finetuning PHI2 model | {
"login": "zhangmiaosen2000",
"id": 59921236,
"node_id": "MDQ6VXNlcjU5OTIxMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/59921236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangmiaosen2000",
"html_url": "https://github.com/zhangmiaosen2000",
"followers_url": "https://api.github.com/users/zhangmiaosen2000/followers",
"following_url": "https://api.github.com/users/zhangmiaosen2000/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangmiaosen2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangmiaosen2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangmiaosen2000/subscriptions",
"organizations_url": "https://api.github.com/users/zhangmiaosen2000/orgs",
"repos_url": "https://api.github.com/users/zhangmiaosen2000/repos",
"events_url": "https://api.github.com/users/zhangmiaosen2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangmiaosen2000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-09T15:15:59 | 2024-01-27T20:38:10 | null | NONE | null | ### System Info
Here is my code in finetuning PHI-2 model
```
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, trust_remote_code=True)
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/phi-2",
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right",
use_fast=True,
trust_remote_code=True
)
if tokenizer.pad_token is None:
smart_tokenizer_and_embedding_resize(
special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN),
tokenizer=tokenizer,
model=model,
)
```
where:
```
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
```
Then I successfully finetuned model and save to xxx/checkpoint-500.
But when I try to use this code to load finetuned model:
```
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code = True
)
```
I alway got this:
```
Traceback (most recent call last):
File "xxx.py", line 272, in <module>
main()
File "xxx.py", line 151, in main
tokenizer, model = get_model(base_model=args.model, page_attention=args.vllm)
File "xxx.py", line 80, in get_model
model = AutoModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3480, in from_pretrained
) = cls._load_pretrained_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3870, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 743, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([50296, 2560]) in "weight" (which has shape torch.Size([50304, 2560])), this look incorrect.
```
**Can you help me with that?**
Note that, the saved config.json is:
```
{
"_name_or_path": "microsoft/phi-2",
"activation_function": "gelu_new",
"architectures": [
"PhiForCausalLM"
],
"attn_pdrop": 0.0,
"auto_map": {
"AutoConfig": "microsoft/phi-2--configuration_phi.PhiConfig",
"AutoModelForCausalLM": "microsoft/phi-2--modeling_phi.PhiForCausalLM"
},
"embd_pdrop": 0.0,
"flash_attn": true,
"flash_rotary": true,
"fused_dense": true,
"img_processor": null,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "phi-msft",
"n_embd": 2560,
"n_head": 32,
"n_head_kv": null,
"n_inner": null,
"n_layer": 32,
"n_positions": 2048,
"resid_pdrop": 0.1,
"rotary_dim": 32,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.36.2",
"use_cache": false,
"vocab_size": 50296
}
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
,
### Expected behavior
, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28415/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28414/comments | https://api.github.com/repos/huggingface/transformers/issues/28414/events | https://github.com/huggingface/transformers/pull/28414 | 2,072,545,115 | PR_kwDOCUB6oc5jlleD | 28,414 | Fix mismatching loading in from_pretrained with/without accelerate | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2024-01-09T14:58:34 | 2024-01-23T10:21:49 | 2024-01-16T13:29:51 | COLLABORATOR | null | It appears that passing a `device_map` may result in some parameters being not contiguous in the loaded model, which is not the case when loading a model without a `device_map`.
See for instance:
```python
from transformers import OwlViTProcessor, OwlViTForObjectDetection
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")
print("is contig (no device_map):", model.owlvit.visual_projection.weight.is_contiguous())
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16", device_map="auto")
print("is contig (device_map):", model.owlvit.visual_projection.weight.is_contiguous())
```
printing
```
is contig (no device_map): True
is contig (device_map): False
```
A byproduct of this bug is that when a model is loaded with a `device_map`, then using
```python
model.save_pretrained("owlvit", safe_serialization=True)
```
results in
```
Traceback (most recent call last):
File "<tmp 2>", line 13, in <module>
model.save_pretrained("owlvit", save_config=True, safe_serialization=True)
File "/home/fxmarty/hf_internship/transformers/src/transformers/modeling_utils.py", line 2406, in save_pretrained
safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 281, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 481, in _flatten
"data": _tobytes(v, k),
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 396, in _tobytes
raise ValueError(
ValueError: You are trying to save a non contiguous tensor: `owlvit.visual_projection.weight` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.
```
This bug stems from the fact that in the case we don't use accelerate, the weights are loaded through [`_load_from_state_dict`](https://github.com/huggingface/transformers/blob/357971ec367fecb9951ae3218feafece5f61416a/src/transformers/modeling_utils.py#L600), which makes use of [`param.copy_(input_param)`](https://github.com/pytorch/pytorch/blob/db79ceb110f6646523019a59bbd7b838f43d4a86/torch/nn/modules/module.py#L2040C29-L2040C29) which preserves the contiguity of the module's parameters. On the contrary, Accelerate's [`set_module_tensor_to_device`](https://github.com/huggingface/accelerate/blob/3969731ce827b088fcc56ea790935cdece12f800/src/accelerate/utils/modeling.py#L370) appears to override the existing value [simply by the one from the state dict](https://github.com/huggingface/transformers/blob/357971ec367fecb9951ae3218feafece5f61416a/src/transformers/modeling_utils.py#L716-L758), which may be not contiguous.
This PR fixes the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28414/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28414",
"html_url": "https://github.com/huggingface/transformers/pull/28414",
"diff_url": "https://github.com/huggingface/transformers/pull/28414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28414.patch",
"merged_at": "2024-01-16T13:29:51"
} |
https://api.github.com/repos/huggingface/transformers/issues/28413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28413/comments | https://api.github.com/repos/huggingface/transformers/issues/28413/events | https://github.com/huggingface/transformers/issues/28413 | 2,072,410,625 | I_kwDOCUB6oc57hnoB | 28,413 | CausalLMOutputWithPast does not output hidden states | {
"login": "Tiziano41",
"id": 156085316,
"node_id": "U_kgDOCU2sRA",
"avatar_url": "https://avatars.githubusercontent.com/u/156085316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tiziano41",
"html_url": "https://github.com/Tiziano41",
"followers_url": "https://api.github.com/users/Tiziano41/followers",
"following_url": "https://api.github.com/users/Tiziano41/following{/other_user}",
"gists_url": "https://api.github.com/users/Tiziano41/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tiziano41/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tiziano41/subscriptions",
"organizations_url": "https://api.github.com/users/Tiziano41/orgs",
"repos_url": "https://api.github.com/users/Tiziano41/repos",
"events_url": "https://api.github.com/users/Tiziano41/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tiziano41/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-09T13:51:36 | 2024-01-09T14:49:21 | null | NONE | null | ### System Info
sentence-transformers 2.2.2
transformers 4.31.0
numpy 1.19
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is the code I'm using :
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''
Name the writings of Dante Alighieri.
Answer:
''', return_tensors="pt", return_attention_mask=False)
embeddings = model(**inputs, output_hidden_states = True)

### Expected behavior
As you can see in the image, the hidden_states attribute is None, while I would expect the intermediate layers representations.
I tried to specify the output_hidden_states = True, both in model instantiation and on inference but neither works.
Is there any parameter that I'm missing ? That's all I have seen on the available documentation.
Thank you in advance for you support. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28413/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28412/comments | https://api.github.com/repos/huggingface/transformers/issues/28412/events | https://github.com/huggingface/transformers/issues/28412 | 2,072,311,882 | I_kwDOCUB6oc57hPhK | 28,412 | TGI Support for Mixtral AWQ | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-09T12:47:32 | 2024-01-09T19:15:19 | 2024-01-09T19:15:19 | NONE | null | ### Feature request
Currently, TGI seems to be able to load Mixtral AWQ models. However, the responses returned are blank.
### Motivation
It's possible to inference a Mixtral model from 16 bit weights (incl. with eetq if desired) but the downloading of weights is buggy and slow see [here](https://github.com/huggingface/text-generation-inference/issues/1413). Also, it would be nice to be able to just download 4-bit weights.
### Your contribution
The AWQ weights are good because they work in vLLM. So, there seems to be a bug on the TGI implementation (although it's unclear whether Mixtral is loading as a fluke because it doesn't seem to be explicitly supported for AWQ). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28412/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28412/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28411/comments | https://api.github.com/repos/huggingface/transformers/issues/28411/events | https://github.com/huggingface/transformers/issues/28411 | 2,072,297,197 | I_kwDOCUB6oc57hL7t | 28,411 | Indicies element out of bounds from inclusive range | {
"login": "nogifeet",
"id": 72322393,
"node_id": "MDQ6VXNlcjcyMzIyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/72322393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nogifeet",
"html_url": "https://github.com/nogifeet",
"followers_url": "https://api.github.com/users/nogifeet/followers",
"following_url": "https://api.github.com/users/nogifeet/following{/other_user}",
"gists_url": "https://api.github.com/users/nogifeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nogifeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nogifeet/subscriptions",
"organizations_url": "https://api.github.com/users/nogifeet/orgs",
"repos_url": "https://api.github.com/users/nogifeet/repos",
"events_url": "https://api.github.com/users/nogifeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/nogifeet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-09T12:38:33 | 2024-01-10T10:22:46 | 2024-01-10T10:22:46 | NONE | null | ### System Info
Hello, we are using the TR-OCR model exported to Onnx. We notice a problem with the large checkpoints for both printed and handwritten; when we run inference using the onnxruntime java library.
Dataset: IAM handwritten (Lines)
Different behaviours are observed on CPU and GPU:
CPU: (we might get an error like below)
Status Message: Non-zero status code returned while running the Gather node. Name:'Gather_346' Status Message:
indices element out of data bounds, idx=514 must be within the inclusive range [-514,513]
at ai.onnxruntime.OrtSession.run(Native Method)
at ai.onnxruntime.OrtSession.run(OrtSession.java:301)
at ai.onnxruntime.OrtSession.run(OrtSession.java:242)
GPU: We notice that the end token is not generated and the decoder keeps repeating the tokens after a point.
This is the main problem, usually, the Gather_346 and Gather_320 operators fail and throw data bounds error.
We have also noticed different behaviour when we turn caching on/off. Note we don't face this problem on the base or small checkpoints but only on the "large" checkpoints. Looking to understand whether this is an onnxruntime issue or hf, please let me know.
A similar issue was raised in the onnxruntime page: https://github.com/microsoft/onnxruntime/issues/2080
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Export the large checkpoints of TR-OCR.
2. Run a simple example from the IAM dataset image attached.
3. Use this image 
4. Don't use any max_length limit and you will notice that the end token is not generated and the tokens are repeated.
5. Current Output: The edges of the transoms should be bevelled to be edges to the edges of the
### Expected behavior
Current Output: The edges of the transoms should be bevelled to be edges to the edges of the
Expected Output: The edges of the transoms should be bevelled to | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28411/timeline | null | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28410/comments | https://api.github.com/repos/huggingface/transformers/issues/28410/events | https://github.com/huggingface/transformers/issues/28410 | 2,072,135,788 | I_kwDOCUB6oc57gkhs | 28,410 | Query on Llama2 Tokenizer Behavior During Causal LM Instruction Tuning | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-09T10:59:27 | 2024-01-16T03:38:57 | 2024-01-16T03:38:57 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-1050-azure-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b")
prompt = """Because of his beliefs he knew that if he were to kill he would suffer what?
A. shoot
B. commit crime
C. damnation
D. charming
E. think twice
The answer is: """
label = prompt + "C"
print(tokenizer(prompt,return_length=True).length)
print(tokenizer(label, return_length=True).length)
```

### Expected behavior
Hello,
I am currently in the process of preparing data for Causal Language Model instruction tuning. As the loss calculation is solely based on the labels, it is imperative for me to accurately determine the length of the prompts in order to exclude them from the loss computation.
Upon examining the code, I observed that the Llama2 tokenizer seems to append a special empty string token at the end of each prompt. This addition results in an unchanged token length irrespective of whether a label is present or not.
Could you please clarify if this behavior is intentional or if it represents a bug? If this is indeed the expected functionality, could you suggest an alternative method or best practice for achieving the correct exclusion of prompt tokens in the loss calculation?
I appreciate your assistance on this matter.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28410/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28409/comments | https://api.github.com/repos/huggingface/transformers/issues/28409/events | https://github.com/huggingface/transformers/issues/28409 | 2,072,073,395 | I_kwDOCUB6oc57gVSz | 28,409 | Add FlashAttention-2 support for Mask2Former model | {
"login": "DanieleVeri",
"id": 20779433,
"node_id": "MDQ6VXNlcjIwNzc5NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/20779433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanieleVeri",
"html_url": "https://github.com/DanieleVeri",
"followers_url": "https://api.github.com/users/DanieleVeri/followers",
"following_url": "https://api.github.com/users/DanieleVeri/following{/other_user}",
"gists_url": "https://api.github.com/users/DanieleVeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanieleVeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanieleVeri/subscriptions",
"organizations_url": "https://api.github.com/users/DanieleVeri/orgs",
"repos_url": "https://api.github.com/users/DanieleVeri/repos",
"events_url": "https://api.github.com/users/DanieleVeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanieleVeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 0 | 2024-01-09T10:24:09 | 2024-01-10T10:05:03 | null | NONE | null | ### Feature request
Is it possible to add FlashAttention-2 support to the Mask2Former model?
### Motivation
Since it is already availble for ViT, it would be great to have it on Mask2Former too.
Maybe the additional input masks to the decoder layer represent a major challenge?
### Your contribution
I could help by testing the implementations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28409/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28408/comments | https://api.github.com/repos/huggingface/transformers/issues/28408/events | https://github.com/huggingface/transformers/pull/28408 | 2,072,032,542 | PR_kwDOCUB6oc5jj1Pc | 28,408 | Remove `task` arg in `load_dataset` in image-classification example | {
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-09T10:01:51 | 2024-01-16T08:57:47 | 2024-01-16T07:04:08 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `task` argument is now deprecated in the `datasets.load_dataset` method. This PR removes it and adds the renaming logic needed to deal with datasets like Cifar10 (the `task` attribute of datasets used to help with that).
Internal discussion here: https://huggingface.slack.com/archives/C034N0A7H09/p1704447848692889
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28408/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28408",
"html_url": "https://github.com/huggingface/transformers/pull/28408",
"diff_url": "https://github.com/huggingface/transformers/pull/28408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28408.patch",
"merged_at": "2024-01-16T07:04:08"
} |
https://api.github.com/repos/huggingface/transformers/issues/28407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28407/comments | https://api.github.com/repos/huggingface/transformers/issues/28407/events | https://github.com/huggingface/transformers/pull/28407 | 2,071,993,801 | PR_kwDOCUB6oc5jjs9c | 28,407 | [Whisper] Fix slow test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-09T09:40:34 | 2024-01-10T21:35:37 | 2024-01-10T21:35:36 | MEMBER | null | # What does this PR do?
This PR fixes a slow test for Whisper that doesn't work anymore because it's using a deprecated dataset. For this slow test to pass, we will need the CI to have access to a security token (cc @ydshieh) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28407/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28407",
"html_url": "https://github.com/huggingface/transformers/pull/28407",
"diff_url": "https://github.com/huggingface/transformers/pull/28407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28407.patch",
"merged_at": "2024-01-10T21:35:36"
} |
https://api.github.com/repos/huggingface/transformers/issues/28406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28406/comments | https://api.github.com/repos/huggingface/transformers/issues/28406/events | https://github.com/huggingface/transformers/pull/28406 | 2,071,931,777 | PR_kwDOCUB6oc5jjfiz | 28,406 | Fix auxiliary loss related code in transformers | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2024-01-09T09:08:22 | 2024-01-19T14:12:02 | 2024-01-19T14:12:02 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts https://github.com/huggingface/transformers/pull/28354
There are several things that I summarized
1. deta/table_transformer has no issue
2. conditional_detr had slight issue but it was simple
3. yolos turns out that they don't have auxiliary output for their result. So I removed related changes and did not add aux_loss test. Check [out](https://github.com/hustvl/YOLOS/blob/5717fc29d727dab84ad585c56457b4de1225eddc/models/detector.py#L53) they don't have 'aux_output'.
4. maskformer issue related to configuration file
let me know if other models need this auxiliary loss test! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28406/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28406/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28406",
"html_url": "https://github.com/huggingface/transformers/pull/28406",
"diff_url": "https://github.com/huggingface/transformers/pull/28406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28406.patch",
"merged_at": "2024-01-19T14:12:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28405/comments | https://api.github.com/repos/huggingface/transformers/issues/28405/events | https://github.com/huggingface/transformers/pull/28405 | 2,071,834,084 | PR_kwDOCUB6oc5jjKkH | 28,405 | [`core`/ FEAT] Add the possibility to push custom tags using `PreTrainedModel` itself | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-09T08:04:09 | 2024-01-15T14:02:45 | 2024-01-15T13:48:08 | CONTRIBUTOR | null | # What does this PR do?
From an idea we discussed internally. This PR introduces a new API to inject custom tags in the model card
From a community perspective it will make easier to push custom tags as for now it is only limited to trainers.
Below I demonstrate the simplicity of the API
```python
from transformers import AutoModelForCausalLM
model_name = "HuggingFaceM4/tiny-random-LlamaForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_name)
model.add_model_tags(["tag-test"])
model.push_to_hub("llama-tagged")
```
cc @osanseviero @Narsil @julien-c
Note with the current design each time a user calls `push_to_hub`, it will create a model card template if no model card is present on the hub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28405/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28405/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28405",
"html_url": "https://github.com/huggingface/transformers/pull/28405",
"diff_url": "https://github.com/huggingface/transformers/pull/28405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28405.patch",
"merged_at": "2024-01-15T13:48:08"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.