url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28203/comments | https://api.github.com/repos/huggingface/transformers/issues/28203/events | https://github.com/huggingface/transformers/pull/28203 | 2,053,810,910 | PR_kwDOCUB6oc5ipUb4 | 28,203 | fix FA2 when using quantization | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T11:48:25 | 2023-12-28T09:05:28 | 2023-12-26T03:06:41 | CONTRIBUTOR | null | # What does this PR do?
1. When I use QLoRA + Flash Attention with bf16, I get the following warning about casting to `float16`, which is incorrect, as it should cast to bf16:
```bash
The input hidden states seems to be silently casted in float32, this might be related to the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in torch.float16.
```
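For context, a minimal sketch of the idea behind such a fix (illustrative only — `pick_downcast_dtype` is not the actual transformers code; it assumes `_pre_quantization_dtype` is the attribute recording the model's compute dtype before quantization):

```python
import torch

def pick_downcast_dtype(config, fallback_weight_dtype):
    # Hypothetical sketch: choose the dtype to cast hidden states back to
    # before calling Flash Attention, instead of hardcoding torch.float16.
    if torch.is_autocast_enabled():
        return torch.get_autocast_gpu_dtype()
    # Quantized layers hold int8 weights, so the original compute dtype has to
    # be read from a config attribute saved before quantization (bf16 here).
    if hasattr(config, "_pre_quantization_dtype"):
        return config._pre_quantization_dtype
    return fallback_weight_dtype
```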
This PR resolves this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28203/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28203",
"html_url": "https://github.com/huggingface/transformers/pull/28203",
"diff_url": "https://github.com/huggingface/transformers/pull/28203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28203.patch",
"merged_at": "2023-12-26T03:06:41"
} |
https://api.github.com/repos/huggingface/transformers/issues/28202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28202/comments | https://api.github.com/repos/huggingface/transformers/issues/28202/events | https://github.com/huggingface/transformers/pull/28202 | 2,053,675,463 | PR_kwDOCUB6oc5io2r6 | 28,202 | Fix the check of models supporting FA/SDPA not run | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T09:57:52 | 2023-12-22T11:56:12 | 2023-12-22T11:56:11 | COLLABORATOR | null | # What does this PR do?
The original check (implemented as test methods) in `tests/utils/test_doc_samples.py` won't run (and didn't run, as in #28133), because that file is not impacted by the modeling files in terms of import relations.
Those 2 checks don't need `torch` at all and could be done in the first stage of checks (`check_repository_consistency`). | {
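A minimal, hypothetical sketch of what such a doc-support check does (names and messages are illustrative, not the actual utility):

```python
def check_support_list(doc_text, supported_models):
    # Hypothetical repo-consistency style check: every model that supports a
    # feature (FA2/SDPA) must also appear in the documentation page.
    missing = [name for name in supported_models if name.lower() not in doc_text.lower()]
    if missing:
        raise ValueError(
            f"{missing} should be listed in the documentation but are not. "
            "Please update the documentation."
        )
```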
"url": "https://api.github.com/repos/huggingface/transformers/issues/28202/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28202/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28202",
"html_url": "https://github.com/huggingface/transformers/pull/28202",
"diff_url": "https://github.com/huggingface/transformers/pull/28202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28202.patch",
"merged_at": "2023-12-22T11:56:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/28201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28201/comments | https://api.github.com/repos/huggingface/transformers/issues/28201/events | https://github.com/huggingface/transformers/pull/28201 | 2,053,671,852 | PR_kwDOCUB6oc5io15K | 28,201 | [BUG] BarkEosPrioritizerLogitsProcessor eos_token_id use list, tensor size mismatch | {
"login": "inkinworld",
"id": 12553724,
"node_id": "MDQ6VXNlcjEyNTUzNzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12553724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inkinworld",
"html_url": "https://github.com/inkinworld",
"followers_url": "https://api.github.com/users/inkinworld/followers",
"following_url": "https://api.github.com/users/inkinworld/following{/other_user}",
"gists_url": "https://api.github.com/users/inkinworld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inkinworld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inkinworld/subscriptions",
"organizations_url": "https://api.github.com/users/inkinworld/orgs",
"repos_url": "https://api.github.com/users/inkinworld/repos",
"events_url": "https://api.github.com/users/inkinworld/events{/privacy}",
"received_events_url": "https://api.github.com/users/inkinworld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T09:55:04 | 2024-01-10T11:08:10 | 2024-01-10T10:46:49 | CONTRIBUTOR | null | # What does this PR do?
Fixes a bug in `transformers.generation.logits_process.BarkEosPrioritizerLogitsProcessor`:
when `eos_token_id` is passed as a list, a tensor size mismatch occurs,
as in the test case below:
```python
def test_early_stop_processor_multi_eos(self):
input_ids = None
eos_token_id = [2, 3]
min_eos_p = 0.1 ## some small float
scores = self._get_uniform_logits(2, 4)
scores[0][eos_token_id] = -6 ## less than log(min_eos_p)
esp = BarkEosPrioritizerLogitsProcessor(eos_token_id=eos_token_id, min_eos_p=min_eos_p)
actual_scores = esp(input_ids, scores)
expected_scores_list = [
scores[0].tolist(),
[float("-inf"), float("-inf"), scores[0][0], scores[0][0]],
]
self.assertListEqual(actual_scores.tolist(), expected_scores_list)
```
which raises this exception:
```
self = <transformers.generation.logits_process.BarkEosPrioritizerLogitsProcessor object at 0x12f1e0220>
input_ids = None
scores = tensor([[ 0.2500, 0.2500, -6.0000, -6.0000],
[ 0.2500, 0.2500, 0.2500, 0.2500]])
@add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
if self.min_eos_p:
probs = torch.nn.functional.softmax(scores.float(), dim=-1)
# create scores full of -inf except for the eos_token_id
early_stop_scores = torch.ones_like(scores) * -float("inf")
early_stop_scores[:, self.eos_token_id] = scores[:, self.eos_token_id]
do_early_stop = probs[:, self.eos_token_id] > self.min_eos_p
# do_early_stop = torch.any(do_early_stop, dim=1, keepdim=True)
> scores = torch.where(do_early_stop, early_stop_scores, scores)
E RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 1
src/transformers/generation/logits_process.py:2142: RuntimeError
```
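The commented-out line in the traceback above points at the fix: collapse the per-EOS-token boolean mask to a single column per batch row before broadcasting. A standalone, hypothetical sketch of the corrected logic (not the actual class):

```python
import torch

def apply_eos_prioritizer(scores, eos_token_id, min_eos_p):
    # Hypothetical standalone version of the processor's __call__ logic.
    probs = torch.nn.functional.softmax(scores.float(), dim=-1)
    early_stop_scores = torch.ones_like(scores) * -float("inf")
    early_stop_scores[:, eos_token_id] = scores[:, eos_token_id]
    do_early_stop = probs[:, eos_token_id] > min_eos_p  # (batch, num_eos_tokens)
    # Reduce to (batch, 1) so torch.where broadcasts against (batch, vocab_size)
    # instead of raising the size-mismatch RuntimeError shown above.
    do_early_stop = torch.any(do_early_stop, dim=1, keepdim=True)
    return torch.where(do_early_stop, early_stop_scores, scores)
```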
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28201",
"html_url": "https://github.com/huggingface/transformers/pull/28201",
"diff_url": "https://github.com/huggingface/transformers/pull/28201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28201.patch",
"merged_at": "2024-01-10T10:46:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28200/comments | https://api.github.com/repos/huggingface/transformers/issues/28200/events | https://github.com/huggingface/transformers/issues/28200 | 2,053,668,491 | I_kwDOCUB6oc56aH6L | 28,200 | RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback): cannot import name 'is_flash_attn_greater_or_equal_2_10' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py) | {
"login": "Jaykumaran",
"id": 60032500,
"node_id": "MDQ6VXNlcjYwMDMyNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/60032500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jaykumaran",
"html_url": "https://github.com/Jaykumaran",
"followers_url": "https://api.github.com/users/Jaykumaran/followers",
"following_url": "https://api.github.com/users/Jaykumaran/following{/other_user}",
"gists_url": "https://api.github.com/users/Jaykumaran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jaykumaran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jaykumaran/subscriptions",
"organizations_url": "https://api.github.com/users/Jaykumaran/orgs",
"repos_url": "https://api.github.com/users/Jaykumaran/repos",
"events_url": "https://api.github.com/users/Jaykumaran/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jaykumaran/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-22T09:52:23 | 2023-12-29T13:54:32 | 2023-12-29T13:54:32 | NONE | null | ### System Info
# !pip install trl transformers==4.35.2 accelerate peft==0.6.2 -Uqqq
!pip install trl transformers accelerate peft==0.6.2 -Uqqq
!pip install datasets bitsandbytes einops wandb -Uqqq
!pip install flash-attn --no-build-isolation -Uqq
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
# !pip install trl transformers==4.35.2 accelerate peft==0.6.2 -Uqqq
!pip install trl transformers accelerate peft==0.6.2 -Uqqq
!pip install datasets bitsandbytes einops wandb -Uqqq
!pip install flash-attn --no-build-isolation -Uqq
MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True, # load model in 4-bit precision
bnb_4bit_quant_type="nf4", # pre-trained model should be quantized in 4-bit NF format
bnb_4bit_use_double_quant=True, # Using double quantization as mentioned in QLoRA paper
bnb_4bit_compute_dtype=torch.bfloat16,
# During computation, pre-trained model should be loaded in BF16 format
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
quantization_config = bnb_config,
device_map = 0,
use_cache=True,
trust_remote_code=True,
use_flash_attention_2 = True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
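A quick, hypothetical diagnostic (not from the issue) before loading the model: this kind of `ImportError` usually means the `transformers` version actually importable in the environment predates the helper, e.g. a stale install that a later `pip install -U` did not replace in the running kernel.

```python
# Hypothetical sanity check of the installed package versions.
import importlib.metadata as metadata

for pkg in ("transformers", "flash-attn"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```

In notebooks, restarting the runtime after upgrading is usually needed so the new version is actually imported.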
### Expected behavior
When trying to load the model, it results in the following error.
RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback): cannot import name 'is_flash_attn_greater_or_equal_2_10' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28200/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28199/comments | https://api.github.com/repos/huggingface/transformers/issues/28199/events | https://github.com/huggingface/transformers/pull/28199 | 2,053,656,288 | PR_kwDOCUB6oc5ioyeT | 28,199 | Autocast | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-22T09:43:07 | 2024-01-22T04:24:56 | 2024-01-22T04:24:39 | CONTRIBUTOR | null | Enable autocast in the pipeline.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28199/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28199",
"html_url": "https://github.com/huggingface/transformers/pull/28199",
"diff_url": "https://github.com/huggingface/transformers/pull/28199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28199.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28198/comments | https://api.github.com/repos/huggingface/transformers/issues/28198/events | https://github.com/huggingface/transformers/pull/28198 | 2,053,617,836 | PR_kwDOCUB6oc5ioqFd | 28,198 | Update `docs/source/en/perf_infer_gpu_one.md` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-22T09:11:52 | 2023-12-22T09:40:23 | 2023-12-22T09:40:22 | COLLABORATOR | null | # What does this PR do?
Update `docs/source/en/perf_infer_gpu_one.md` to fix
> FAILED tests/utils/test_doc_samples.py::TestDocLists::test_sdpa_support_list - ValueError: mixtral should be in listed in the SDPA documentation but is not. Please update the documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28198/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28198",
"html_url": "https://github.com/huggingface/transformers/pull/28198",
"diff_url": "https://github.com/huggingface/transformers/pull/28198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28198.patch",
"merged_at": "2023-12-22T09:40:22"
} |
https://api.github.com/repos/huggingface/transformers/issues/28197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28197/comments | https://api.github.com/repos/huggingface/transformers/issues/28197/events | https://github.com/huggingface/transformers/issues/28197 | 2,053,583,464 | I_kwDOCUB6oc56ZzJo | 28,197 | LLaVA: index error when computing extended_attention_mask | {
"login": "TideDra",
"id": 92413813,
"node_id": "U_kgDOBYIfdQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92413813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TideDra",
"html_url": "https://github.com/TideDra",
"followers_url": "https://api.github.com/users/TideDra/followers",
"following_url": "https://api.github.com/users/TideDra/following{/other_user}",
"gists_url": "https://api.github.com/users/TideDra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TideDra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TideDra/subscriptions",
"organizations_url": "https://api.github.com/users/TideDra/orgs",
"repos_url": "https://api.github.com/users/TideDra/repos",
"events_url": "https://api.github.com/users/TideDra/events{/privacy}",
"received_events_url": "https://api.github.com/users/TideDra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T08:43:25 | 2023-12-22T16:47:40 | 2023-12-22T16:47:40 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-1042-azure-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@younesbelkad
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm evaluating llava-1.5-7b-hf on MM-Vet using batch generation with `use_cache=True`; here is my script:
```python
import json
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer
from torch.utils.data import Dataset,DataLoader
import torch
import os
from tqdm import tqdm
DATA_ROOT = "/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet"
processor = AutoProcessor.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf")
processor.tokenizer.pad_token = processor.tokenizer.bos_token
class MMVetDataset(Dataset):
def __init__(self,data_root) -> None:
super().__init__()
self.data_root = data_root
with open(os.path.join(data_root, "mm-vet.json"), "r") as f:
data = json.load(f)
self.data = [(k,v) for k,v in data.items()]
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return {'id':self.data[index][0],
'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']),
'question':"USER: <image>\n"+self.data[index][1]['question']+" ASSISTANT:"}
def collator(batch):
ids = [b['id'] for b in batch]
questions = [b['question'] for b in batch]
images = [Image.open(b['image']) for b in batch]
inputs = processor(text=questions,images=images,return_tensors="pt",padding=True)
return ids,inputs
model = LlavaForConditionalGeneration.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf",torch_dtype=torch.float16)
model.to('cuda')
#model.to(torch.float16)
dataset = MMVetDataset(DATA_ROOT)
dataloader = DataLoader(dataset,batch_size=16,collate_fn=collator)
results = {}
bar = tqdm(total=len(dataset))
model.eval()
with torch.inference_mode():
for ids, inputs in dataloader:
inputs.to('cuda')
inputs['pixel_values'] = inputs['pixel_values'].half()
outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True)
input_token_len = inputs['input_ids'].shape[1]
responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, clean_up_tokenization_spaces=False)
for id,res in zip(ids,responses):
results[id]=res
bar.update(len(responses))
with open('mmvet_result.json','w') as f:
json.dump(results,f,indent=4)
```
However, it occasionally raises `RuntimeError: CUDA error: device-side assert triggered` when computing `extended_attention_mask`. This error happens randomly during the evaluation: sometimes in the third batch, sometimes in the last batch, etc.
I printed some shapes in the `model.forward()` method, and I think `extended_attention_mask` is wrongly computed.
```python
def forward(
self,
input_ids: torch.LongTensor = None,
pixel_values: torch.FloatTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
vision_feature_layer: Optional[int] = None,
vision_feature_select_strategy: Optional[str] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, LlavaCausalLMOutputWithPast]:
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
vision_feature_layer = (
vision_feature_layer if vision_feature_layer is not None else self.config.vision_feature_layer
)
vision_feature_select_strategy = (
vision_feature_select_strategy
if vision_feature_select_strategy is not None
else self.config.vision_feature_select_strategy
)
if inputs_embeds is None:
# 1. Extra the input embeddings
inputs_embeds = self.get_input_embeddings()(input_ids)
# 2. Merge text and images
if pixel_values is not None and input_ids.shape[1] != 1:
image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)
# this is not memory efficient at all (output_hidden_states=True) will save all the hidden stated.
selected_image_feature = image_outputs.hidden_states[vision_feature_layer]
if vision_feature_select_strategy == "default":
selected_image_feature = selected_image_feature[:, 1:]
elif vision_feature_select_strategy == "full":
selected_image_feature = selected_image_feature
else:
raise ValueError(
f"Unexpected select feature strategy: {self.config.vision_feature_select_strategy}"
)
image_features = self.multi_modal_projector(selected_image_feature)
inputs_embeds, attention_mask, position_ids = self._merge_input_ids_with_image_features(
image_features, inputs_embeds, input_ids, attention_mask, position_ids
)
if labels is None:
labels = torch.full_like(attention_mask, self.config.ignore_index).to(torch.long)
else:
# In case input_ids.shape[1] == 1 & pixel_values==None & past_key_values != None, we are in the case of
# generation with cache
if past_key_values is not None and pixel_values is not None and input_ids.shape[1] == 1:
# Retrieve the first layer to inspect the logits and mask out the hidden states
# that are set to 0
first_layer_past_key_value = past_key_values[0][0][:, 0, :, 0]
batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)
# Get the target length
target_seqlen = first_layer_past_key_value.shape[-1] + 1
extended_attention_mask = torch.ones(
(attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),
dtype=attention_mask.dtype,
device=attention_mask.device,
)
# Zero-out the places where we don't need to attend
print(extended_attention_mask.shape) # torch.Size([16,575])
print(len(past_key_values)) # 32
print(len(past_key_values[0])) # 2
print(past_key_values[0][0].shape) # torch.Size([16,32,688,128])
print(attention_mask.shape) # torch.Size(16,114)
print(batch_index) #tensor([2],device='cuda:0')
print(non_attended_tokens) #tensor([687],device='cuda:0')
try:
extended_attention_mask[batch_index, non_attended_tokens] = 0
except:
pdb.set_trace()
attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)
position_ids = torch.sum(attention_mask, dim=1).unsqueeze(-1) - 1
####Following code is ignored
```
Apparently, `extended_attention_mask` has a constant sequence length of 575 (`target_seqlen - attention_mask.shape[1]`), which I think is roughly the number of image tokens, while the index of `non_attended_tokens` may exceed this length, which then raises the CUDA error. Maybe the sequence length of `extended_attention_mask` should just be `target_seqlen`, so it wouldn't need to be concatenated with `attention_mask`? Honestly, I don't understand the code here; it's really weird.
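To make the mismatch concrete, here is a minimal reconstruction using the shapes printed above (all values are taken from this run's logs; nothing here is the actual model code):

```python
import torch

# Shapes reported above: past key/value length 688, current text length 114.
target_seqlen = 689                      # first_layer_past_key_value.shape[-1] + 1
attention_mask = torch.ones(16, 114)     # (batch, current_text_len)
extended = torch.ones(16, target_seqlen - attention_mask.shape[1])  # (16, 575)

# But the non-attended indices are computed over the *full* cached sequence
# (length 688), so they can point far beyond the 575 columns of `extended`.
batch_index = torch.tensor([2])
non_attended_tokens = torch.tensor([687])

# 687 >= 575: indexing `extended[batch_index, non_attended_tokens]` is out of
# bounds, which surfaces as a device-side assert on CUDA (IndexError on CPU).
assert int(non_attended_tokens.max()) >= extended.shape[1]
```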
### Expected behavior
The generation should always work fine when using cache. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28197/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28196/comments | https://api.github.com/repos/huggingface/transformers/issues/28196/events | https://github.com/huggingface/transformers/pull/28196 | 2,053,577,492 | PR_kwDOCUB6oc5iohTk | 28,196 | Add CogVLM (cleaner) | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-12-22T08:38:03 | 2024-01-23T15:13:16 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds CogVLM, in a cleaner way. Follow-up of #27718. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28196/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28196",
"html_url": "https://github.com/huggingface/transformers/pull/28196",
"diff_url": "https://github.com/huggingface/transformers/pull/28196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28196.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28195/comments | https://api.github.com/repos/huggingface/transformers/issues/28195/events | https://github.com/huggingface/transformers/pull/28195 | 2,053,571,469 | PR_kwDOCUB6oc5iogAr | 28,195 | Drop `feature_extractor_type` when loading an image processor file | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T08:32:27 | 2023-12-22T12:19:05 | 2023-12-22T12:19:04 | COLLABORATOR | null | # What does this PR do?
A `preprocessor_config.json` created in the old days, like [this one](https://huggingface.co/openai/clip-vit-large-patch14/blob/main/preprocessor_config.json), contains, for example, `"feature_extractor_type": "CLIPFeatureExtractor"`. If that file belongs to an image processor, this value is added as an attribute of the object during loading (in `__init__`). This is already misleading.
If we save the image processor again, the file will contain `feature_extractor_type` and `image_processor_type`, which is even more confusing. See the example below.
**This PR pops this attribute during loading, so it won't end up as an attribute of the loaded object.**
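Until a repo's config file is regenerated, the stale key can also be dropped by hand from a saved `preprocessor_config.json` — a stdlib-only sketch mirroring the pop performed during loading (the config contents below are illustrative, not a full config):

```python
import json
import os
import tempfile

# Illustrative old-style config carrying the legacy key (not a full config).
config = {
    "feature_extractor_type": "CLIPFeatureExtractor",
    "image_processor_type": "CLIPImageProcessor",
    "do_resize": True,
}

path = os.path.join(tempfile.mkdtemp(), "preprocessor_config.json")
with open(path, "w") as f:
    json.dump(config, f)

# Pop the legacy key so only image_processor_type remains.
with open(path) as f:
    cfg = json.load(f)
cfg.pop("feature_extractor_type", None)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(path) as f:
    print(sorted(json.load(f)))  # ['do_resize', 'image_processor_type']
```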
### To reproduce
```python
from transformers import CLIPImageProcessor
import json
p = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
print(getattr(p, "feature_extractor_type", None))
print(getattr(p, "image_processor_type", None))
print("-" * 40)
p.save_pretrained("myclip")
p = CLIPImageProcessor.from_pretrained("myclip")
print(getattr(p, "feature_extractor_type", None))
print(getattr(p, "image_processor_type", None))
```
### Output
**before this PR**
```bash
CLIPFeatureExtractor
None
----------------------------------------
CLIPFeatureExtractor
CLIPImageProcessor
```
**after this PR**
```bash
None
None
----------------------------------------
None
CLIPImageProcessor
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28195/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28195",
"html_url": "https://github.com/huggingface/transformers/pull/28195",
"diff_url": "https://github.com/huggingface/transformers/pull/28195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28195.patch",
"merged_at": "2023-12-22T12:19:04"
} |
https://api.github.com/repos/huggingface/transformers/issues/28194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28194/comments | https://api.github.com/repos/huggingface/transformers/issues/28194/events | https://github.com/huggingface/transformers/issues/28194 | 2,053,555,305 | I_kwDOCUB6oc56ZsRp | 28,194 | Can you please provide the longformer version of the torch to tf file? | {
"login": "lsl200032",
"id": 109401083,
"node_id": "U_kgDOBoVT-w",
"avatar_url": "https://avatars.githubusercontent.com/u/109401083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsl200032",
"html_url": "https://github.com/lsl200032",
"followers_url": "https://api.github.com/users/lsl200032/followers",
"following_url": "https://api.github.com/users/lsl200032/following{/other_user}",
"gists_url": "https://api.github.com/users/lsl200032/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsl200032/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsl200032/subscriptions",
"organizations_url": "https://api.github.com/users/lsl200032/orgs",
"repos_url": "https://api.github.com/users/lsl200032/repos",
"events_url": "https://api.github.com/users/lsl200032/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsl200032/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-22T08:17:53 | 2024-01-31T08:03:08 | 2024-01-31T08:03:08 | NONE | null | ### Feature request
Can you please provide the longformer version of the torch to tf file?
### Motivation
Can you please provide the longformer version of the torch to tf file?
### Your contribution
Can you please provide the longformer version of the torch to tf file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28194/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28193/comments | https://api.github.com/repos/huggingface/transformers/issues/28193/events | https://github.com/huggingface/transformers/issues/28193 | 2,053,500,334 | I_kwDOCUB6oc56Ze2u | 28,193 | ValueError: Target module WQLinear_GEMM is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.- AWQ Quantisation Issues | {
"login": "Vasanth03",
"id": 59615743,
"node_id": "MDQ6VXNlcjU5NjE1NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/59615743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vasanth03",
"html_url": "https://github.com/Vasanth03",
"followers_url": "https://api.github.com/users/Vasanth03/followers",
"following_url": "https://api.github.com/users/Vasanth03/following{/other_user}",
"gists_url": "https://api.github.com/users/Vasanth03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vasanth03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vasanth03/subscriptions",
"organizations_url": "https://api.github.com/users/Vasanth03/orgs",
"repos_url": "https://api.github.com/users/Vasanth03/repos",
"events_url": "https://api.github.com/users/Vasanth03/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vasanth03/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-22T07:23:22 | 2023-12-22T11:33:40 | 2023-12-22T11:33:39 | NONE | null | Hi @casper-hansen
-> I am trying to train an AWQ-quantised model using the Hugging Face Trainer.
While using PEFT (LoRA adapter), the following error pops up.

-> This is the version that I have used !pip install -q -U https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
Any help is much appreciated. Thanks
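For context, the error comes from PEFT's module-replacement step, which dispatches on the target layer's type and rejects anything it doesn't know how to wrap. A minimal sketch of that check — all class names here are stand-ins, not the real PEFT code:

```python
class Linear:  # stand-in for torch.nn.Linear
    pass

class Conv1D:  # stand-in for transformers' Conv1D
    pass

class WQLinear_GEMM:  # stand-in for the AWQ quantised linear layer
    pass

SUPPORTED = (Linear, Conv1D)

def replace_with_lora(module):
    # PEFT-style dispatch: only known layer types can be wrapped.
    if not isinstance(module, SUPPORTED):
        raise ValueError(
            f"Target module {type(module).__name__} is not supported."
        )
    return module  # real code would return a LoRA-wrapped layer

try:
    replace_with_lora(WQLinear_GEMM())
except ValueError as e:
    print(e)  # Target module WQLinear_GEMM is not supported.
```

If a later PEFT release adds support for AWQ layers, upgrading PEFT (and a matching transformers) would bypass this branch — worth checking the PEFT changelog.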
_Originally posted by @Vasanth03 in https://github.com/huggingface/transformers/issues/27321#issuecomment-1867330086_
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28193/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28192/comments | https://api.github.com/repos/huggingface/transformers/issues/28192/events | https://github.com/huggingface/transformers/pull/28192 | 2,053,434,683 | PR_kwDOCUB6oc5ioCSy | 28,192 | don't initialize the output embeddings if we're going to tie them to input embeddings | {
"login": "tom-p-reichel",
"id": 43631024,
"node_id": "MDQ6VXNlcjQzNjMxMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tom-p-reichel",
"html_url": "https://github.com/tom-p-reichel",
"followers_url": "https://api.github.com/users/tom-p-reichel/followers",
"following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}",
"gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions",
"organizations_url": "https://api.github.com/users/tom-p-reichel/orgs",
"repos_url": "https://api.github.com/users/tom-p-reichel/repos",
"events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}",
"received_events_url": "https://api.github.com/users/tom-p-reichel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-12-22T06:12:37 | 2024-01-31T01:20:00 | 2024-01-31T01:19:18 | CONTRIBUTOR | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This small change marks the output embeddings for a model as initialized if we will be tying them to the input embeddings. Without this change, the output embeddings are usually randomly initialized every time affected models (models that tie the output embeddings to input embeddings and do not otherwise initialize the output embeddings) are loaded. This seems to be responsible for *multiple-second* startup delays in downstream tools, e.g. insanely-fast-whisper, because every time the Whisper model is loaded, a massive matrix is unnecessarily filled with uniformly random numbers before being replaced with another matrix.
Before and after applying this patch, the downstream tool insanely-fast-whisper transcribed a short audio file in 18 and 13 seconds respectively, a 5-second improvement. The patch does not seem to change the behavior of the tool: a test transcription of an hour of audio remains unchanged before and after the patch.
I suspect other applications using models that tie their input/output embeddings together will experience a small speedup in loading from this patch.
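The mechanism can be sketched without `transformers` at all: tying means the output projection reuses the input embedding object, so any separate initialization of the output matrix is wasted work. Below, `make_matrix` is a cheap stand-in for the expensive random init the patch skips, and the sizes are illustrative:

```python
VOCAB, DIM = 1000, 64  # illustrative sizes; real vocab/hidden dims are far larger

def make_matrix(rows, cols):
    # stand-in for an expensive random initialisation (e.g. filling with
    # uniformly random numbers)
    return [[0.0] * cols for _ in range(rows)]

class TiedModel:
    def __init__(self, tie_embeddings: bool):
        self.input_emb = make_matrix(VOCAB, DIM)
        if tie_embeddings:
            # tying: reuse the same object, so no second init is needed
            self.output_emb = self.input_emb
        else:
            self.output_emb = make_matrix(VOCAB, DIM)

m = TiedModel(tie_embeddings=True)
print(m.output_emb is m.input_emb)  # True: one matrix, initialised once
```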
I ran a portion of the transformers testing locally, which passed, but we'll see how the full test suite fares soon enough.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28192",
"html_url": "https://github.com/huggingface/transformers/pull/28192",
"diff_url": "https://github.com/huggingface/transformers/pull/28192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28192.patch",
"merged_at": "2024-01-31T01:19:18"
} |
https://api.github.com/repos/huggingface/transformers/issues/28191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28191/comments | https://api.github.com/repos/huggingface/transformers/issues/28191/events | https://github.com/huggingface/transformers/issues/28191 | 2,053,399,431 | I_kwDOCUB6oc56ZGOH | 28,191 | ImportError: Using the Trainer with PyTorch requires accelerate>=0.20.1 | {
"login": "Ompramod9921",
"id": 86967995,
"node_id": "MDQ6VXNlcjg2OTY3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/86967995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ompramod9921",
"html_url": "https://github.com/Ompramod9921",
"followers_url": "https://api.github.com/users/Ompramod9921/followers",
"following_url": "https://api.github.com/users/Ompramod9921/following{/other_user}",
"gists_url": "https://api.github.com/users/Ompramod9921/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ompramod9921/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ompramod9921/subscriptions",
"organizations_url": "https://api.github.com/users/Ompramod9921/orgs",
"repos_url": "https://api.github.com/users/Ompramod9921/repos",
"events_url": "https://api.github.com/users/Ompramod9921/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ompramod9921/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 7 | 2023-12-22T05:22:01 | 2024-01-15T04:38:56 | null | NONE | null | ### System Info
@muellerzr and @pacman100
I'm trying to use the Trainer with PyTorch in my Python project, but I'm encountering an ImportError stating that accelerate>=0.20.1 is required. Despite having installed the accelerate package, I'm still getting this error.
Here's the error message I'm seeing:
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`

I have tried both suggested solutions (pip install transformers[torch] and pip install accelerate -U), but the issue persists.
Could anyone please provide guidance on how to resolve this issue?
Thank you!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here's a minimal code snippet that reproduces the issue:
from transformers import TrainingArguments, Trainer
# Define the training arguments
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=10,
)
When running this code, I receive the following error:
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
Despite having installed the accelerate package, I continue to encounter this error. I have attempted to upgrade the accelerate package using pip install --upgrade accelerate, and cleared the pip cache using pip cache purge, but the issue remains unresolved.
The versions of the relevant packages I'm using are as follows:
import transformers
import accelerate
print(transformers.__version__)
print(accelerate.__version__)
Output:
4.12.5
0.21.0
As you can see, I'm using transformers version 4.12.5 and accelerate version 0.21.0, both of which should be compatible with each other.
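When the versions look right but the import check still fails, the usual culprit is that `pip` installed into a different environment than the one the notebook kernel runs. A stdlib-only check, run inside the same kernel:

```python
import importlib.util
import sys

# Which interpreter is this kernel actually running?
print(sys.executable)

# Where does each package resolve from (or does it resolve at all)?
for name in ("transformers", "accelerate"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "NOT FOUND")
```

If the paths point outside the kernel's environment, installing with `python -m pip install accelerate -U` using that exact interpreter usually fixes it; also restart the kernel after installing, since an already-running kernel won't pick up newly installed packages.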
### Expected behavior
Expected Behavior:
I expect the `Trainer` to work seamlessly with `PyTorch` without any import errors. Specifically, I expect the `accelerate` package to be correctly recognized by the `Trainer`, allowing me to run my code without encountering the `ImportError` stating that `accelerate>=0.20.1` is required.
The `accelerate` package is a key dependency for the `Trainer` to function properly, and despite having installed it, I continue to face this issue. I have tried both suggested solutions (`pip install transformers[torch]` and `pip install accelerate -U`) to no avail.
Therefore, I believe there might be a compatibility issue between the `Trainer` and the `accelerate` package, or perhaps an issue with my current Python environment setup. I would appreciate any guidance on how to troubleshoot and resolve this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28191/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28190/comments | https://api.github.com/repos/huggingface/transformers/issues/28190/events | https://github.com/huggingface/transformers/issues/28190 | 2,053,318,726 | I_kwDOCUB6oc56YyhG | 28,190 | torch.compile() silently fails when used on HuggingFace pipeline inference code | {
"login": "rosario-purple",
"id": 123594463,
"node_id": "U_kgDOB13m3w",
"avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosario-purple",
"html_url": "https://github.com/rosario-purple",
"followers_url": "https://api.github.com/users/rosario-purple/followers",
"following_url": "https://api.github.com/users/rosario-purple/following{/other_user}",
"gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions",
"organizations_url": "https://api.github.com/users/rosario-purple/orgs",
"repos_url": "https://api.github.com/users/rosario-purple/repos",
"events_url": "https://api.github.com/users/rosario-purple/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosario-purple/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in pro... | open | false | null | [] | null | 4 | 2023-12-22T03:19:51 | 2024-01-30T09:11:10 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: A100
### Who can help?
@Narsil @gante @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following Python code:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map=device,
    use_flash_attention_2=True,
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token_id = tokenizer.eos_token_id

model = torch.compile(model)

generation_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    batch_size=10,
)

batch_results = generation_pipeline(
    ["foo", "bar", "bin", "baz"],
    max_new_tokens=200,
    temperature=0.6,
    do_sample=True,
    repetition_penalty=1.05,
    num_return_sequences=20,
)
```
(in my case, MODEL_ID is set to `"Open-Orca/Mistral-7B-OpenOrca"`, which is a fine-tune of Mistral-7B, but any LLM should work)
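One behaviour worth noting when debugging this: `torch.compile` is lazy — the call itself only wraps the model, and the actual compilation is deferred to the first forward pass, so silence at wrap time is expected on its own. The pattern, sketched with stdlib only (hypothetical names, not the real dynamo machinery):

```python
def lazy_compile(fn):
    # wrap immediately, defer the expensive "compilation" to the first call
    state = {"compiled": False}

    def wrapper(*args, **kwargs):
        if not state["compiled"]:
            print("compiling on first call")
            state["compiled"] = True
        return fn(*args, **kwargs)

    return wrapper

f = lazy_compile(lambda x: x + 1)
print("wrapped, nothing compiled yet")
print(f(1))  # first call triggers the one-time "compilation", then returns 2
```

To check whether compilation really fires on the real model, setting `TORCH_LOGS="dynamo"` before running (available in recent torch builds) should make the compile steps visible during the first generation.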
### Expected behavior
torch.compile() should compile the model, print some compilation messages, and then cause inference/text generation to be run faster. Instead, torch.compile() appears to not run at all, no messages are printed, and it has no effect on inference/generation speed. There is no error message, it just silently doesn't compile, effectively acting as if the line `model = torch.compile(model)` doesn't exist. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28190/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28190/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28189/comments | https://api.github.com/repos/huggingface/transformers/issues/28189/events | https://github.com/huggingface/transformers/issues/28189 | 2,053,227,321 | I_kwDOCUB6oc56YcM5 | 28,189 | Text-to-speech data collator exhibits weird batching behavior with Seq2SeqTrainer | {
"login": "GinUTE",
"id": 91470404,
"node_id": "MDQ6VXNlcjkxNDcwNDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/91470404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GinUTE",
"html_url": "https://github.com/GinUTE",
"followers_url": "https://api.github.com/users/GinUTE/followers",
"following_url": "https://api.github.com/users/GinUTE/following{/other_user}",
"gists_url": "https://api.github.com/users/GinUTE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GinUTE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GinUTE/subscriptions",
"organizations_url": "https://api.github.com/users/GinUTE/orgs",
"repos_url": "https://api.github.com/users/GinUTE/repos",
"events_url": "https://api.github.com/users/GinUTE/events{/privacy}",
"received_events_url": "https://api.github.com/users/GinUTE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-12-22T01:01:43 | 2024-01-07T04:49:32 | 2023-12-28T20:10:31 | NONE | null | ### System Info
- transformers version: 4.37.0.dev0
- platform: Linux-6.1.58+-x86_64-with-glibc2.35 (Colaboratory free accelerated runtime)
- python version: 3.10.12
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am currently fine-tuning SpeechT5 on Vietnamese TTS. I followed the official fine-tuning guide [here](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ).
The only difference I made is that I replaced the tokenizer wrapped in SpeechT5Processor with my own Vietnamese SentencePiece character-level tokenizer. I made sure to add the same special tokens as the original tokenizer, and it is working as expected. I used the following code snippet:
```
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
tokenizer = SpeechT5Tokenizer("spm-char.model")
processor.tokenizer = tokenizer
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
model.resize_token_embeddings(new_num_tokens=len(tokenizer), pad_to_multiple_of=8)
```
The issue arises when I got to the training phase at `trainer.train()`. It throws the following error:
`Sizes of tensors must match except in dimension 2. Expected size 16 but got size 256 for tensor number 1 in the list.`
I found that the error changes according to batch size. Specifically, the second sentence of the error always reads:
`Expect size <batch size> but got size <batch size to the power of 2> for tensor number 1 in the list.`
Any batch size other than 1 throws such an error.
I made no change to the original data collator, here is the code snippet:
```
@dataclass
class TTSDataCollatorWithPadding:
    processor: Any

    def __call__(
        self, features: List[Dict[str, Union[List[int], torch.Tensor]]]
    ) -> Dict[str, torch.Tensor]:
        input_ids = [{"input_ids": feature["input_ids"]} for feature in features]
        label_features = [{"input_values": feature["labels"]} for feature in features]
        speaker_features = [feature["speaker_embeddings"] for feature in features]

        batch = processor.pad(
            input_ids=input_ids, labels=label_features, return_tensors="pt"
        )
        batch["labels"] = batch["labels"].masked_fill(
            batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100
        )
        del batch["decoder_attention_mask"]

        if model.config.reduction_factor > 1:
            target_lengths = torch.tensor(
                [len(feature["input_values"]) for feature in label_features]
            )
            target_lengths = target_lengths.new(
                [
                    length - length % model.config.reduction_factor
                    for length in target_lengths
                ]
            )
            max_length = max(target_lengths)
            batch["labels"] = batch["labels"][:, :max_length]

        batch["speaker_embeddings"] = torch.tensor(speaker_features)
        return batch


data_collator = TTSDataCollatorWithPadding(processor=processor)
```
I checked the batch returned by the data collator with 16 examples and it seems to check out:
```
{'input_ids': torch.Size([16, 188]),
'attention_mask': torch.Size([16, 188]),
'labels': torch.Size([16, 628, 80]),
'speaker_embeddings': torch.Size([16, 512])}
```
I suspect it must be something to do with the DataLoader, or something else obvious that I just cannot wrap my head around. Any help is appreciated.
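As a debugging hint, the `batch_size` vs `batch_size**2` pattern in the error usually means some tensor ends up combined once per (example, example) pair instead of once per example — a toy stdlib illustration of how 16 becomes 256, not the actual SpeechT5 code path:

```python
batch = list(range(16))  # batch size 16

# correct: one item per example
per_example = [(x,) for x in batch]

# buggy: an accidental pairing of every example with every example
pairwise = [(x, y) for x in batch for y in batch]

print(len(per_example), len(pairwise))  # 16 256
```

So it may be worth printing the shapes the model sees right before the failing concatenation (e.g. hidden states and attention masks inside the encoder) rather than only the collator output, which here already looks consistent.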
### Expected behavior
The fine-tuning should proceed as per usual. I fine-tuned SpeechT5 on Vietnamese TTS once before but not with a custom tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28189/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28188/comments | https://api.github.com/repos/huggingface/transformers/issues/28188/events | https://github.com/huggingface/transformers/issues/28188 | 2,052,983,589 | I_kwDOCUB6oc56Xgsl | 28,188 | RuntimeError: FlashAttention only supports Ampere GPUs or newer. | {
"login": "bilalghanem",
"id": 47889448,
"node_id": "MDQ6VXNlcjQ3ODg5NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/47889448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilalghanem",
"html_url": "https://github.com/bilalghanem",
"followers_url": "https://api.github.com/users/bilalghanem/followers",
"following_url": "https://api.github.com/users/bilalghanem/following{/other_user}",
"gists_url": "https://api.github.com/users/bilalghanem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilalghanem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilalghanem/subscriptions",
"organizations_url": "https://api.github.com/users/bilalghanem/orgs",
"repos_url": "https://api.github.com/users/bilalghanem/repos",
"events_url": "https://api.github.com/users/bilalghanem/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilalghanem/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-12-21T19:53:52 | 2024-01-23T21:07:52 | 2024-01-08T08:33:22 | NONE | null | ### System Info
I am trying to run the following code:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Configs
device = "cuda:7"
model_name = "openchat/openchat_3.5"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map=device,
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
```
I can load the model completely fine, but when I want to generate, I get this error:
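For reference, the FlashAttention-2 kernels target compute capability 8.0 and newer (Ampere onwards); on the machine itself, `torch.cuda.get_device_capability(device)` reports the number directly. A stdlib-only lookup for a few common cards (capability values from NVIDIA's published compute-capability table):

```python
# (major, minor) compute capability per card
CAPABILITY = {
    "V100": (7, 0),
    "T4": (7, 5),
    "RTX 2080": (7, 5),
    "A100": (8, 0),
    "RTX 3090": (8, 6),
    "H100": (9, 0),
}

def supports_flash_attention_2(gpu: str) -> bool:
    # FlashAttention-2 requires compute capability >= 8.0
    return CAPABILITY[gpu] >= (8, 0)

print([gpu for gpu in CAPABILITY if supports_flash_attention_2(gpu)])
# ['A100', 'RTX 3090', 'H100']
```

This also matches the symptom here: loading succeeds (no kernels run yet) and only generation fails. On pre-Ampere cards, switching to `attn_implementation="sdpa"` (or dropping the flag) should avoid the FlashAttention kernels.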
> ---------------------------------------------------------------------------
> RuntimeError Traceback (most recent call last)
> Cell In[3], [line 76](vscode-notebook-cell:?execution_count=3&line=76)
> [74](vscode-notebook-cell:?execution_count=3&line=74) model_input_text = template.format(start, html_, end)
> [75](vscode-notebook-cell:?execution_count=3&line=75) model_inputs = tokenizer([model_input_text], return_tensors="pt", padding=False).to(device)
> ---> [76](vscode-notebook-cell:?execution_count=3&line=76) generated_ids = model.generate(**model_inputs, do_sample=True, top_p=1.0, temperature=0.8, top_k=50, max_new_tokens=1024)
> [77](vscode-notebook-cell:?execution_count=3&line=77) model_outputs_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
> [78](vscode-notebook-cell:?execution_count=3&line=78) print(model_outputs_text[model_input_text.rindex("GPT4 Correct Assistant:")+10:])
>
> File [~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115), in context_decorator.<locals>.decorate_context(*args, **kwargs)
> [112](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:112) @functools.wraps(func)
> [113](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:113) def decorate_context(*args, **kwargs):
> [114](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:114) with ctx_factory():
> --> [115](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115) return func(*args, **kwargs)
>
> File [~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
> [1756](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1756) input_ids, model_kwargs = self._expand_inputs_for_generation(
> [1757](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1757) input_ids=input_ids,
> [1758](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1758) expand_size=generation_config.num_return_sequences,
> [1759](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1759) is_encoder_decoder=self.config.is_encoder_decoder,
> [1760](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1760) **model_kwargs,
> [1761](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1761) )
> [1763](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1763) # 13. run sample
> -> [1764](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764) return self.sample(
> [1765](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1765) input_ids,
> ...
> [58](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:58) None,
> [59](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:59) )
> [60](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:60) return out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state
>
> RuntimeError: FlashAttention only supports Ampere GPUs or newer.
I am working on Ubuntu 20.04 with NVIDIA Quadro RTX 5000.
Cuda version: 12.2
NVIDIA-SMI 535.129.03
torch==2.1.2
transformers==4.36.2
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Loading an LLM with flash attention enabled.
### Expected behavior
Generate text. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28188/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28188/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28187/comments | https://api.github.com/repos/huggingface/transformers/issues/28187/events | https://github.com/huggingface/transformers/pull/28187 | 2,052,849,458 | PR_kwDOCUB6oc5imDdC | 28,187 | Update YOLOS slow test values | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-21T17:54:39 | 2023-12-22T11:39:57 | 2023-12-21T18:17:07 | COLLABORATOR | null | # What does this PR do?
Updates the test values for YOLOS after merging #27663, to resolve failing slow model tests on nightly.
Some small value changes are expected because of the change of output image size from the image processor.
As a sanity check, I plotted the output of the object detection model in the tests to visualise the differences and confirm they are small and still sensible:
**Old detections**

**New detections**

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28187/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28187",
"html_url": "https://github.com/huggingface/transformers/pull/28187",
"diff_url": "https://github.com/huggingface/transformers/pull/28187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28187.patch",
"merged_at": "2023-12-21T18:17:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28186/comments | https://api.github.com/repos/huggingface/transformers/issues/28186/events | https://github.com/huggingface/transformers/pull/28186 | 2,052,807,186 | PR_kwDOCUB6oc5il6Gk | 28,186 | Fix slow backbone tests - out_indices must match stage name ordering | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-21T17:21:18 | 2023-12-21T18:18:15 | 2023-12-21T18:16:51 | COLLABORATOR | null | # What does this PR do?
Fixes slow autobackbone tests failing on nightly after #27606
#27606 enforces that the out_indices and out_features are in the same order as the stage names. This ensures the backbone selects the correct features in its forward pass. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28186/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28186",
"html_url": "https://github.com/huggingface/transformers/pull/28186",
"diff_url": "https://github.com/huggingface/transformers/pull/28186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28186.patch",
"merged_at": "2023-12-21T18:16:51"
} |
https://api.github.com/repos/huggingface/transformers/issues/28185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28185/comments | https://api.github.com/repos/huggingface/transformers/issues/28185/events | https://github.com/huggingface/transformers/pull/28185 | 2,052,665,966 | PR_kwDOCUB6oc5ila8v | 28,185 | Cache: dynamic cache with cross attention and UMT5 `Cache` support | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-12-21T15:47:55 | 2024-01-30T00:41:32 | null | MEMBER | null | # What does this PR do?
#28065 was becoming messy due to all Bart "copied from" dependencies, so this PR is a tiny version of it.
This PR:
1. Introduces `DynamicCacheWithCrossAttention`, which expands `DynamicCache` [cache object equivalent to the previous `past_key_values` input/output] with the ability to hold a cross-attention cache. This design was intentional: most LLMs (and now even multimodal models) tend to be decoder-only, so this separation will keep the cache class for decoder-only models simpler. It also enables us to be more strict -- in #28065 I've caught an unintended cache deletion in Whisper thanks to the increased specificity!
2. Adds `Cache` support to `modeling_umt5.py`, which is a way to test whether `DynamicCacheWithCrossAttention` is equivalent to the previous cache. These changes are the equivalent of the modeling changes in #26681, but for encoder-decoder models.
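For readers, the separation in point 1 can be sketched in plain Python (the class and method names here are illustrative stand-ins, not the PR's actual implementation; plain lists stand in for key/value tensors):

```python
class DynamicCacheSketch:
    """Toy per-layer self-attention cache that grows at every decoding step."""

    def __init__(self):
        self.key_cache = []    # one list per layer
        self.value_cache = []

    def update(self, key, value, layer_idx):
        if layer_idx == len(self.key_cache):
            # First time we see this layer: start its cache.
            self.key_cache.append(list(key))
            self.value_cache.append(list(value))
        else:
            # Subsequent steps: append the new key/value states.
            self.key_cache[layer_idx] += list(key)
            self.value_cache[layer_idx] += list(value)
        return self.key_cache[layer_idx], self.value_cache[layer_idx]


class DynamicCacheWithCrossAttentionSketch(DynamicCacheSketch):
    """Adds a write-once cross-attention cache: encoder states never grow
    during decoding, so being strict here catches accidental overwrites."""

    def __init__(self):
        super().__init__()
        self.cross_key_cache = []
        self.cross_value_cache = []

    def update_cross_attention(self, key, value, layer_idx):
        # Cross-attention keys/values are computed once from the encoder output.
        assert layer_idx == len(self.cross_key_cache), "cross cache is write-once per layer"
        self.cross_key_cache.append(list(key))
        self.cross_value_cache.append(list(value))
```

Keeping the cross-attention fields in a subclass keeps the decoder-only cache class simple, which is the design choice described above.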
______________________________________
Local tests run:
1. `RUN_SLOW=1 py.test tests/models/umt5/test_modeling_umt5.py -vv` [Note: adds a test to ensure we keep the same results as in `main`] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28185/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28185",
"html_url": "https://github.com/huggingface/transformers/pull/28185",
"diff_url": "https://github.com/huggingface/transformers/pull/28185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28185.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28184/comments | https://api.github.com/repos/huggingface/transformers/issues/28184/events | https://github.com/huggingface/transformers/issues/28184 | 2,052,603,134 | I_kwDOCUB6oc56WDz- | 28,184 | LLaVa Left Padding Got Weird Results | {
"login": "SeungyounShin",
"id": 20262536,
"node_id": "MDQ6VXNlcjIwMjYyNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20262536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeungyounShin",
"html_url": "https://github.com/SeungyounShin",
"followers_url": "https://api.github.com/users/SeungyounShin/followers",
"following_url": "https://api.github.com/users/SeungyounShin/following{/other_user}",
"gists_url": "https://api.github.com/users/SeungyounShin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeungyounShin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeungyounShin/subscriptions",
"organizations_url": "https://api.github.com/users/SeungyounShin/orgs",
"repos_url": "https://api.github.com/users/SeungyounShin/repos",
"events_url": "https://api.github.com/users/SeungyounShin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeungyounShin/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2023-12-21T15:10:46 | 2024-01-11T13:40:21 | null | NONE | null | ### System Info
Reproduce :
```python
from PIL import Image
import requests
from transformers import AutoProcessor, LlavaForConditionalGeneration
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf").to(
"cuda"
)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
prompt1 = "<image>\n<image>\nUSER: What's the the difference of two images?\nASSISTANT:"
prompt2 = "<image>\nUSER: Describe the image.\nASSISTANT:"
prompt3 = "<image>\nUSER: Describe the image.\nASSISTANT:"
url1 = "https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D"
url2 = "https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D"
image1 = Image.open(requests.get(url1, stream=True).raw)
image2 = Image.open(requests.get(url2, stream=True).raw)
inputs = processor(
text=[prompt1, prompt2, prompt3],
images=[image1, image2, image1, image2],
return_tensors="pt",
padding=True,
)
for key in inputs:
inputs[key] = inputs[key].to("cuda")
print(key, inputs[key].shape)
# Generate
generate_ids = model.generate(**inputs, max_length=512)
outputs = processor.batch_decode(
generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(outputs)
```
This will outputs :
```Result
["\n \nUSER: What's the the difference of two images?\nASSISTANT: In the two images, the primary difference is the presence of a flower in the dog's mouth. In the first image, the dog is holding a flower in its mouth, while in the second image, the dog is not holding a flower. This subtle change in the scene highlights the dog's interaction with the flower, and it may evoke different emotions or interpretations depending on the viewer's perspective.", '\nUSER: Describe the image.\nASSISTANT: The dog is a \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', '\nUSER: Describe the image.\nASSISTANT: The \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nЪ schließ']
```
I checked that the images are placed correctly, but for batches 2 and 3 the input consists mostly of padding (False x 583):
[False x 583, False, True x 576 , False, False, False, False, False, False, False, False, False, False, False, False, False, False]
I guess LLaVa never saw this kind of prefix during training, which would result in weird behavior.
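For context, the mask above is just the pattern left padding produces when prompts of very different lengths are batched — a minimal pure-Python sketch (the `left_pad` helper is hypothetical, not the processor's actual code):

```python
def left_pad(sequences, pad_id=0):
    """Left-pad token-id sequences to equal length and build the matching
    attention mask (0/False = padding, 1/True = real token), the way a
    left-padding tokenizer would."""
    longest = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = longest - len(seq)
        input_ids.append([pad_id] * n_pad + list(seq))
        attention_mask.append([0] * n_pad + [1] * len(seq))
    return input_ids, attention_mask
```

The shorter single-image prompts end up with long runs of mask = 0 on the left, which matches the (False x 583) prefix seen in batches 2 and 3.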
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
stated above
### Expected behavior
skip | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28184/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28183/comments | https://api.github.com/repos/huggingface/transformers/issues/28183/events | https://github.com/huggingface/transformers/issues/28183 | 2,052,577,262 | I_kwDOCUB6oc56V9fu | 28,183 | Bug in new version transformers 4.34.0-4.36.2 | {
"login": "JAX627",
"id": 113168400,
"node_id": "U_kgDOBr7QEA",
"avatar_url": "https://avatars.githubusercontent.com/u/113168400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JAX627",
"html_url": "https://github.com/JAX627",
"followers_url": "https://api.github.com/users/JAX627/followers",
"following_url": "https://api.github.com/users/JAX627/following{/other_user}",
"gists_url": "https://api.github.com/users/JAX627/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JAX627/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JAX627/subscriptions",
"organizations_url": "https://api.github.com/users/JAX627/orgs",
"repos_url": "https://api.github.com/users/JAX627/repos",
"events_url": "https://api.github.com/users/JAX627/events{/privacy}",
"received_events_url": "https://api.github.com/users/JAX627/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 10 | 2023-12-21T14:55:21 | 2024-01-08T16:47:30 | null | NONE | null | ### System Info
ver: transformers 4.34.0-4.36.2
problem: when fine-tuning a chatglm3 model, finetune.py doesn't generate a pytorch_model.bin file in the output dir, as pointed out in https://github.com/THUDM/ChatGLM3/discussions/253#discussioncomment-7837093
it seems to be a problem in the modeling_utils.py file; it can be worked around with pip install transformers==4.33.0, so it seems higher transformers versions are not fully compatible with chatglm3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. download chatglm3-6b-32k model
2. pip install transformers 4.34.0-4.36.2
3. follow finetune steps in https://github.com/THUDM/ChatGLM3/tree/main/finetune_chatmodel_demo
4. after finish finetuning, there is no pytorch_model.bin file in output dir
5. pip install transformers==4.33.0
6. follow finetune steps in https://github.com/THUDM/ChatGLM3/tree/main/finetune_chatmodel_demo
7. after finish finetuning, there is the pytorch_model.bin file in output dir
### Expected behavior
solve the problem in the newer transformers versions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28183/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28182/comments | https://api.github.com/repos/huggingface/transformers/issues/28182/events | https://github.com/huggingface/transformers/pull/28182 | 2,052,427,965 | PR_kwDOCUB6oc5ikmUG | 28,182 | [`Docs`] Add 4-bit serialization docs | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-21T13:28:55 | 2023-12-22T09:18:39 | 2023-12-22T09:18:33 | CONTRIBUTOR | null | # What does this PR do?
Follow up work from: https://github.com/huggingface/transformers/pull/26037
Adds a few lines to the documentation about serializing 4-bit models on the Hub
cc @amyeroberts @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28182/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28182",
"html_url": "https://github.com/huggingface/transformers/pull/28182",
"diff_url": "https://github.com/huggingface/transformers/pull/28182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28182.patch",
"merged_at": "2023-12-22T09:18:33"
} |
https://api.github.com/repos/huggingface/transformers/issues/28181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28181/comments | https://api.github.com/repos/huggingface/transformers/issues/28181/events | https://github.com/huggingface/transformers/pull/28181 | 2,052,412,895 | PR_kwDOCUB6oc5iki_r | 28,181 | update the logger message with accordant weights_file_name | {
"login": "izyForever",
"id": 43177954,
"node_id": "MDQ6VXNlcjQzMTc3OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/43177954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izyForever",
"html_url": "https://github.com/izyForever",
"followers_url": "https://api.github.com/users/izyForever/followers",
"following_url": "https://api.github.com/users/izyForever/following{/other_user}",
"gists_url": "https://api.github.com/users/izyForever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izyForever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izyForever/subscriptions",
"organizations_url": "https://api.github.com/users/izyForever/orgs",
"repos_url": "https://api.github.com/users/izyForever/repos",
"events_url": "https://api.github.com/users/izyForever/events{/privacy}",
"received_events_url": "https://api.github.com/users/izyForever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-21T13:18:49 | 2023-12-22T15:05:26 | 2023-12-22T15:05:10 | CONTRIBUTOR | null | # What does this PR do?
Updates the logger message to use the corresponding `weights_file_name`.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/28076
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28181",
"html_url": "https://github.com/huggingface/transformers/pull/28181",
"diff_url": "https://github.com/huggingface/transformers/pull/28181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28181.patch",
"merged_at": "2023-12-22T15:05:10"
} |
https://api.github.com/repos/huggingface/transformers/issues/28180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28180/comments | https://api.github.com/repos/huggingface/transformers/issues/28180/events | https://github.com/huggingface/transformers/issues/28180 | 2,052,332,919 | I_kwDOCUB6oc56VB13 | 28,180 | Verify interpolation of image processors | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 5 | 2023-12-21T12:26:51 | 2024-01-26T05:30:07 | null | CONTRIBUTOR | null | ### Feature request
As pointed out in https://github.com/huggingface/transformers/pull/27742, some image processors might need a correction on the default interpolation method being used (resampling in Pillow). We could check this on a per-model basis.
### Motivation
Interpolation methods have a slight (often minimal) impact on performance. However it would be great to verify this on a per-model basis.
e.g. [ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L52)'s image processor defaults to BILINEAR but should use BICUBIC as seen [here](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py#L1062). We can update the default values of the image processors, but can't update the configs on the hub as this would break people's fine-tuned models.
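As a toy 1-D illustration of why the resampling method matters, here is a sketch of nearest vs. linear interpolation (pure-Python stand-ins, not Pillow's actual kernels):

```python
def resample_nearest(values, new_len):
    """1-D nearest-neighbour resampling (toy analogue of Pillow's NEAREST)."""
    scale = len(values) / new_len
    return [values[min(int((i + 0.5) * scale), len(values) - 1)]
            for i in range(new_len)]

def resample_linear(values, new_len):
    """1-D linear resampling (toy analogue of BILINEAR along one axis)."""
    if new_len == 1:
        return [values[0]]
    scale = (len(values) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        x = i * scale
        lo = int(x)
        hi = min(lo + 1, len(values) - 1)
        frac = x - lo
        # Blend the two neighbouring samples instead of picking one.
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out
```

The two methods can produce different pixel values for the same input, which is why a mismatch between a checkpoint's training-time resampling and the image processor's default can shift downstream metrics slightly.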
### Your contribution
I could work on this, but this seems like a good first issue for first contributors.
To be checked (by comparing against original implementation):
- [ ] ViT
- [ ] ConvNext
- [ ] DeiT
- [ ] DPT
- [ ] ... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28180/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28179/comments | https://api.github.com/repos/huggingface/transformers/issues/28179/events | https://github.com/huggingface/transformers/issues/28179 | 2,052,091,367 | I_kwDOCUB6oc56UG3n | 28,179 | How to fine tune facebook/esm2_t33_650M_UR50D | {
"login": "Admire7494",
"id": 98265794,
"node_id": "U_kgDOBdtqwg",
"avatar_url": "https://avatars.githubusercontent.com/u/98265794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Admire7494",
"html_url": "https://github.com/Admire7494",
"followers_url": "https://api.github.com/users/Admire7494/followers",
"following_url": "https://api.github.com/users/Admire7494/following{/other_user}",
"gists_url": "https://api.github.com/users/Admire7494/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Admire7494/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Admire7494/subscriptions",
"organizations_url": "https://api.github.com/users/Admire7494/orgs",
"repos_url": "https://api.github.com/users/Admire7494/repos",
"events_url": "https://api.github.com/users/Admire7494/events{/privacy}",
"received_events_url": "https://api.github.com/users/Admire7494/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-21T09:50:27 | 2024-01-30T08:03:39 | 2024-01-30T08:03:39 | NONE | null | ### System Info
How to fine-tune facebook/esm2_t33_650M_UR50D? It's too big, and `model.half()` doesn't work. Besides, I always get the error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm(handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model on the Hugging Face Hub is wrong?
The following is the script:
from os.path import join
import os
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
import transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer
from datasets import Dataset, load_metric
from sklearn.model_selection import train_test_split

# os.environ['CUDA_VISIBLE_DEVICES'] = '1'
CURRENT_DIR = os.getcwd()
check_point = join(CURRENT_DIR, "esm1b_t33_650M_UR50S")

# Data processing
def process_tsv(file):
    sequences = list()
    labels = list()
    df = pd.read_csv(file, sep="\t")
    for ind in df.index:
        sequences.append(df["sequence"][ind])
        labels.append(df["label"][ind])
    return sequences, labels

def tokenize_add_label(sequences, labels, tokenizer):
    """Takes sequences and labels, creates a Dataset containing tokenized sequences and adds labels to it.
    args:
        sequences (str): a list of sequences
        labels (int): a list of labels
        tokenizer: a pre-trained tokenizer
    return:
        Dataset: tokenized sequences and associated labels"""
    sequences_tokenized = tokenizer(sequences, padding=True, truncation=True)
    sequences_tokenized = torch.float16(sequences_tokenized)
    labels = torch.tensor(labels)
    labels = labels.long()
    sequences_dataset = Dataset.from_dict(sequences_tokenized)
    sequences_dataset = sequences_dataset.add_column("labels", labels)
    return sequences_dataset

sequences, labels = process_tsv(join(CURRENT_DIR, "example.tsv"))
tokenizer = AutoTokenizer.from_pretrained(check_point)
sequences_dataset = tokenize_add_label(sequences, labels, tokenizer)
num_labels = max(labels) + 1
model = AutoModelForSequenceClassification.from_pretrained(check_point, num_labels=num_labels)
# device = "cuda" if torch.cuda.is_available() else "cpu"
# model.to(device)
model.cuda()
# model = model.half()
# model.enable_input_require_grads()
model_name = check_point.split("/")[-1]
trainer_dir = f"{model_name}-finetuned-model_esm-1b_on_7beta"
if not os.path.exists(trainer_dir):
    os.mkdir(trainer_dir)
batch_size = 1
training_args = transformers.TrainingArguments(
    output_dir=trainer_dir,                  # output directory
    overwrite_output_dir=True,
    num_train_epochs=3,                      # total number of training epochs
    per_device_train_batch_size=batch_size,  # batch size per device during training
    per_device_eval_batch_size=batch_size,   # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=500,                        # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                       # strength of weight decay
    logging_dir=trainer_dir,                 # directory for storing logs
    logging_steps=10,
    load_best_model_at_end=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,
    metric_for_best_model="accuracy",
    greater_is_better=True,
    disable_tqdm=True,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
)
metric = load_metric(join(CURRENT_DIR, "metrics", "accuracy/accuracy.py"))

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    print("logits", logits)
    print("labels", labels)
    predictions = np.argmax(logits, axis=-1)
    print("predictions", predictions)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=sequences_dataset,
    eval_dataset=sequences_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
model.config.problem_type
trainer.train()
trainer.state.log_history
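A hedged sketch of the tokenization step without the invalid `torch.float16(...)` cast: token ids are indices into the embedding table and must stay integers; only the model weights go to half precision. The `max_length=1024` cap is an assumption for ESM-1b — inputs longer than the model's position table are exactly what trigger `srcIndex < srcSelectDimSize` device asserts. This returns a plain dict instead of a `datasets.Dataset` so it stays self-contained.

```python
def tokenize_add_label(sequences, labels, tokenizer, max_length=1024):
    """Tokenize sequences and attach integer labels (sketch, untested with ESM).

    Token ids remain integers -- no float16 cast on the inputs -- and
    truncation is capped at max_length (assumed position limit for ESM-1b).
    """
    enc = dict(tokenizer(sequences, padding=True, truncation=True,
                         max_length=max_length))
    enc["labels"] = [int(l) for l in labels]  # one integer label per sequence
    return enc
```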
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
Some weights of EsmForSequenceClassification were not initialized from the model checkpoint at /home/wangmuqiang/fine_tune_esm2/esm1b_t33_650M_UR50S and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/wangmuqiang/fine_tune_esm2/fine_tune_esm1b_7beta.py:87: FutureWarning: load_metric is deprecated and will be removed in the next major version of datasets. Use 'evaluate.load' instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
metric = load_metric(join(CURRENT_DIR,"metrics","accuracy/accuracy.py"))
Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
/opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `indexSelectLargeIndex ... Assertion 'srcIndex < srcSelectDimSize' failed` message repeats for the remaining threads of blocks 102, 78 and 62 ...]
Traceback (most recent call last):
File "/home/wangmuqiang/fine_tune_esm2/fine_tune_esm1b_7beta.py", line 108, in <module>
trainer.train()
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 2737, in training_step
self.accelerator.backward(loss)
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/accelerate/accelerator.py", line 1905, in backward
loss.backward(**kwargs)
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/function.py", line 288, in apply
return user_fn(self, *args)
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 288, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
### Expected behavior
the script that successfully ran in RTX 3090 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28179/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28178/comments | https://api.github.com/repos/huggingface/transformers/issues/28178/events | https://github.com/huggingface/transformers/issues/28178 | 2,052,081,383 | I_kwDOCUB6oc56UEbn | 28,178 | Call `.destroy()` on `DeepSpeedEngine` somewhere post training | {
"login": "chiragjn",
"id": 10295418,
"node_id": "MDQ6VXNlcjEwMjk1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragjn",
"html_url": "https://github.com/chiragjn",
"followers_url": "https://api.github.com/users/chiragjn/followers",
"following_url": "https://api.github.com/users/chiragjn/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions",
"organizations_url": "https://api.github.com/users/chiragjn/orgs",
"repos_url": "https://api.github.com/users/chiragjn/repos",
"events_url": "https://api.github.com/users/chiragjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragjn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-12-21T09:46:34 | 2024-01-22T13:21:56 | null | NONE | null | ### System Info
transformers==4.36.2
accelerate==0.25.0
deepspeed==0.12.5
### Who can help?
I was using DeepSpeed ZeRO stage 2 with the Trainer and Accelerate, and at the end of training, after the Trainer had been garbage collected, I noticed my GPU VRAM was not clearing even after aggressively calling `gc.collect()` and `torch.cuda.empty_cache()`.
I spent some time debugging and narrowed it down to the DeepSpeed optimizer not removing its hooks on PyTorch tensors.
I have submitted a PR on DeepSpeed: https://github.com/microsoft/DeepSpeed/pull/4858
But to invoke this logic, `engine.destroy()` must be called somewhere post-training.
For now, I am manually calling it outside the trainer post-training and can confirm it works; it would be nice if the Trainer could take care of this, or if there were a note in the docs.
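A minimal sketch of that manual teardown. The `trainer.deepspeed` attribute name and the availability of `engine.destroy()` are assumptions based on the PR above, not an official API; the helper returns whether a `destroy()` was actually invoked.

```python
import gc

def release_after_training(trainer):
    """Best-effort GPU memory teardown after Trainer.train().

    Calls engine.destroy() when the (assumed) `trainer.deepspeed` attribute
    holds a DeepSpeedEngine; destroy() is what removes the optimizer's
    parameter hooks so the tensors become collectable again.
    """
    engine = getattr(trainer, "deepspeed", None)
    destroyed = False
    if engine is not None and hasattr(engine, "destroy"):
        engine.destroy()
        destroyed = True
    gc.collect()
    try:  # empty_cache only matters (and only works) when torch + CUDA exist
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass
    return destroyed
```

i.e. after `trainer.train()`, call `release_after_training(trainer)` before deleting the trainer.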
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- Train any model with ZeRO stage 2 + gradient accumulation, then delete the trainer and let it be garbage collected; the model parameters will still linger in GPU memory
### Expected behavior
GPU memory should be reclaimable post training | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28178/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28177/comments | https://api.github.com/repos/huggingface/transformers/issues/28177/events | https://github.com/huggingface/transformers/issues/28177 | 2,052,062,336 | I_kwDOCUB6oc56T_yA | 28,177 | AttributeError: Can't get attribute 'SiLUActivation' on <module 'transformers.activations' | {
"login": "Lokesh-Jatangi",
"id": 142205264,
"node_id": "U_kgDOCHnhUA",
"avatar_url": "https://avatars.githubusercontent.com/u/142205264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lokesh-Jatangi",
"html_url": "https://github.com/Lokesh-Jatangi",
"followers_url": "https://api.github.com/users/Lokesh-Jatangi/followers",
"following_url": "https://api.github.com/users/Lokesh-Jatangi/following{/other_user}",
"gists_url": "https://api.github.com/users/Lokesh-Jatangi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lokesh-Jatangi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lokesh-Jatangi/subscriptions",
"organizations_url": "https://api.github.com/users/Lokesh-Jatangi/orgs",
"repos_url": "https://api.github.com/users/Lokesh-Jatangi/repos",
"events_url": "https://api.github.com/users/Lokesh-Jatangi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lokesh-Jatangi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-21T09:37:40 | 2024-01-15T19:37:01 | 2024-01-15T19:37:01 | NONE | null | ### System Info
System info -
- `transformers` version: 4.36.2
- Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I am using a custom script which loads a LLaMA checkpoint through torch:
`model_orig = torch.load(checkpoint_path)`
While unpickling the checkpoint, torch fails because the `SiLUActivation` class is missing from `activations.py`.
This PR https://github.com/huggingface/transformers/pull/27136 removed the `SiLUActivation` class, mentioning it was redundant.
P.S.: With transformers version 4.35.0, loading a checkpoint containing a SiLU activation layer through torch was successful.
Find the trace below:
```
  line 65, in load_model_from_checkpoint
    model_orig = torch.load(checkpoint_path)
  File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
    result = unpickler.load()
  File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1415, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'SiLUActivation' on <module 'transformers.activations' from '/opt/conda/envs/adapt/lib/python3.10/site-packages/transformers/activations.py'>
```
I would be happy to add the `SiLUActivation` class back to the `activations.py` file and submit a PR here. Please let me know if I can proceed.
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Any model that has a SiLU activation function and is loaded through `torch.load()` will hit this issue.
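The failure mode can be reproduced without torch or transformers at all: unpickling resolves a class by attribute lookup on the module recorded in the payload, so removing the attribute breaks old checkpoints, while restoring a compatible alias under the old name fixes them. A minimal sketch (all module and class names here are stand-ins, not the real transformers internals):

```python
import pickle
import sys
import types

# Fake module standing in for transformers.activations
mod = types.ModuleType("fake_activations")
sys.modules["fake_activations"] = mod

class SiLUActivation:  # stand-in for the removed class
    pass

SiLUActivation.__module__ = "fake_activations"
mod.SiLUActivation = SiLUActivation

payload = pickle.dumps(SiLUActivation())  # "checkpoint" written with the old library
del mod.SiLUActivation                    # class removed, as in newer transformers

try:
    pickle.loads(payload)
    raised = False
except AttributeError:  # the same error that torch.serialization surfaces
    raised = True
assert raised

# Restoring any compatible class under the old name makes the payload load again:
mod.SiLUActivation = type("SiLUActivation", (), {})
obj = pickle.loads(payload)
assert type(obj).__name__ == "SiLUActivation"
```

This suggests that re-exporting `SiLUActivation` (for example as an alias of an equivalent class) would restore backward compatibility for old pickled checkpoints.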
### Expected behavior
After reverting the changes, torch should be able to identify the SiLU activation class. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28177/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28176/comments | https://api.github.com/repos/huggingface/transformers/issues/28176/events | https://github.com/huggingface/transformers/issues/28176 | 2,051,950,925 | I_kwDOCUB6oc56TklN | 28,176 | Swinv2config isnt working with depth estimator | {
"login": "hackkhai",
"id": 51231270,
"node_id": "MDQ6VXNlcjUxMjMxMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/51231270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackkhai",
"html_url": "https://github.com/hackkhai",
"followers_url": "https://api.github.com/users/hackkhai/followers",
"following_url": "https://api.github.com/users/hackkhai/following{/other_user}",
"gists_url": "https://api.github.com/users/hackkhai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackkhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackkhai/subscriptions",
"organizations_url": "https://api.github.com/users/hackkhai/orgs",
"repos_url": "https://api.github.com/users/hackkhai/repos",
"events_url": "https://api.github.com/users/hackkhai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackkhai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-21T08:28:21 | 2024-01-30T08:03:41 | 2024-01-30T08:03:41 | NONE | null | ### System Info
ValueError: Unrecognized configuration class <class 'transformers.models.swinv2.configuration_swinv2.Swinv2Config'> for this kind of AutoModel: AutoBackbone.
Model type should be one of BeitConfig, BitConfig, ConvNextConfig, ConvNextV2Config, DinatConfig, Dinov2Config, FocalNetConfig, MaskFormerSwinConfig, NatConfig, ResNetConfig, SwinConfig, TimmBackboneConfig, VitDetConfig.
### Who can help?
@amyeroberts @Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"]
```
### Expected behavior
ValueError: Unrecognized configuration class <class 'transformers.models.swinv2.configuration_swinv2.Swinv2Config'> for this kind of AutoModel: AutoBackbone.
Model type should be one of BeitConfig, BitConfig, ConvNextConfig, ConvNextV2Config, DinatConfig, Dinov2Config, FocalNetConfig, MaskFormerSwinConfig, NatConfig, ResNetConfig, SwinConfig, TimmBackboneConfig, VitDetConfig.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28176/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28175/comments | https://api.github.com/repos/huggingface/transformers/issues/28175/events | https://github.com/huggingface/transformers/issues/28175 | 2,051,940,970 | I_kwDOCUB6oc56TiJq | 28,175 | ValueError: LlavaForConditionalGeneration does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new | {
"login": "1106280506Hx",
"id": 103016865,
"node_id": "U_kgDOBiPpoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103016865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1106280506Hx",
"html_url": "https://github.com/1106280506Hx",
"followers_url": "https://api.github.com/users/1106280506Hx/followers",
"following_url": "https://api.github.com/users/1106280506Hx/following{/other_user}",
"gists_url": "https://api.github.com/users/1106280506Hx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1106280506Hx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1106280506Hx/subscriptions",
"organizations_url": "https://api.github.com/users/1106280506Hx/orgs",
"repos_url": "https://api.github.com/users/1106280506Hx/repos",
"events_url": "https://api.github.com/users/1106280506Hx/events{/privacy}",
"received_events_url": "https://api.github.com/users/1106280506Hx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 6349658421,
... | open | false | null | [] | null | 4 | 2023-12-21T08:21:39 | 2023-12-21T11:47:27 | null | NONE | null | processor = AutoProcessor.from_pretrained("/gemini/data-2/data/llava")
model = AutoModelForPreTraining.from_pretrained("/gemini/data-2/data/llava",load_in_4bit=True,bnb_4bit_compute_dtype=torch.float16,low_cpu_mem_usage=True,attn_implementation="sdpa").to("cuda")
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28175/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28174/comments | https://api.github.com/repos/huggingface/transformers/issues/28174/events | https://github.com/huggingface/transformers/issues/28174 | 2,051,602,981 | I_kwDOCUB6oc56SPol | 28,174 | Problems when converting fairseq model to hf format | {
"login": "upskyy",
"id": 54731898,
"node_id": "MDQ6VXNlcjU0NzMxODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/54731898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upskyy",
"html_url": "https://github.com/upskyy",
"followers_url": "https://api.github.com/users/upskyy/followers",
"following_url": "https://api.github.com/users/upskyy/following{/other_user}",
"gists_url": "https://api.github.com/users/upskyy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/upskyy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/upskyy/subscriptions",
"organizations_url": "https://api.github.com/users/upskyy/orgs",
"repos_url": "https://api.github.com/users/upskyy/repos",
"events_url": "https://api.github.com/users/upskyy/events{/privacy}",
"received_events_url": "https://api.github.com/users/upskyy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-12-21T02:52:07 | 2024-01-29T08:03:18 | null | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
- Python version: 3.10.8
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Thanks for releasing this awesome repo.
## Issue 1
I am converting a fairseq checkpoint to the huggingface format (wav2vec2_conformer). The conversion itself is no problem, but the results are different.
I did some debugging and found a discrepancy with the fairseq implementation.
In fairseq, if the convolutional subsampling dimension and the encoder dimension are the same, no `nn.Linear` is used, but huggingface applies it unconditionally, so there is a problem of random (untrained) weights being used.
### fairseq
https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L324-L328
```python
self.post_extract_proj = (
    nn.Linear(self.embed, cfg.encoder_embed_dim)
    if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input
    else None
)
```
### huggingface
https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L536
```python
class Wav2Vec2ConformerFeatureProjection(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.layer_norm_eps)
        self.projection = nn.Linear(config.conv_dim[-1], config.hidden_size)  # <-- HERE
        self.dropout = nn.Dropout(config.feat_proj_dropout)

    def forward(self, hidden_states):
        # non-projected hidden states are needed for quantization
        norm_hidden_states = self.layer_norm(hidden_states)
        hidden_states = self.projection(norm_hidden_states)
        hidden_states = self.dropout(hidden_states)
        return hidden_states, norm_hidden_states
```
I think something like this is the right fix:
```python
class Wav2Vec2ConformerFeatureProjection(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.layer_norm_eps)
        if config.conv_dim[-1] != config.hidden_size:
            self.projection = nn.Linear(config.conv_dim[-1], config.hidden_size)
        self.dropout = nn.Dropout(config.feat_proj_dropout)
```
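One caveat with the proposed fix: `forward()` calls `self.projection` unconditionally, so it would need the same guard, falling back to identity when no projection exists (which is effectively what fairseq's `None` does). A framework-free sketch of the intended control flow, with a stand-in transform in place of the real `nn.Linear`:

```python
class FeatureProjectionSketch:
    """Framework-free stand-in: only project when the conv and encoder
    dims differ, mirroring fairseq's behaviour."""

    def __init__(self, conv_dim, hidden_size):
        self.has_projection = conv_dim != hidden_size

    def forward(self, hidden_states):
        if self.has_projection:
            # stand-in for nn.Linear: here just a dummy transform
            return [2.0 * v for v in hidden_states]
        return hidden_states  # identity, matching fairseq's `None` projection

# When dims match, features pass through unchanged (no random weights applied):
same = FeatureProjectionSketch(1024, 1024)
assert same.forward([1.0, 2.0]) == [1.0, 2.0]

# When they differ, the projection path runs:
diff = FeatureProjectionSketch(512, 1024)
assert diff.forward([1.0, 2.0]) == [2.0, 4.0]
```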
## Issue 2
Also, fairseq performs layer norm before entering the conformer encoder, but huggingface always performs layer norm after the conformer encoder, with no option to change this. Could this be handled as an option? I think the results change because of this.
### fairseq
https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L1230-L1231
```python
def extract_features(self, x, padding_mask=None, tgt_layer=None):
    if padding_mask is not None:
        x = index_put(x, padding_mask, 0)

    # B x T x C -> T x B x C
    x = x.transpose(0, 1)

    # B X T X C here
    position_emb = None
    if self.pos_enc_type == "rel_pos":
        position_emb = self.embed_positions(x)

    if not self.layer_norm_first:  # <-- HERE
        x = self.layer_norm(x)

    x = F.dropout(x, p=self.dropout, training=self.training)

    layer_results = []
    r = None
    for i, layer in enumerate(self.layers):
        dropout_probability = np.random.random()
        if not self.training or (dropout_probability > self.layerdrop):
            x, z = layer(
                x,
                self_attn_padding_mask=padding_mask,
                need_weights=False,
                position_emb=position_emb,
            )
            if tgt_layer is not None:
                layer_results.append((x, z))
        if i == tgt_layer:
            r = x
            break
```
### huggingface
https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L929
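The ordering matters because layer norm does not commute with the (nonlinear) encoder layers. A toy pure-Python sketch — stand-in functions only, not the real modules — shows that the two orderings genuinely diverge:

```python
def layer_norm(xs, eps=1e-5):
    # per-feature layer norm over a single vector
    mu = sum(xs) / len(xs)
    var = sum((v - mu) ** 2 for v in xs) / len(xs)
    return [(v - mu) / (var + eps) ** 0.5 for v in xs]

def toy_layer(xs):
    # nonlinear stand-in for a conformer block
    return [v + 0.5 * v * v for v in xs]

x = [1.0, 2.0, 3.0]
pre_norm = toy_layer(layer_norm(x))    # fairseq: norm before the encoder stack
post_norm = layer_norm(toy_layer(x))   # current HF: norm after the encoder stack

assert pre_norm != post_norm  # the ordering changes the result
```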
### Expected behavior
What do you think about this problem?
If these modifications are acceptable, I can proceed with a PR that includes a conversion script covering the fairseq extension. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28174/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28173/comments | https://api.github.com/repos/huggingface/transformers/issues/28173/events | https://github.com/huggingface/transformers/issues/28173 | 2,051,315,575 | I_kwDOCUB6oc56RJd3 | 28,173 | VitsTokenizer decode without special tokens produces odd results | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-12-20T21:37:28 | 2024-01-12T17:39:38 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker (tokenizers) @Vaibhavs10 @sanchit-gandhi (audio team)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
>>> from transformers import AutoTokenizer
>>> tokenizer=AutoTokenizer.from_pretrained('facebook/mms-tts-eng')
>>> tokenizer.encode('hello world')
[0, 6, 0, 7, 0, 21, 0, 21, 0, 22, 0, 19, 0, 9, 0, 22, 0, 25, 0, 21, 0, 5, 0]
>>> tokenizer.decode(tokenizer.encode('hello world'), skip_special_tokens=False)
'hello world'
>>> tokenizer.decode(tokenizer.encode('hello world'), skip_special_tokens=True)
'el ol'
>>> tokenizer.decode(tokenizer.encode('abcdefghijklmnopqrstuvwxyz'), skip_special_tokens=True)
'bdfhjmoqsuwy'
```
From the last example, it looks like it's taking the even-positioned elements.
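For this input, both outputs can be reconstructed by hand from the id sequence shown above (the id-to-character mapping below is inferred from the issue output and is illustrative only; id 0 is the pad/special token):

```python
# id -> char mapping inferred from the tokenized list in the issue
id_to_char = {0: "k", 6: "h", 7: "e", 21: "l", 22: "o", 19: " ", 9: "w", 25: "r", 5: "d"}
ids = [0, 6, 0, 7, 0, 21, 0, 21, 0, 22, 0, 19, 0, 9, 0, 22, 0, 25, 0, 21, 0, 5, 0]

# The expected decode is simply "drop the pad ids and join":
expected = "".join(id_to_char[i] for i in ids if i != 0)
assert expected == "hello world"

# The buggy output matches dropping every other remaining character:
chars = [id_to_char[i] for i in ids if i != 0]
buggy = "".join(chars[1::2])
assert buggy == "el ol"
```

The alphabet example does not fit a simple alternation exactly, so this is only suggestive of where the bug lies, but it shows the observed 'el ol' is consistent with skipping alternate characters rather than skipping pad tokens.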
### Expected behavior
`[0, 6, 0, 7, 0, 21, 0, 21, 0, 22, 0, 19, 0, 9, 0, 22, 0, 25, 0, 21, 0, 5, 0]`, for which the tokenized version is:
```
['k', 'h', 'k', 'e', 'k', 'l', 'k', 'l', 'k', 'o', 'k', ' ', 'k', 'w', 'k', 'o', 'k', 'r', 'k', 'l', 'k', 'd', 'k']
```
should be decoded as 'hello world', or something more informative than 'el ol'. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28173/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28173/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28172/comments | https://api.github.com/repos/huggingface/transformers/issues/28172/events | https://github.com/huggingface/transformers/pull/28172 | 2,051,285,377 | PR_kwDOCUB6oc5igryw | 28,172 | [docs] Sort es/toctree.yml like en/toctree.yml | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-20T21:20:42 | 2023-12-27T14:35:38 | 2023-12-27T14:07:49 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I think that the file `es/_toctree.yml` is not aligned with `en/_toctree.yml`.
I would like to ask whether it was done this way intentionally; if not, I would appreciate a review of this change.
I kept this part the same because the `Performance and Scalability` section is not in the Spanish documentation:
```
- isExpanded: false
  sections:
  - local: debugging
    title: Debugging
  title: Rendimiento y escalabilidad
```
Thanks for your time.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@osanseviero @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28172/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28172",
"html_url": "https://github.com/huggingface/transformers/pull/28172",
"diff_url": "https://github.com/huggingface/transformers/pull/28172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28172.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28171/comments | https://api.github.com/repos/huggingface/transformers/issues/28171/events | https://github.com/huggingface/transformers/pull/28171 | 2,051,234,051 | PR_kwDOCUB6oc5iggiH | 28,171 | Bug: `training_args.py` fix missing import with accelerate with version `accelerate==0.20.1` | {
"login": "michaelfeil",
"id": 63565275,
"node_id": "MDQ6VXNlcjYzNTY1Mjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/63565275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelfeil",
"html_url": "https://github.com/michaelfeil",
"followers_url": "https://api.github.com/users/michaelfeil/followers",
"following_url": "https://api.github.com/users/michaelfeil/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelfeil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelfeil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelfeil/subscriptions",
"organizations_url": "https://api.github.com/users/michaelfeil/orgs",
"repos_url": "https://api.github.com/users/michaelfeil/repos",
"events_url": "https://api.github.com/users/michaelfeil/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelfeil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-20T20:38:05 | 2023-12-22T14:35:45 | 2023-12-22T11:41:35 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
I have accelerate and transformers pinned with `poetry` to:
```
accelerate="^0.20.1"
transformers="4.36.2"
```
This leads to the odd situation that:
```python
is_accelerate_available(min_version="0.20.1") # True
is_accelerate_available() # False, leading to no import at the top of the file
```
```python
@cached_property
def _setup_devices(self) -> "torch.device":
    requires_backends(self, ["torch"])
    logger.info("PyTorch: setting up devices")
    if not is_sagemaker_mp_enabled():
        if not is_accelerate_available(min_version="0.20.1"):
            raise ImportError(
                "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
            )
>       AcceleratorState._reset_state(reset_partial_state=True)
E       NameError: name 'AcceleratorState' is not defined
```
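Abstractly, the bug is that the module-level import guard and the runtime check use different minimum versions, so they can disagree for versions in between. A toy pure-Python reproduction (the versions and names here are illustrative, not the actual transformers internals):

```python
INSTALLED = (0, 20, 1)      # e.g. accelerate 0.20.1 in the environment
GUARD_DEFAULT = (0, 21, 0)  # stricter default used by the top-of-file guard

def is_accelerate_available(min_version=GUARD_DEFAULT):
    return INSTALLED >= min_version

# Names like AcceleratorState are only bound when the no-arg check passes:
namespace = {}
if is_accelerate_available():
    namespace["AcceleratorState"] = object  # placeholder for the real import

# The runtime check later passes with its own explicit floor...
assert is_accelerate_available(min_version=(0, 20, 1))
# ...but the guarded import never ran, so using the name fails at runtime:
assert "AcceleratorState" not in namespace
```

Pinning the two checks to the same minimum version (or performing the import wherever the check actually passes) removes the inconsistency.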
## Before submitting
- [NA ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ NA ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [NA ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28171",
"html_url": "https://github.com/huggingface/transformers/pull/28171",
"diff_url": "https://github.com/huggingface/transformers/pull/28171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28171.patch",
"merged_at": "2023-12-22T11:41:35"
} |
https://api.github.com/repos/huggingface/transformers/issues/28170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28170/comments | https://api.github.com/repos/huggingface/transformers/issues/28170/events | https://github.com/huggingface/transformers/issues/28170 | 2,051,205,921 | I_kwDOCUB6oc56Qush | 28,170 | Error while importing the transformers | {
"login": "iamshreeram",
"id": 7752805,
"node_id": "MDQ6VXNlcjc3NTI4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7752805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamshreeram",
"html_url": "https://github.com/iamshreeram",
"followers_url": "https://api.github.com/users/iamshreeram/followers",
"following_url": "https://api.github.com/users/iamshreeram/following{/other_user}",
"gists_url": "https://api.github.com/users/iamshreeram/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamshreeram/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamshreeram/subscriptions",
"organizations_url": "https://api.github.com/users/iamshreeram/orgs",
"repos_url": "https://api.github.com/users/iamshreeram/repos",
"events_url": "https://api.github.com/users/iamshreeram/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamshreeram/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-20T20:18:52 | 2023-12-21T13:33:57 | 2023-12-21T13:33:56 | NONE | null | ### System Info
**Transformers Version** : 4.36.0.dev0
**Platform** : Mac OS
**Python** : 3.9.18
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Run the following program to translate to the target language:
```
from transformers import pipeline
pipeline_generator = pipeline(
"automatic-speech-recognition",
"facebook/seamless-m4t-v2-large",
)
transcript = pipeline_generator("https://www2.cs.uic.edu/~i101/SoundFiles/preamble10.wav", generate_kwargs={"tgt_lang": "spa", },)
```
2. This throws the following exception:
```
Traceback (most recent call last):
File "/Users/home/ram/project/python/text-translation-speech/ttrans.py", line 12, in <module>
transcript = pipeline_generator("https://www2.cs.uic.edu/~i101/SoundFiles/preamble10.wav", generate_kwargs={"tgt_lang": "ta", },)
File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 357, in __call__
return super().__call__(inputs, **kwargs)
File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1134, in __call__
self.get_iterator(
File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1182, in get_iterator
feature_extractor = self.feature_extractor if self.feature_extractor is not None else self.image_processor
AttributeError: 'AutomaticSpeechRecognitionPipeline' object has no attribute 'image_processor'
```
3. The exception is thrown even though nothing related to `image_processor` is imported or used.
### Expected behavior
Produce output in the target language, as seen in this [thread](https://github.com/facebookresearch/seamless_communication/issues/237#issuecomment-1864534911), and run with the expected results.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28170/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28169/comments | https://api.github.com/repos/huggingface/transformers/issues/28169/events | https://github.com/huggingface/transformers/pull/28169 | 2,051,138,169 | PR_kwDOCUB6oc5igLsF | 28,169 | disable test_retain_grad_hidden_states_attentions on SeamlessM4TModelWithTextInputTest | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-20T19:22:30 | 2023-12-21T07:39:45 | 2023-12-21T07:39:44 | CONTRIBUTOR | null | # What does this PR do?
Disables `tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_retain_grad_hidden_states_attentions` as discussed in https://github.com/huggingface/transformers/pull/28144#issuecomment-1864990888
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28169/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28169/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28169",
"html_url": "https://github.com/huggingface/transformers/pull/28169",
"diff_url": "https://github.com/huggingface/transformers/pull/28169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28169.patch",
"merged_at": "2023-12-21T07:39:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28168/comments | https://api.github.com/repos/huggingface/transformers/issues/28168/events | https://github.com/huggingface/transformers/pull/28168 | 2,051,109,413 | PR_kwDOCUB6oc5igFeH | 28,168 | Fix `input_embeds` docstring in encoder-decoder architectures | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-20T18:57:56 | 2023-12-21T11:01:58 | 2023-12-21T11:01:55 | MEMBER | null | # What does this PR do?
Big diff, small change:
- adds a missing paragraph between the docstring of `past_key_values` and `input_embeds`
- adds missing `input_embeds` docstring in a few TF models
It chips away some of the diff in #28065 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28168",
"html_url": "https://github.com/huggingface/transformers/pull/28168",
"diff_url": "https://github.com/huggingface/transformers/pull/28168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28168.patch",
"merged_at": "2023-12-21T11:01:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/28167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28167/comments | https://api.github.com/repos/huggingface/transformers/issues/28167/events | https://github.com/huggingface/transformers/issues/28167 | 2,050,971,650 | I_kwDOCUB6oc56P1gC | 28,167 | Misleading doc on BLIP `outputs.loss`: doesn't return true NLL but NLL *with label smoothing* | {
"login": "DianeBouchacourt",
"id": 13796686,
"node_id": "MDQ6VXNlcjEzNzk2Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/13796686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DianeBouchacourt",
"html_url": "https://github.com/DianeBouchacourt",
"followers_url": "https://api.github.com/users/DianeBouchacourt/followers",
"following_url": "https://api.github.com/users/DianeBouchacourt/following{/other_user}",
"gists_url": "https://api.github.com/users/DianeBouchacourt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DianeBouchacourt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DianeBouchacourt/subscriptions",
"organizations_url": "https://api.github.com/users/DianeBouchacourt/orgs",
"repos_url": "https://api.github.com/users/DianeBouchacourt/repos",
"events_url": "https://api.github.com/users/DianeBouchacourt/events{/privacy}",
"received_events_url": "https://api.github.com/users/DianeBouchacourt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 5 | 2023-12-20T17:18:28 | 2024-01-25T12:56:01 | null | NONE | null | ### System Info
Transformers 4.35.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Not really a bug, more a misleading feature:
Computing the negative log-likelihood (NLL) is useful for understanding the probability of a caption for a given image with BLIP's generative text decoder. However, if one adapts the `BlipForConditionalGeneration` example explained here
https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipForConditionalGeneration
to compute the NLL, one would naturally do:
```
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForConditionalGeneration
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A image of two cats"
inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs, labels=inputs['input_ids'])
nll=outputs.loss.item()
```
However, the loss is computed **with label smoothing**, as in training, because the smoothing factor is hard-coded in the BLIP LM head (just like in the original Salesforce code):
https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/models/blip/modeling_blip_text.py#L892
Therefore the value returned by `.loss` is not the true NLL, and I believe the documentation should be clearer on this.
I propose to:
* change the doc to make this clearer
* or add a parameter label_smoothing when initializing the BLIP model
* or add a function to compute the NLL explicitly, separate from `.loss`, e.g.:
```
from torch.nn import CrossEntropyLoss

def return_nll(scores, target):
loss_fct = CrossEntropyLoss(reduction='mean', label_smoothing=0.0) # we're setting it to 0
loss = loss_fct(scores, target)
return loss
def compute_generative_probability(model, processor, image, text):
inputs = processor(images=image, text=text,
return_tensors="pt", padding=True)
outputs = model(**inputs, labels=inputs['input_ids'])
shifted_predictions_scores = outputs.logits[0 , :-1, :].contiguous()
shifted_labels = inputs["input_ids"][0, 1:].contiguous().to(shifted_predictions_scores.device)
nll = return_nll(shifted_predictions_scores, target=shifted_labels)
return nll
```
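To make the effect concrete, here is a minimal pure-Python sketch, independent of BLIP and PyTorch. The formula matches `CrossEntropyLoss`'s label smoothing; the 0.1 factor mirrors the value hard-coded in the linked line (treat the exact value as an assumption). It shows how smoothing inflates the reported loss relative to the true NLL:

```python
import math

def smoothed_ce(logits, target, eps):
    """Cross-entropy with label smoothing over a single position:
    (1 - eps) * NLL(target) + eps * mean over classes of (-log p_c)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    log_probs = [l - log_z for l in logits]
    nll = -log_probs[target]
    uniform = -sum(log_probs) / len(log_probs)
    return (1 - eps) * nll + eps * uniform

logits = [3.0, 1.0, 0.2]
true_nll = smoothed_ce(logits, 0, 0.0)   # eps=0 recovers the plain NLL
smoothed = smoothed_ce(logits, 0, 0.1)   # eps=0.1 mimics the hard-coded value
print(true_nll < smoothed)               # smoothing inflates the "loss"
```

With `eps=0` the function reduces to the plain NLL, which is why zeroing the smoothing factor, as in `return_nll` above, recovers the true value.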
Writing this so that other researchers are aware :) Thanks a lot for the amazing library
### Expected behavior
See code above | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28167/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28166/comments | https://api.github.com/repos/huggingface/transformers/issues/28166/events | https://github.com/huggingface/transformers/pull/28166 | 2,050,712,916 | PR_kwDOCUB6oc5ietpb | 28,166 | Generate: fix speculative decoding | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-20T14:54:47 | 2023-12-20T18:55:39 | 2023-12-20T18:55:35 | MEMBER | null | # What does this PR do?
This PR:
- Fixes speculative decoding quality:
- Incorrect indexing operation
- The assistant model should sample when the larger model also samples (more generally, it should take the original model's `generation_config`)
- Custom logits processors should also be passed to the assistant model
- Changes docs to put an emphasis on "speculative decoding" as opposed to "assisted generation", as the former is more popular
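For reference, a toy sketch of the greedy verification step that speculative decoding relies on (illustrative only, not the actual `generate()` implementation): the assistant drafts a block of tokens, the main model scores them in one forward pass, and the longest agreeing prefix plus one corrected (or bonus) token is kept.

```python
def accept_draft(draft_tokens, target_tokens):
    """Greedy verification step: keep draft tokens up to the first
    disagreement, then append the target model's token at that position.
    Both inputs are per-position argmax token ids; target_tokens has one
    extra entry (the "bonus" token after the last draft position)."""
    kept = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            kept.append(t)  # target's correction replaces the draft token
            return kept
        kept.append(d)
    kept.append(target_tokens[len(draft_tokens)])  # all accepted: bonus token
    return kept

print(accept_draft([5, 7, 9], [5, 7, 2, 4]))  # mismatch at position 2
print(accept_draft([5, 7, 9], [5, 7, 9, 4]))  # full acceptance + bonus token
```

The indexing and sampling fixes above matter precisely because the accepted prefix depends on token-for-token agreement between the two models.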
________
`RUN_SLOW=1 py.test tests/ -k speculative` was run locally to confirm that slow assisted generation whisper tests were passing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28166",
"html_url": "https://github.com/huggingface/transformers/pull/28166",
"diff_url": "https://github.com/huggingface/transformers/pull/28166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28166.patch",
"merged_at": "2023-12-20T18:55:35"
} |
https://api.github.com/repos/huggingface/transformers/issues/28165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28165/comments | https://api.github.com/repos/huggingface/transformers/issues/28165/events | https://github.com/huggingface/transformers/pull/28165 | 2,050,664,966 | PR_kwDOCUB6oc5iei9i | 28,165 | Add new meta w2v2-conformer BERT-like model | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 11 | 2023-12-20T14:31:25 | 2024-01-18T13:37:34 | 2024-01-18T13:37:34 | COLLABORATOR | null | # What does this PR do?
Meta just open-sourced a Wav2Vec2-BERT conformer [model](https://huggingface.co/facebook/w2v-bert-2.0). This one is particularly interesting because it's under an MIT license and was pretrained on 101 input languages!
It requires adapting the current w2v2-conformer code, which this PR does.
cc @sanchit-gandhi, @Vaibhavs10 and @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28165",
"html_url": "https://github.com/huggingface/transformers/pull/28165",
"diff_url": "https://github.com/huggingface/transformers/pull/28165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28165.patch",
"merged_at": "2024-01-18T13:37:34"
} |
https://api.github.com/repos/huggingface/transformers/issues/28164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28164/comments | https://api.github.com/repos/huggingface/transformers/issues/28164/events | https://github.com/huggingface/transformers/issues/28164 | 2,050,512,860 | I_kwDOCUB6oc56OFfc | 28,164 | Inconsistencies between `.save_pretrained` and `from_pretrained` for slow and fast tokenizers (RoFormer) | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-12-20T13:03:58 | 2024-01-16T15:37:17 | 2024-01-16T15:37:17 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My original problem occurred when loading and saving with AutoTokenizer:
```py
from transformers import AutoTokenizer
# Load original tokenizer
original = AutoTokenizer.from_pretrained('alchemab/antiberta2')
print(original("生活的真谛是"))
# {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
# Save tokenizer
original.save_pretrained('saved')
# Load this new tokenizer
new = AutoTokenizer.from_pretrained('saved')
print(new("生活的真谛是"))
# {'input_ids': [1, 4, 2], 'token_type_ids': [0, 0, 0], 'attention_mask': [1, 1, 1]}
```
Digging a bit deeper, it seems to be an issue with the slow-to-fast converter, with certain default values being overridden (presumably `handle_chinese_chars` in `BertNormalizer`). I know RoFormer isn't a very popular model these days, but since it uses a near-identical tokenization strategy to BERT models, this issue may have implications elsewhere.
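For context, here is a rough pure-Python sketch of what `handle_chinese_chars=True` does in the BERT-style normalizer (a simplification: only the main CJK Unified Ideographs block is checked here, whereas the real tokenizer covers several more codepoint ranges):

```python
def tokenize_chinese_chars(text):
    """Surround each CJK character with spaces, as the BERT-style
    normalizer does when handle_chinese_chars=True (simplified to the
    main CJK Unified Ideographs block U+4E00..U+9FFF)."""
    out = []
    for ch in text:
        if 0x4E00 <= ord(ch) <= 0x9FFF:
            out.extend([" ", ch, " "])
        else:
            out.append(ch)
    return "".join(out)

# Each ideograph becomes its own whitespace-delimited token
print(tokenize_chinese_chars("生活的真谛是").split())
```

If the converter flips this to `False`, the whole string reaches the WordPiece step as a single run, which would explain the single unknown token in the round-tripped tokenizer above.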
### Expected behavior
Should produce the same (correct) results if it were loaded with the original (slow) tokenizer
```py
from transformers import RoFormerTokenizer
# Load original tokenizer
original = RoFormerTokenizer.from_pretrained('alchemab/antiberta2')
print(original("生活的真谛是"))
# {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
# Save tokenizer
original.save_pretrained('saved')
# Load this new tokenizer
new = RoFormerTokenizer.from_pretrained('saved')
print(new("生活的真谛是"))
# {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28164/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28163/comments | https://api.github.com/repos/huggingface/transformers/issues/28163/events | https://github.com/huggingface/transformers/pull/28163 | 2,050,454,676 | PR_kwDOCUB6oc5id0l1 | 28,163 | [Phi] Extend implementation to use GQA/MQA. | {
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 39 | 2023-12-20T12:27:25 | 2024-01-15T11:42:37 | 2024-01-11T14:58:02 | CONTRIBUTOR | null | # What does this PR do?
As we discussed on the repositories and in the e-mail thread, these are minor changes that we would like to integrate into HF.
One thing that we need to discuss is how to leverage the current batch of models (using `transformers>=4.36.0`), since the proposed change will make a shape difference in `qkv` weights and biases.
We could change the conversion script from Phi (`transformers==4.36.0`) to reflect this new implementation, while we, in turn, convert our current repositories' weights to this new format and use the HF-based code in future deploys. That way, we could avoid having two conversions, i.e., phi-msft -> phi and phi (4.36.0) -> new_phi.
Please let me know your thoughts!
## Changes
- Adds support for using GQA/MQA with Phi-based models. This is a combined implementation between the old `PhiAttention` and `LlamaAttention`.
- Fixes the documentation paths for official Phi-based models.
- Adds support for dynamically padding the `vocab_size` to a multiple of 64 (for better utilization of Ampere/Hopper-based GPUs).
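As an illustration of the GQA/MQA idea (a generic sketch, not the actual PR code): with fewer key/value heads than query heads, each group of consecutive query heads shares one KV head, and MQA is the special case of a single KV head.

```python
def kv_head_for_query_head(q_head, num_heads, num_kv_heads):
    """Map a query-head index to the key/value head it attends with.
    Assumes num_heads is an integer multiple of num_kv_heads."""
    assert num_heads % num_kv_heads == 0
    group_size = num_heads // num_kv_heads
    return q_head // group_size

# GQA: 8 query heads sharing 2 kv heads -> groups of 4
print([kv_head_for_query_head(h, 8, 2) for h in range(8)])
# MQA: every query head uses the single kv head
print([kv_head_for_query_head(h, 8, 1) for h in range(8)])
```

In practice this is implemented by repeating (or broadcasting) the KV heads along the head dimension before the attention product, rather than indexing per head.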
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@susnato @LysandreJik @ArthurZucker @philschmid @osanseviero
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28163/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28163/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28163",
"html_url": "https://github.com/huggingface/transformers/pull/28163",
"diff_url": "https://github.com/huggingface/transformers/pull/28163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28163.patch",
"merged_at": "2024-01-11T14:58:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28162/comments | https://api.github.com/repos/huggingface/transformers/issues/28162/events | https://github.com/huggingface/transformers/issues/28162 | 2,050,409,938 | I_kwDOCUB6oc56NsXS | 28,162 | save_pretrained no longer works for AutomaticSpeechRecognitionPipeline | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-20T11:58:12 | 2024-01-18T16:11:51 | 2024-01-18T16:11:51 | NONE | null | ### System Info
transformers-4.37.0.dev0
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
asr_pipeline = pipeline('automatic-speech-recognition',
model="openai/whisper-tiny")
asr_pipeline.save_pretrained("pipeline_save")
```
Gives this error:
```
Traceback (most recent call last):
File "/Users/bruno/testing.py", line 6, in <module>
asr_pipeline.save_pretrained("pipeline_save")
File "/Users/bruno/venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 883, in save_pretrained
if self.image_processor is not None:
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'AutomaticSpeechRecognitionPipeline' object has no attribute 'image_processor'
```
### Expected behavior
The pipeline should be saved.
Calling save_pretrained on a pipeline is used by BentoML; as a result, versions of transformers newer than 4.32.1 cannot be used to serve an AutomaticSpeechRecognitionPipeline with BentoML.
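Until the upstream fix lands, one hedged workaround suggested by the traceback (untested against transformers itself) is to set the missing attribute to `None` before saving, since the failing line only checks `is not None`. The stand-in class below mirrors that check without requiring transformers installed:

```python
class AsrPipelineStandIn:
    """Stand-in mirroring the failing check in transformers/pipelines/base.py."""

    def save_pretrained(self, save_directory):
        # This is the check that raises AttributeError when the attribute
        # was never set on the ASR pipeline instance:
        if self.image_processor is not None:
            pass  # the real method would save the image processor here
        return "saved"


pipe = AsrPipelineStandIn()
try:
    pipe.save_pretrained("pipeline_save")
except AttributeError as err:
    print(f"fails: {err}")

# Workaround: explicitly set the missing attribute before saving.
pipe.image_processor = None
print(pipe.save_pretrained("pipeline_save"))  # → saved
```

On the real pipeline the equivalent would be `asr_pipeline.image_processor = None` before `save_pretrained` — again, an assumption based on the traceback, not a verified fix; pinning transformers to 4.32.1 remains the other option mentioned in the thread.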
https://github.com/bentoml/BentoML/issues/4339 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28162/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28161/comments | https://api.github.com/repos/huggingface/transformers/issues/28161/events | https://github.com/huggingface/transformers/pull/28161 | 2,050,387,978 | PR_kwDOCUB6oc5idl31 | 28,161 | Update FA2 exception msg to point to hub discussions | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-20T11:43:14 | 2023-12-20T16:52:22 | 2023-12-20T16:52:17 | COLLABORATOR | null | # What does this PR do?
Small update to the FA2 warning, pointing users towards discussions on the Hub. Addresses cases like #28100, where support is requested for a model not in the transformers repo.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28161",
"html_url": "https://github.com/huggingface/transformers/pull/28161",
"diff_url": "https://github.com/huggingface/transformers/pull/28161.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28161.patch",
"merged_at": "2023-12-20T16:52:17"
} |
https://api.github.com/repos/huggingface/transformers/issues/28160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28160/comments | https://api.github.com/repos/huggingface/transformers/issues/28160/events | https://github.com/huggingface/transformers/issues/28160 | 2,050,382,630 | I_kwDOCUB6oc56Nlsm | 28,160 | [Flash Attention 2] Performance improvement | {
"login": "li-plus",
"id": 39846316,
"node_id": "MDQ6VXNlcjM5ODQ2MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-plus",
"html_url": "https://github.com/li-plus",
"followers_url": "https://api.github.com/users/li-plus/followers",
"following_url": "https://api.github.com/users/li-plus/following{/other_user}",
"gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-plus/subscriptions",
"organizations_url": "https://api.github.com/users/li-plus/orgs",
"repos_url": "https://api.github.com/users/li-plus/repos",
"events_url": "https://api.github.com/users/li-plus/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-plus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
},
{
"id": 6202871275,
"node_id": ... | open | false | null | [] | null | 3 | 2023-12-20T11:39:32 | 2023-12-20T13:18:13 | null | CONTRIBUTOR | null | ### Feature request
The current flash attention 2 integration is sub-optimal in performance because it requires unpadding and padding the activations on **each** layer. For example, in the llama implementation:
https://github.com/huggingface/transformers/blob/769a9542de4e8b23f0a551738e18760621f463e8/src/transformers/models/llama/modeling_llama.py#L591-L612
These small unpad/pad kernels keep the GPU waiting for the CPU, as shown by the visible gaps between kernels in the CUDA stream.

I'd suggest unpadding the activations at the very beginning (right after the word embeddings) and padding them back at the end (maybe before lm_head); the gaps should then disappear.
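To make the suggestion concrete, here is a toy, framework-free sketch of the round trip: unpad once after the embeddings, run every layer on the flat token list, pad once at the end. Real code would operate on torch tensors (e.g. with flash-attn's unpad/pad index helpers); the function names and list-of-lists layout here are purely illustrative:

```python
def unpad(hidden, mask):
    """Keep only non-padding positions; remember where each one came from."""
    idx = [(b, t) for b, row in enumerate(mask) for t, m in enumerate(row) if m]
    return [hidden[b][t] for (b, t) in idx], idx


def pad(values, idx, batch, seqlen, fill=0.0):
    """Scatter the flat values back into a [batch, seqlen] layout."""
    out = [[fill] * seqlen for _ in range(batch)]
    for v, (b, t) in zip(values, idx):
        out[b][t] = v
    return out


mask = [[1, 1, 0], [1, 0, 0]]                 # attention mask, 0 = padding
hidden = [[1.0, 2.0, 9.9], [3.0, 9.9, 9.9]]   # 9.9 marks padding garbage
flat, idx = unpad(hidden, mask)               # run all layers on `flat`
print(flat)                                   # → [1.0, 2.0, 3.0]
print(pad(flat, idx, 2, 3))                   # → [[1.0, 2.0, 0.0], [3.0, 0.0, 0.0]]
```

The point is that `unpad`/`pad` run once per forward pass instead of once per layer, so the per-layer CPU-launched index kernels (and the gaps they cause) go away.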
### Motivation
To eliminate performance overhead of flash attention 2.
### Your contribution
I can write the code when I'm not busy. Maybe not now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28160/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28160/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28159/comments | https://api.github.com/repos/huggingface/transformers/issues/28159/events | https://github.com/huggingface/transformers/issues/28159 | 2,050,081,233 | I_kwDOCUB6oc56McHR | 28,159 | training a model `Falcon-7b instruct` and facing an error | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-12-20T08:28:22 | 2023-12-21T05:00:58 | 2023-12-21T05:00:58 | CONTRIBUTOR | null | ### System Info
Kaggle notebook, Google Colab
```
training_args = transformers.TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4, #4
num_train_epochs=6,
learning_rate=2e-4,
fp16=True,
save_total_limit=3,
logging_steps=500,
output_dir="experiments",
optim="paged_adamw_8bit",
lr_scheduler_type="cosine",
warmup_ratio=0.05,
push_to_hub=True,
)
trainer = transformers.Trainer(
model=model,
train_dataset=train_data_transformed,
args=training_args,
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False
trainer.train()
```
error:
```
RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
```
```
You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[27], line 1
----> 1 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1528, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1525 try:
1526 # Disable progress bars when uploading models during checkpoints to avoid polluting stdout
1527 hf_hub_utils.disable_progress_bars()
-> 1528 return inner_training_loop(
1529 args=args,
1530 resume_from_checkpoint=resume_from_checkpoint,
1531 trial=trial,
1532 ignore_keys_for_eval=ignore_keys_for_eval,
1533 )
1534 finally:
1535 hf_hub_utils.enable_progress_bars()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1854, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1851 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1853 with self.accelerator.accumulate(model):
-> 1854 tr_loss_step = self.training_step(model, inputs)
1856 if (
1857 args.logging_nan_inf_filter
1858 and not is_torch_tpu_available()
1859 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1860 ):
1861 # if loss is nan or inf simply add the average of previous logged losses
1862 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2732, in Trainer.training_step(self, model, inputs)
2730 scaled_loss.backward()
2731 else:
-> 2732 self.accelerator.backward(loss)
2734 return loss.detach() / self.args.gradient_accumulation_steps
File /opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py:1903, in Accelerator.backward(self, loss, **kwargs)
1901 return
1902 elif self.scaler is not None:
-> 1903 self.scaler.scale(loss).backward(**kwargs)
1904 else:
1905 loss.backward(**kwargs)
File /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:487, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
477 if has_torch_function_unary(self):
478 return handle_torch_function(
479 Tensor.backward,
480 (self,),
(...)
485 inputs=inputs,
486 )
--> 487 torch.autograd.backward(
488 self, gradient, retain_graph, create_graph, inputs=inputs
489 )
File /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:200, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
195 retain_graph = create_graph
197 # The reason we repeat same the comment below is that
198 # some Python versions print out the first line of a multi-line function
199 # calls in the traceback and some print out the last line
--> 200 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
201 tensors, grad_tensors_, retain_graph, create_graph, inputs,
202 allow_unreachable=True, accumulate_grad=True)
File /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py:274, in BackwardCFunction.apply(self, *args)
270 raise RuntimeError("Implementing both 'backward' and 'vjp' for a custom "
271 "Function is not allowed. You should only implement one "
272 "of them.")
273 user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn
--> 274 return user_fn(self, *args)
File /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:141, in CheckpointFunction.backward(ctx, *args)
137 detached_inputs = detach_variable(tuple(inputs))
138 with torch.enable_grad(), \
139 torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs), \
140 torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs):
--> 141 outputs = ctx.run_function(*detached_inputs)
143 if isinstance(outputs, torch.Tensor):
144 outputs = (outputs,)
File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:785, in FalconModel.forward.<locals>.create_custom_forward.<locals>.custom_forward(*inputs)
783 def custom_forward(*inputs):
784 # None for past_key_value
--> 785 return module(*inputs, use_cache=use_cache, output_attentions=output_attentions)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
163 output = module._old_forward(*args, **kwargs)
164 else:
--> 165 output = module._old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:453, in FalconDecoderLayer.forward(self, hidden_states, alibi, attention_mask, layer_past, head_mask, use_cache, output_attentions)
450 attention_layernorm_out = self.input_layernorm(hidden_states)
452 # Self attention.
--> 453 attn_outputs = self.self_attention(
454 attention_layernorm_out,
455 layer_past=layer_past,
456 attention_mask=attention_mask,
457 alibi=alibi,
458 head_mask=head_mask,
459 use_cache=use_cache,
460 output_attentions=output_attentions,
461 )
463 attention_output = attn_outputs[0]
465 if not self.config.new_decoder_architecture:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
163 output = module._old_forward(*args, **kwargs)
164 else:
--> 165 output = module._old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:307, in FalconAttention.forward(self, hidden_states, alibi, attention_mask, layer_past, head_mask, use_cache, output_attentions)
304 value_layer = value_layer.transpose(1, 2).reshape(batch_size * num_kv_heads, query_length, self.head_dim)
306 past_kv_length = 0 if layer_past is None else layer_past[0].shape[1]
--> 307 query_layer, key_layer = self.maybe_rotary(query_layer, key_layer, past_kv_length)
309 if layer_past is not None:
310 past_key, past_value = layer_past
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:108, in FalconRotaryEmbedding.forward(self, query, key, past_key_values_length)
106 batch, seq_len, head_dim = query.shape
107 cos, sin = self.cos_sin(seq_len, past_key_values_length, query.device, query.dtype)
--> 108 return (query * cos) + (rotate_half(query) * sin), (key * cos) + (rotate_half(key) * sin)
RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
```
How can I solve this?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
-
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28159/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28158/comments | https://api.github.com/repos/huggingface/transformers/issues/28158/events | https://github.com/huggingface/transformers/issues/28158 | 2,049,997,699 | I_kwDOCUB6oc56MHuD | 28,158 | During the training process, what happens if tf32 and bf16 are enabled at the same time? | {
"login": "Bonytu",
"id": 47250017,
"node_id": "MDQ6VXNlcjQ3MjUwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/47250017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bonytu",
"html_url": "https://github.com/Bonytu",
"followers_url": "https://api.github.com/users/Bonytu/followers",
"following_url": "https://api.github.com/users/Bonytu/following{/other_user}",
"gists_url": "https://api.github.com/users/Bonytu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bonytu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bonytu/subscriptions",
"organizations_url": "https://api.github.com/users/Bonytu/orgs",
"repos_url": "https://api.github.com/users/Bonytu/repos",
"events_url": "https://api.github.com/users/Bonytu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bonytu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 1 | 2023-12-20T07:35:08 | 2024-01-31T13:40:52 | null | NONE | null | ### System Info
transformers 4.34.1
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. use trainer and set --tf32 True and --bf16 True
### Expected behavior
Hi, when I was training llama2-13b, I set both --tf32 True and --bf16 True at the same time. I'm confused because the trainer worked normally when both of these parameters were enabled. During this process, which parts used tf32 and which parts used bf16? How exactly does it work when both are turned on at the same time?
Also, I found many tutorials that set these two params at the same time: [tutorial](https://www.philschmid.de/instruction-tune-llama-2)
"url": "https://api.github.com/repos/huggingface/transformers/issues/28158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28158/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28157/comments | https://api.github.com/repos/huggingface/transformers/issues/28157/events | https://github.com/huggingface/transformers/issues/28157 | 2,049,793,039 | I_kwDOCUB6oc56LVwP | 28,157 | AutoTokenizer is giving wrong results | {
"login": "ONE-THING-9",
"id": 123763769,
"node_id": "U_kgDOB2B8OQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123763769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ONE-THING-9",
"html_url": "https://github.com/ONE-THING-9",
"followers_url": "https://api.github.com/users/ONE-THING-9/followers",
"following_url": "https://api.github.com/users/ONE-THING-9/following{/other_user}",
"gists_url": "https://api.github.com/users/ONE-THING-9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ONE-THING-9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ONE-THING-9/subscriptions",
"organizations_url": "https://api.github.com/users/ONE-THING-9/orgs",
"repos_url": "https://api.github.com/users/ONE-THING-9/repos",
"events_url": "https://api.github.com/users/ONE-THING-9/events{/privacy}",
"received_events_url": "https://api.github.com/users/ONE-THING-9/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-12-20T04:04:59 | 2024-01-27T08:03:14 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
While using AutoTokenizer for "sarvamai/OpenHathi-7B-Hi-v0.1-Base", the tokenizer gives the wrong output.
The tokenizer splits words that are in the vocab, like
('▁विधायकों', 33821)
tokenizer.tokenize("विधायकों")
output:
['▁', 'वि', 'धा', 'य', 'कों']
Observed this with many words: बिश्नोई, एबीवीपी, ...
However, it works fine with LlamaTokenizer.
https://huggingface.co/sarvamai/OpenHathi-7B-Hi-v0.1-Base
<img width="852" alt="Screenshot 2023-12-16 at 8 42 30 PM" src="https://github.com/huggingface/transformers/assets/123763769/220734ad-8ae1-4323-a1ea-a29beb2b15a2">
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
used given code in model info page
### Expected behavior
AutoTokenizer gives the wrong output | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28157/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28156/comments | https://api.github.com/repos/huggingface/transformers/issues/28156/events | https://github.com/huggingface/transformers/issues/28156 | 2,049,744,008 | I_kwDOCUB6oc56LJyI | 28,156 | Whisper v3 dependency issue | {
"login": "lionsheep0724",
"id": 79906095,
"node_id": "MDQ6VXNlcjc5OTA2MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/79906095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lionsheep0724",
"html_url": "https://github.com/lionsheep0724",
"followers_url": "https://api.github.com/users/lionsheep0724/followers",
"following_url": "https://api.github.com/users/lionsheep0724/following{/other_user}",
"gists_url": "https://api.github.com/users/lionsheep0724/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lionsheep0724/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lionsheep0724/subscriptions",
"organizations_url": "https://api.github.com/users/lionsheep0724/orgs",
"repos_url": "https://api.github.com/users/lionsheep0724/repos",
"events_url": "https://api.github.com/users/lionsheep0724/events{/privacy}",
"received_events_url": "https://api.github.com/users/lionsheep0724/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 12 | 2023-12-20T02:53:34 | 2024-01-27T09:45:28 | null | NONE | null | ### System Info
- transformers version: transformers-4.37.0.dev0 (installed via `pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]`, as instructed [here](https://huggingface.co/openai/whisper-large-v3))
- Platform: Windows 10, WSL
- Python version: 3.10
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_path = f"./models/whisper-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_path, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_path)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
```
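As a first diagnostic step (not part of the original report), a standard-library check of which versions actually got installed can confirm whether a stale transformers copy with the old `tokenizers<0.14` pin is still on the path:

```python
import importlib.metadata as md


def installed_versions(pkgs=("transformers", "tokenizers")):
    """Report installed versions for the packages involved in the conflict."""
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            out[pkg] = "not installed"
    return out


for pkg, ver in installed_versions().items():
    print(f"{pkg}: {ver}")
```

If this shows an older transformers release than 4.37.0.dev0, the ImportError likely comes from that stale copy rather than from the dev install; reinstalling in a clean environment would then be the thing to try.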
### Expected behavior
- I'm trying to load the pretrained whisper-large-v3 model, but I guess there is a dependency issue in transformers (transformers-4.37.0.dev0).
- I got the following error: ```ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.15.0.```
- I guess transformers (4.37.0.dev0) and whisper-v3 depend on tokenizers below 0.14, but the one installed through the pip command on the official hf-whisper page is 0.15.
- When I install a lower version of tokenizers, the error ```ValueError: Non-consecutive added token ‘<|0.02|>’ found. Should have index 50365 but has index 50366 in saved vocabulary.``` occurs.
- I'm confused which tokenizers version I need to install. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28156/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28155/comments | https://api.github.com/repos/huggingface/transformers/issues/28155/events | https://github.com/huggingface/transformers/issues/28155 | 2,049,695,852 | I_kwDOCUB6oc56K-Bs | 28,155 | What is the minimum GPU video memory required to run the Mixtral-8x7B model? | {
"login": "zysNLP",
"id": 45376689,
"node_id": "MDQ6VXNlcjQ1Mzc2Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45376689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zysNLP",
"html_url": "https://github.com/zysNLP",
"followers_url": "https://api.github.com/users/zysNLP/followers",
"following_url": "https://api.github.com/users/zysNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/zysNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zysNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zysNLP/subscriptions",
"organizations_url": "https://api.github.com/users/zysNLP/orgs",
"repos_url": "https://api.github.com/users/zysNLP/repos",
"events_url": "https://api.github.com/users/zysNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/zysNLP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-20T01:54:45 | 2024-01-28T08:04:44 | 2024-01-28T08:04:44 | NONE | null | I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files; what is the minimum NVIDIA graphics card video memory required? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28155/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28154/comments | https://api.github.com/repos/huggingface/transformers/issues/28154/events | https://github.com/huggingface/transformers/issues/28154 | 2,049,630,196 | I_kwDOCUB6oc56Kt_0 | 28,154 | ffmpeg_microphone does not use current input device on Mac/Darwin | {
"login": "ruisilvestre",
"id": 1216164,
"node_id": "MDQ6VXNlcjEyMTYxNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1216164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruisilvestre",
"html_url": "https://github.com/ruisilvestre",
"followers_url": "https://api.github.com/users/ruisilvestre/followers",
"following_url": "https://api.github.com/users/ruisilvestre/following{/other_user}",
"gists_url": "https://api.github.com/users/ruisilvestre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruisilvestre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruisilvestre/subscriptions",
"organizations_url": "https://api.github.com/users/ruisilvestre/orgs",
"repos_url": "https://api.github.com/users/ruisilvestre/repos",
"events_url": "https://api.github.com/users/ruisilvestre/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruisilvestre/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-12-20T00:35:11 | 2024-01-19T09:49:14 | null | NONE | null | While going through the HF tutorials for STT [here](https://huggingface.co/learn/audio-course/chapter7/voice-assistant), I found some unexpected behaviour with the ffmpeg_microphone_live function on my Mac. I also just found someone that might be having the same issue [here](https://github.com/huggingface/transformers/issues/25183#issuecomment-1778473797) but it's an issue related to sound in Colab env so I'm creating this separately.
The input device index used is always 0, but that might not match the current system input device. Using the current system input device would be the expected behaviour (also according to the other platforms' code that all specify `default` for input device). E.g. I was working with my laptop closed (just connected to the monitor) and wanted to capture sound with my headphones but couldn't.
The solution seems to be fairly simple. Based on the [ffmpeg devices documentation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) the value `default` is also supported for audio in avfoundation, and it will match the current system input device.
I've changed this manually in audio_utils.py ffmpeg_microphone(...) and it seems to work as expected.
```
elif system == "Darwin":
    format_ = "avfoundation"
    input_ = ":default"
```
Here's the [link](https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/audio_utils.py#L68) to the same line in the HF repo.
I can make a PR for it if you want. This could also go with adding a device-index param to those microphone functions, similar to what other audio libraries do, for easier customisation, falling back to the `default` input device when unset.
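A sketch of what that param could look like (the names here are hypothetical, not the actual `audio_utils` API):

```python
def ffmpeg_mic_command(sampling_rate=16000, device="default"):
    # ":default" makes avfoundation follow the current system input device;
    # a numeric string like "2" would pin a fixed device index instead.
    return [
        "ffmpeg",
        "-f", "avfoundation",
        "-i", f":{device}",
        "-ac", "1",
        "-ar", str(sampling_rate),
        "-f", "f32le",
        "pipe:1",
    ]
```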
## Additional Info
`transformers-cli env` output
- `transformers` version: 4.35.2
- Platform: macOS-14.2-arm64-arm-64bit
- Python version: 3.10.13
- other info not relevant for this issue
Code to reproduce is the snippet in the voice-assistant tutorial. In case the 0th device is not the one you want to listen with, the code will just fail since it won't capture any audio.
```
import sys
def transcribe(chunk_length_s=5.0, stream_chunk_s=1.0):
    sampling_rate = transcriber.feature_extractor.sampling_rate
    mic = ffmpeg_microphone_live(
        sampling_rate=sampling_rate,
        chunk_length_s=chunk_length_s,
        stream_chunk_s=stream_chunk_s,
    )
    print("Start speaking...")
    for item in transcriber(mic, generate_kwargs={"max_new_tokens": 128}):
        sys.stdout.write("\033[K")
        print(item["text"], end="\r")
        if not item["partial"][0]:
            break
    return item["text"]
```
According to [ffmpeg devices documentation](https://ffmpeg.org/ffmpeg-devices.html#Examples) you can print out your system input devices using
`ffmpeg -f avfoundation -list_devices true -i ""`
For me this gives:
```
[...]
[AVFoundation indev @ 0x7fcc33004d00] AVFoundation video devices:
[AVFoundation indev @ 0x7fcc33004d00] [0] FaceTime HD Camera
[AVFoundation indev @ 0x7fcc33004d00] [1] Rui Silvestre’s iPhone Camera
[AVFoundation indev @ 0x7fcc33004d00] [2] Capture screen 0
[AVFoundation indev @ 0x7fcc33004d00] AVFoundation audio devices:
[AVFoundation indev @ 0x7fcc33004d00] [0] MacBook Pro Microphone
[AVFoundation indev @ 0x7fcc33004d00] [1] Rui Silvestre’s iPhone Microphone
[AVFoundation indev @ 0x7fcc33004d00] [2] AirPods Pro
[AVFoundation indev @ 0x7fcc33004d00] [3] Microsoft Teams Audio
```
The audio device at index 0 is my MacBook mic but I currently have my AirPods on and would want to use that as my input device. I've also noticed the indexes change fairly frequently depending on which devices are nearby.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28154/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28153/comments | https://api.github.com/repos/huggingface/transformers/issues/28153/events | https://github.com/huggingface/transformers/issues/28153 | 2,049,517,555 | I_kwDOCUB6oc56KSfz | 28,153 | Annotations not being transformed after padding on Deformable DETR preprocessing | {
"login": "Tengoles",
"id": 26772529,
"node_id": "MDQ6VXNlcjI2NzcyNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/26772529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tengoles",
"html_url": "https://github.com/Tengoles",
"followers_url": "https://api.github.com/users/Tengoles/followers",
"following_url": "https://api.github.com/users/Tengoles/following{/other_user}",
"gists_url": "https://api.github.com/users/Tengoles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tengoles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tengoles/subscriptions",
"organizations_url": "https://api.github.com/users/Tengoles/orgs",
"repos_url": "https://api.github.com/users/Tengoles/repos",
"events_url": "https://api.github.com/users/Tengoles/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tengoles/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "htt... | null | 2 | 2023-12-19T22:13:11 | 2024-01-30T10:12:54 | null | NONE | null | ### System Info
@amyeroberts
Maybe I'm missing something but it seems like the annotations are not being transformed accordingly after applying pad to a batch of images:
https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/deformable_detr/image_processing_deformable_detr.py#L1330
Is this dealt with further down the training pipeline? When I render the output annotations of that method (`encoded_inputs["labels"]`), they are incorrect for the images of the batch that required padding.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
encoding = processor(images=imgs, annotations=targets, return_tensors="pt",
                     do_pad=True)
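To illustrate what I mean by the annotations needing a transform after padding: if boxes are normalized to the image size and padding is added bottom/right, the absolute pixel coordinates stay the same but the normalization denominators grow. A hedged sketch (a hypothetical helper, not the actual transformers code):

```python
def adjust_normalized_boxes(boxes, image_size, padded_size):
    # boxes: (cx, cy, w, h) normalized to image_size (height, width);
    # padding is assumed bottom/right, so pixel coordinates are unchanged
    # and only the normalization denominators grow.
    ih, iw = image_size
    ph, pw = padded_size
    sx, sy = iw / pw, ih / ph
    return [(cx * sx, cy * sy, w * sx, h * sy) for cx, cy, w, h in boxes]

print(adjust_normalized_boxes([(0.5, 0.5, 1.0, 1.0)], (100, 100), (100, 200)))
# [(0.25, 0.5, 0.5, 1.0)]
```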
### Expected behavior
Annotations should be transformed after padding, just like they are transformed when resize and rescale are applied on the previous lines of the same method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28153/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28152/comments | https://api.github.com/repos/huggingface/transformers/issues/28152/events | https://github.com/huggingface/transformers/pull/28152 | 2,049,484,831 | PR_kwDOCUB6oc5iahED | 28,152 | remove cpu dockerfiles to fix #28148 | {
"login": "evelynmitchell",
"id": 1007591,
"node_id": "MDQ6VXNlcjEwMDc1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1007591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evelynmitchell",
"html_url": "https://github.com/evelynmitchell",
"followers_url": "https://api.github.com/users/evelynmitchell/followers",
"following_url": "https://api.github.com/users/evelynmitchell/following{/other_user}",
"gists_url": "https://api.github.com/users/evelynmitchell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evelynmitchell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evelynmitchell/subscriptions",
"organizations_url": "https://api.github.com/users/evelynmitchell/orgs",
"repos_url": "https://api.github.com/users/evelynmitchell/repos",
"events_url": "https://api.github.com/users/evelynmitchell/events{/privacy}",
"received_events_url": "https://api.github.com/users/evelynmitchell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-19T21:46:27 | 2023-12-20T14:29:46 | 2023-12-20T04:53:49 | NONE | null | # What does this PR do?
Removes unneeded CPU Dockerfiles.
Fixes #28148
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/28148
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Not needed; this PR only removes an unnecessary item.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28152/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28152",
"html_url": "https://github.com/huggingface/transformers/pull/28152",
"diff_url": "https://github.com/huggingface/transformers/pull/28152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28152.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28151/comments | https://api.github.com/repos/huggingface/transformers/issues/28151/events | https://github.com/huggingface/transformers/pull/28151 | 2,049,467,164 | PR_kwDOCUB6oc5iadTq | 28,151 | 4D mask documentation updates | {
"login": "poedator",
"id": 24738311,
"node_id": "MDQ6VXNlcjI0NzM4MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poedator",
"html_url": "https://github.com/poedator",
"followers_url": "https://api.github.com/users/poedator/followers",
"following_url": "https://api.github.com/users/poedator/following{/other_user}",
"gists_url": "https://api.github.com/users/poedator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poedator/subscriptions",
"organizations_url": "https://api.github.com/users/poedator/orgs",
"repos_url": "https://api.github.com/users/poedator/repos",
"events_url": "https://api.github.com/users/poedator/events{/privacy}",
"received_events_url": "https://api.github.com/users/poedator/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-19T21:32:22 | 2024-01-19T12:27:49 | null | CONTRIBUTOR | null | following https://github.com/huggingface/transformers/pull/27539 this PR adds updates to transformers documentation to reflect possibility of utilizing 4D masks.
Plan:
- add updates for Llama model docstring(s)
- identify other models that can use 4D masks in their present form (which requires the ability to accept a custom `position_ids` argument) and update their docstrings. Classes that need updates:
- Falcon Model
- [TODO identify more]
- update code comments that may need corrections, like cases where the mask may now be either 2D or 4D. One example is based on [this comment](https://github.com/huggingface/transformers/pull/27539#issuecomment-1863285474) by @shentianxiao
Update 20.12.2023:
To find out which models require docstring changes, I scanned all model classes in transformers using `inspect`:
- excluded tf and jax classes
- excluded models without `position_ids` argument in `.forward()` - can't use 4D mask effectively
- excluded models that do not use `_prepare_4d_attention_mask` method - need different code change to use 4D mask
- excluded multi-modal models (clip, clvp, vit, bark, git)
What is left is `LlamaModel`, `FalconModel` and `XGLMModel`.
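The scan logic was roughly the following (a simplified sketch; dummy classes stand in for the real model classes):

```python
import inspect

# Dummy stand-ins for real transformers model classes:
class FakeLlamaModel:
    def forward(self, input_ids, attention_mask=None, position_ids=None):
        pass

class FakeViTModel:  # no position_ids -> can't use a 4D mask effectively
    def forward(self, pixel_values, attention_mask=None):
        pass

def accepts_position_ids(model_cls):
    # Check whether forward() takes a position_ids argument
    sig = inspect.signature(model_cls.forward)
    return "position_ids" in sig.parameters

candidates = [cls.__name__ for cls in (FakeLlamaModel, FakeViTModel)
              if accepts_position_ids(cls)]
print(candidates)  # ['FakeLlamaModel']
```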
cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28151/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28151",
"html_url": "https://github.com/huggingface/transformers/pull/28151",
"diff_url": "https://github.com/huggingface/transformers/pull/28151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28151.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28150/comments | https://api.github.com/repos/huggingface/transformers/issues/28150/events | https://github.com/huggingface/transformers/issues/28150 | 2,049,441,164 | I_kwDOCUB6oc56J_2M | 28,150 | Codellama will not stop generating at EOS | {
"login": "bin123apple",
"id": 99925255,
"node_id": "U_kgDOBfS9Bw",
"avatar_url": "https://avatars.githubusercontent.com/u/99925255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bin123apple",
"html_url": "https://github.com/bin123apple",
"followers_url": "https://api.github.com/users/bin123apple/followers",
"following_url": "https://api.github.com/users/bin123apple/following{/other_user}",
"gists_url": "https://api.github.com/users/bin123apple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bin123apple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bin123apple/subscriptions",
"organizations_url": "https://api.github.com/users/bin123apple/orgs",
"repos_url": "https://api.github.com/users/bin123apple/repos",
"events_url": "https://api.github.com/users/bin123apple/events{/privacy}",
"received_events_url": "https://api.github.com/users/bin123apple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T21:10:17 | 2023-12-20T22:24:46 | 2023-12-20T21:59:43 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.3
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: A100
- Using distributed or parallel set-up in script?: DeepSpeed ZeRO Stage 3; 7 GPUs data parallelism training.
### Who can help?
@ArthurZucker @youn
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hey! Could you help check the reason for this very weird issue? Thanks a lot!
I am using some GPT-4 generated answers to finetune the codellama-13b model.
One data example in my dataset looks like this (others have a similar format):
` The original fortran code: program DRB093_doall2_collapse_orig_no\n use omp_lib\n use DRB093\n implicit none\n\n integer :: len, i, j\n len = 100\n\n allocate (a(len,len))\n\n !$omp parallel do collapse(2)\n do i = 1, len\n do j = 1, len\n a(i,j) = a(i,j)+1\n end do\n end do\n !$omp end parallel do\nend program. `
`The translated C++ code: #include <stdio.h>\nint a[100][100];\nint main()\n{\n int i,j;\n#pragma omp parallel for collapse(2)\n for (i=0;i<100;i++)\n for (j=0;j<100;j++)\n a[i][j]=a[i][j]+1;\n return 0;\n}\n\n`
I used the supervised finetuning scripts from DeepSpeed (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/) to finetune codellama-13b.
And my inference script looks like this:
```
from transformers import AutoModelForCausalLM, AutoConfig,CodeLlamaTokenizer
dump_device = f'cuda:{device_num}'
model_config = AutoConfig.from_pretrained(model_name_or_path)
model_class = AutoModelForCausalLM.from_config(model_config)
model = model_class.from_pretrained(model_name_or_path,
                                    from_tf=bool(".ckpt" in model_name_or_path),
                                    config=model_config).to(dump_device)
tokenizer = CodeLlamaTokenizer.from_pretrained(model_name_or_path,fast_tokenizer=True)
model.config.end_token_id = tokenizer.eos_token_id
model.config.pad_token_id = model.config.eos_token_id
model.resize_token_embeddings(len(tokenizer))
tokenizer.pad_token = tokenizer.eos_token
if len(input_prompt) < max_prompt_length:
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    # attention_mask = input_ids.ne(tokenizer.pad_token_id).float().to(dump_device)
    outputs = model.generate(input_ids, max_length=max_tokens, num_return_sequences=n, eos_token_id=tokenizer.eos_token_id)
    generated_ans = []
    for i in range(n):
        predicted_answer = tokenizer.decode(outputs[i], skip_special_tokens=False)
        generated_ans.append(predicted_answer[len(input_prompt):])
else:
    print("This task is out of my max prompt length limitation!")
```
It looks like codellama-13b does pick up some new knowledge about Fortran-to-C++ translation by learning from this dataset. It starts to give the correct translation instead of talking randomly. But it keeps repeating the same thing in its answer. One example looks like this:
My Question:
```
Please help me to translate the following fortran code to C++:
program DRB096_doall2_taskloop_collapse_orig_no\n use omp_lib
use DRB096\n implicit none\n\n integer :: len, i, j\n len = 100\n\n allocate (a(len,len))\n\n !$omp parallel\n !$omp single\n !$omp taskloop collapse(2)\n do i = 1, len\n do j = 1, len\n a(i,j) = a(i,j)+1\n end do\n end do\n !$omp end taskloop\n !$omp end single\n !$omp end parallel\n\n print 100, a(50,50)\n 100 format ('a(50,50) =',i3)\n\nend program
```
Its answer:
```
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
<|enoftext|>
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
<|enoftext|>
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nin
```
It will include a `<|enoftext|>` at the end of the correct generated answer and keep repeating the answer again and again until it reaches the `max_length` limitation.
This is very weird, because `<|enoftext|>` is actually not included in the llama tokenizer; it is the EOS token for GPT-4. For the llama tokenizer the EOS token is `</s>`. In the beginning, I thought it might be because my dataset includes a lot of `<|enoftext|>` tokens, but I checked the whole dataset and there is actually no `<|enoftext|>` inside... And even if there were some `<|enoftext|>` tokens inside the dataset, I think codellama should still generate `</s>` at the suitable place instead of repeating the same answer again and again. Does it mean that I have to add a `</s>` at the end of each example in my dataset while finetuning the model? Or is there anything wrong inside my inference script? And could you help explain where this `<|enoftext|>` comes from? My dataset does not contain this token and it is also not inside the llama tokenizer... I am very confused about it.
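In case appending an explicit EOS is indeed the fix, this is a minimal sketch of what I would change in my dataset construction (my assumption, not verified against this pipeline):

```python
EOS = "</s>"  # the Llama/CodeLlama EOS string

def build_training_text(prompt, answer, eos=EOS):
    # Append the real EOS token so the model can learn where to stop,
    # instead of running until the max length limit.
    return prompt + answer + eos

example = build_training_text("The original fortran code: ...",
                              "The translated C++ code: ...")
assert example.endswith("</s>")
```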
Thanks a lot for all the help!
### Expected behavior
I expect the codellama model to stop at the correct place instead of repeating the same answer and including a `<|enoftext|>`.
Expected answer:
```
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28150/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28149/comments | https://api.github.com/repos/huggingface/transformers/issues/28149/events | https://github.com/huggingface/transformers/pull/28149 | 2,049,424,841 | PR_kwDOCUB6oc5iaUBm | 28,149 | Remove deprecated CPU dockerfiles | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-19T20:59:08 | 2023-12-24T19:45:46 | 2023-12-20T04:51:36 | CONTRIBUTOR | null | This PR fixes #28148
Originally a PR was submitted here: https://github.com/huggingface/transformers/pull/28084 but per @ydshieh's assessment, those Dockerfiles are no longer being maintained and should be removed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28149/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28149",
"html_url": "https://github.com/huggingface/transformers/pull/28149",
"diff_url": "https://github.com/huggingface/transformers/pull/28149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28149.patch",
"merged_at": "2023-12-20T04:51:36"
} |
https://api.github.com/repos/huggingface/transformers/issues/28148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28148/comments | https://api.github.com/repos/huggingface/transformers/issues/28148/events | https://github.com/huggingface/transformers/issues/28148 | 2,049,419,124 | I_kwDOCUB6oc56J6d0 | 28,148 | CPU Dockerfile(s) are deprecated and need to be removed. | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-19T20:54:55 | 2023-12-20T04:51:37 | 2023-12-20T04:51:37 | CONTRIBUTOR | null | Please remove deprecated CPU Dockerfile(s) since they cause customer confusion.
_Originally posted by @ydshieh in https://github.com/huggingface/transformers/issues/28084#issuecomment-1862419041_
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28148/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28147/comments | https://api.github.com/repos/huggingface/transformers/issues/28147/events | https://github.com/huggingface/transformers/issues/28147 | 2,049,309,452 | I_kwDOCUB6oc56JfsM | 28,147 | logit too slow compared to generate | {
"login": "enochlev",
"id": 47466848,
"node_id": "MDQ6VXNlcjQ3NDY2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/47466848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enochlev",
"html_url": "https://github.com/enochlev",
"followers_url": "https://api.github.com/users/enochlev/followers",
"following_url": "https://api.github.com/users/enochlev/following{/other_user}",
"gists_url": "https://api.github.com/users/enochlev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enochlev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enochlev/subscriptions",
"organizations_url": "https://api.github.com/users/enochlev/orgs",
"repos_url": "https://api.github.com/users/enochlev/repos",
"events_url": "https://api.github.com/users/enochlev/events{/privacy}",
"received_events_url": "https://api.github.com/users/enochlev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T19:29:04 | 2024-01-19T12:31:11 | 2024-01-19T12:31:11 | NONE | null | ### System Info
I am trying to construct a library for constrained generation. The goal, hopefully, is to skip generating text when there is only one possible next token.
The problem I am having is that the logits call is way too slow to allow constrained generation to be of any use. Is there a way to speed up getting the logits?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is an example that might work (my actual working code is in neuronx).
```
import torch
from transformers import LlamaForCausalLM, AutoTokenizer
import time
# Load the model and tokenizer
model_name = "meta-llama/Llama-2-7b-hf"
model = LlamaForCausalLM.from_pretrained(model_name,device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
import time
num_iterations = 10
start_time = time.time()
for _ in range(num_iterations):
    logits = generator.neuron_model.forward(torch.tensor(generator.encode(input_prompt), dtype=torch.long)).squeeze()
    softmax_probs = torch.nn.functional.softmax(logits, dim=-1)
    next_token_index = torch.multinomial(softmax_probs, 1).item()
end_time = time.time()
logits_time = end_time - start_time
print(f"Time taken for generating text using logits: {logits_time / num_iterations} seconds")
# Timing the generation using the generate_text method
start_time = time.time()
generated_text = generator.generate(input_prompt=input_prompt,max_length=10)
end_time = time.time()
generate_time = end_time - start_time
print(f"Time taken for generating text using generate_text: {generate_time / num_iterations} seconds")
```
Here is the constrained generation code:
```python
from transformers import AutoTokenizer
# LlamaForSampling comes from the transformers-neuronx package (the exact
# import path may differ depending on the installed version)
from transformers_neuronx.llama.model import LlamaForSampling

# model_path is assumed to be defined elsewhere
neuron_model = LlamaForSampling.from_pretrained(model_path + 'llama-2-7b-vicuna', batch_size=1, tp_degree=6, amp='bf16', context_length_estimate=[4000], n_positions=4000)
neuron_model.to_neuron()
tokenizer = AutoTokenizer.from_pretrained(model_path + 'llama-2-7b-vicuna')

import torch
import torch.nn.functional as F
import numpy as np


class ConstrainedTextGenerator:
    def __init__(self, sequences, neuron_model, eos_token_id=2):
        self.neuron_model = neuron_model
        self.eos_token_id = self.encode("</s>")
        self.tree = self.preprocess(sequences)

    def preprocess(self, sequences):
        tree = {}
        for sequence in sequences:
            sequence_ids = self.encode(sequence)
            current_tree = tree
            for token in sequence_ids:
                token_item = token.item()  # Convert tensor to int
                if token_item not in current_tree:
                    current_tree[token_item] = {}
                current_tree = current_tree[token_item]
            # Add </s> to mark the end of each sequence
            eos_token = self.eos_token_id.item()  # Convert tensor to int
            if eos_token not in current_tree:
                current_tree[eos_token] = {}
        return tree

    def encode(self, text):
        # Replace this with your encoding logic, assuming it returns a list of token_ids
        return tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")[0]

    def generate_text(self, input_prompt=""):
        input_ids_list = [[]]
        current_tree = self.tree
        # Encode the input prompt
        prompt_ids = self.encode(input_prompt)
        # Append prompt_ids to input_ids_list
        input_ids_list[0].extend(prompt_ids.tolist())
        while True:
            # Check if there are multiple options at the current position
            if len(current_tree) > 1:
                # Get the indices of the available tokens
                available_indices = [list(current_tree.keys()).index(token) for token in current_tree.keys()]
                # Choose the token based on the softmax probabilities
                logits = self.neuron_model.forward(torch.tensor(input_ids_list, dtype=torch.long)).squeeze()
                softmax_probs = torch.nn.functional.softmax(logits[available_indices], dim=-1)
                # Sample from the softmax probabilities
                next_token_index = torch.multinomial(softmax_probs, 1).item()
                next_token = list(current_tree.keys())[available_indices[next_token_index]]
            else:
                # If there's only one option, skip forward and fill it in
                next_token = list(current_tree.keys())[0]
            input_ids_list[-1].append(next_token)
            # Check if it's the end of a sequence
            if next_token == self.eos_token_id.item():
                break
            else:
                current_tree = current_tree.get(next_token, {})
        # Remove the empty sequence at the end, if any
        if not input_ids_list[-1]:
            input_ids_list.pop()
        input_ids = torch.tensor([token for seq in input_ids_list for token in seq], dtype=torch.long)
        generated_text = ' '.join(map(str, input_ids.tolist()))
        return input_ids
```
### Expected behavior
I expect logits and generate to have the same generation speed per token | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28147/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28146/comments | https://api.github.com/repos/huggingface/transformers/issues/28146/events | https://github.com/huggingface/transformers/pull/28146 | 2,049,257,155 | PR_kwDOCUB6oc5iZvBJ | 28,146 | Even more TF test fixes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-19T18:51:43 | 2023-12-21T15:14:48 | 2023-12-21T15:14:47 | MEMBER | null | This PR hopefully fixes the last remaining issues from the `build()` PR and gets the CI back to normal! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28146/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28146/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28146",
"html_url": "https://github.com/huggingface/transformers/pull/28146",
"diff_url": "https://github.com/huggingface/transformers/pull/28146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28146.patch",
"merged_at": "2023-12-21T15:14:47"
} |
https://api.github.com/repos/huggingface/transformers/issues/28145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28145/comments | https://api.github.com/repos/huggingface/transformers/issues/28145/events | https://github.com/huggingface/transformers/pull/28145 | 2,049,240,601 | PR_kwDOCUB6oc5iZrVp | 28,145 | [docs] Trainer docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-19T18:40:23 | 2023-12-20T18:37:27 | 2023-12-20T18:37:23 | MEMBER | null | Part 2 of #27986 to finish cleaning up the `Trainer` API docs. This includes:
- moving the CUDA extension installation problems to the performance and scalability debugging [doc](https://huggingface.co/docs/transformers/main/en/debugging) where it is more appropriate
- GPU selection has its own section in the multiple GPU training [doc](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many)
- spin out the FSDP sections into their own docs
- add a link from the Trainer guide to the FSDP guide | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28145/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28145",
"html_url": "https://github.com/huggingface/transformers/pull/28145",
"diff_url": "https://github.com/huggingface/transformers/pull/28145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28145.patch",
"merged_at": "2023-12-20T18:37:23"
} |
https://api.github.com/repos/huggingface/transformers/issues/28144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28144/comments | https://api.github.com/repos/huggingface/transformers/issues/28144/events | https://github.com/huggingface/transformers/pull/28144 | 2,049,149,365 | PR_kwDOCUB6oc5iZXQ_ | 28,144 | Fix ONNX export for causal LM sequence classifiers by removing reverse indexing | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-12-19T17:43:36 | 2023-12-22T10:33:44 | 2023-12-22T10:33:44 | CONTRIBUTOR | null | # What does this PR do?
Follow-up to https://github.com/huggingface/transformers/pull/27450 and another step toward fixing https://github.com/huggingface/optimum/issues/1527. ONNX implements indexing using a combination of its own operators, and when reverse indexing is used (e.g., -1 to indicate one element from the right side of an array), it can produce incorrect results (see [PyTorch's ONNX export code](https://github.com/pytorch/pytorch/blob/71bedc3a69e3203fd8f76a68ecf2bd7c58d2e13e/torch/onnx/symbolic_opset9.py#L5859-L5865)). In practice, this can cause the batch dimension to get shuffled.
Causal LM sequence classifiers were previously using `-1` for the last token. Adding `sequence_lengths = torch.where(sequence_lengths >= 0, sequence_lengths, input_ids.shape[-1] - 1)` effectively removes the reverse indexing.
While this could be fixed in https://github.com/huggingface/optimum by forcing the inputs used to trace the graph to contain a pad token and avoiding reverse indexing, it seems better to fix in `transformers` with the added benefit of bringing the code in line with TensorFlow implementations of the same code (e.g., https://github.com/huggingface/transformers/pull/25085/files#diff-7c6fdd54ac4b8ce0c09bb17da15f176d3e5827df39dd8234fd802631e99ef38dR801-R804)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker, @amyeroberts, @younesbelkada (CC @fxmarty)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28144/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28144",
"html_url": "https://github.com/huggingface/transformers/pull/28144",
"diff_url": "https://github.com/huggingface/transformers/pull/28144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28144.patch",
"merged_at": "2023-12-22T10:33:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28143/comments | https://api.github.com/repos/huggingface/transformers/issues/28143/events | https://github.com/huggingface/transformers/pull/28143 | 2,049,060,725 | PR_kwDOCUB6oc5iZDjp | 28,143 | [docs] Fix mistral link in mixtral.md | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T16:55:38 | 2023-12-19T18:41:06 | 2023-12-19T18:34:14 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix the mistral link on the **`Mixtral`** docs page.
The link in this section generates a 404 error:
> The following implementation details are shared with Mistral AI’s first model [mistral](https://huggingface.co/docs/transformers/main/en/model_doc/~models/doc/mistral):
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28143/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28143",
"html_url": "https://github.com/huggingface/transformers/pull/28143",
"diff_url": "https://github.com/huggingface/transformers/pull/28143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28143.patch",
"merged_at": "2023-12-19T18:34:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28142/comments | https://api.github.com/repos/huggingface/transformers/issues/28142/events | https://github.com/huggingface/transformers/pull/28142 | 2,049,058,176 | PR_kwDOCUB6oc5iZC_e | 28,142 | Fix FA2 integration | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-19T16:54:00 | 2023-12-26T12:33:27 | 2023-12-20T08:55:07 | CONTRIBUTOR | null | # What does this PR do?
1. Fix FA2 integration.
Issues with the current FA2 integration.
1. It makes providing `torch_dtype` to the `from_pretrained` class method mandatory. This leads to the whole model being loaded in half-precision which leads to unstable training because it would result in pure half precision training instead of mixed-precision training. Please refer https://github.com/huggingface/transformers/issues/26498#issuecomment-1812528717 for more details.
Currently, the main branch throws the error below when half precision is not passed to `torch_dtype`, which shouldn't be the case.
```bash
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
...
File /raid/sourab/transformers/src/transformers/modeling_utils.py:1422, in PreTrainedModel._check_and_enable_flash_attn_2(cls, config, torch_dtype, device_map, check_device_map, hard_check_only)
1418 logger.warning(
1419 "You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour"
1420 )
1421 elif torch_dtype is not None and torch_dtype not in [torch.float16, torch.bfloat16]:
-> 1422 raise ValueError(
1423 f"Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed {torch_dtype}, this might lead to"
1424 " unexpected behaviour."
1425 )
1427 # The check `torch.empty(0).device.type != "cuda"` is needed as the model may be initialized after `torch.set_default_device` has been called,
1428 # or the model may be initialized under the context manager `with torch.device("cuda"):`.
1429 if check_device_map and device_map is None and torch.empty(0).device.type != "cuda":
ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.
```
2. As a workaround, one would pass `torch_dtype`, then recast the model to float32 and try to train, but would then get the following error from the Flash Attention library:
```
File /raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py:79, in _flash_attn_varlen_forward(q, k, v, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale, causal, window_size, return_softmax)
77 maybe_contiguous = lambda x: x.contiguous() if x.stride(-1) != 1 else x
78 q, k, v = [maybe_contiguous(x) for x in (q, k, v)]
---> 79 out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
80 q,
81 k,
82 v,
83 None,
84 cu_seqlens_q,
85 cu_seqlens_k,
86 max_seqlen_q,
87 max_seqlen_k,
88 dropout_p,
89 softmax_scale,
90 False,
91 causal,
92 window_size[0],
93 window_size[1],
94 return_softmax,
95 None,
96 )
97 # if out.isnan().any() or softmax_lse.isnan().any():
98 # breakpoint()
99 return out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state
RuntimeError: FlashAttention only support fp16 and bf16 data type
```
3. Now, to overcome that, one would need to cast the trainable params to float32 and all the other params to float16; this is only possible with PEFT approaches. For normal fine-tuning, things end here, leaving no way to use Flash Attention correctly. But even in the PEFT setup, this change leads to unstable learning that plateaus at a high loss.

All these issues are being resolved by this PR. Notice the above graph with the before and after PR logs. With this PR, the loss is similar to the case when not using FA2.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28142/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28142/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28142",
"html_url": "https://github.com/huggingface/transformers/pull/28142",
"diff_url": "https://github.com/huggingface/transformers/pull/28142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28142.patch",
"merged_at": "2023-12-20T08:55:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28141/comments | https://api.github.com/repos/huggingface/transformers/issues/28141/events | https://github.com/huggingface/transformers/pull/28141 | 2,049,005,796 | PR_kwDOCUB6oc5iY3WB | 28,141 | Update VITS modeling to enable ONNX export | {
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T16:23:50 | 2024-01-05T16:52:38 | 2024-01-05T16:52:32 | COLLABORATOR | null | This PR enables the ONNX export of VITS models in Optimum (https://github.com/huggingface/optimum/pull/1607); currently the export fails due to [a cast operator added before the pow operator](https://github.com/pytorch/pytorch/blob/v2.1.2/torch/onnx/symbolic_opset9.py#L3382) in the model graph, resulting in an issue when concatenating two values of different data types
cc @xenova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28141/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28141",
"html_url": "https://github.com/huggingface/transformers/pull/28141",
"diff_url": "https://github.com/huggingface/transformers/pull/28141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28141.patch",
"merged_at": "2024-01-05T16:52:32"
} |
https://api.github.com/repos/huggingface/transformers/issues/28140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28140/comments | https://api.github.com/repos/huggingface/transformers/issues/28140/events | https://github.com/huggingface/transformers/issues/28140 | 2,048,820,728 | I_kwDOCUB6oc56HoX4 | 28,140 | GPU or MPS error when running run_clm.py | {
"login": "oscar-defelice",
"id": 49638680,
"node_id": "MDQ6VXNlcjQ5NjM4Njgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49638680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscar-defelice",
"html_url": "https://github.com/oscar-defelice",
"followers_url": "https://api.github.com/users/oscar-defelice/followers",
"following_url": "https://api.github.com/users/oscar-defelice/following{/other_user}",
"gists_url": "https://api.github.com/users/oscar-defelice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscar-defelice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscar-defelice/subscriptions",
"organizations_url": "https://api.github.com/users/oscar-defelice/orgs",
"repos_url": "https://api.github.com/users/oscar-defelice/repos",
"events_url": "https://api.github.com/users/oscar-defelice/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscar-defelice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-19T14:52:07 | 2024-01-28T08:04:47 | 2024-01-28T08:04:47 | CONTRIBUTOR | null | ### System Info
## System Info
```bash
- `transformers` version: 4.37.0.dev0
- Platform: macOS-14.2-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
---
Although I am pasting this output, I get the same issue when running on Ubuntu with 2 GPUs.
### Who can help?
@ArthurZucker @muellerz
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I run
```bash
python run_clm.py --model_name_or_path nferruz/ProtGPT2 --train_file data/fine_tune_data.txt --tokenizer_name nferruz/ProtGPT2 --do_train --output_dir models/ProtGPT/output --learning_rate 1e-06
```
And no matter what I try with batch_size and learning rate I always get
```bash
RuntimeError: MPS backend out of memory (MPS allocated: 78.40 GB, other allocations: 2.98 GB, max allowed: 81.60 GB). Tried to allocate 320.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```
### Expected behavior
It should work and finetune the model =) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28140/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28139/comments | https://api.github.com/repos/huggingface/transformers/issues/28139/events | https://github.com/huggingface/transformers/issues/28139 | 2,048,708,752 | I_kwDOCUB6oc56HNCQ | 28,139 | `from_pretrained` is extremely slow when deepspeed zero3 is enabled | {
"login": "Jingru",
"id": 4298653,
"node_id": "MDQ6VXNlcjQyOTg2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4298653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jingru",
"html_url": "https://github.com/Jingru",
"followers_url": "https://api.github.com/users/Jingru/followers",
"following_url": "https://api.github.com/users/Jingru/following{/other_user}",
"gists_url": "https://api.github.com/users/Jingru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jingru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jingru/subscriptions",
"organizations_url": "https://api.github.com/users/Jingru/orgs",
"repos_url": "https://api.github.com/users/Jingru/repos",
"events_url": "https://api.github.com/users/Jingru/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jingru/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2023-12-19T13:53:54 | 2024-01-27T08:03:19 | null | NONE | null | ### System Info
pytorch: 2.0.1+cu118
transformers: 4.33.3
deepspeed: 0.12.5
### Who can help?
@ArthurZucker @younesbelkada @pac
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run command `torchrun --nnodes 1 --nproc-per-node 8 --rdzv-endpoint=localhost:35000 test.py`
And my script `test.py` is as follows:
```
import deepspeed
from transformers.deepspeed import HfDeepSpeedConfig
from transformers import AutoModelForCausalLM
deepspeed.init_distributed()
ds_config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 4,
"steps_per_print": 10,
"zero_optimization": {
"stage": 3,
"offload_param": {"device": "cpu"},
"offload_optimizer": {"device": "cpu"},
"stage3_param_persistence_threshold": 10000.0,
"stage3_max_live_parameters": 30000000.0,
"stage3_prefetch_bucket_size": 30000000.0,
"memory_efficient_linear": False,
},
"fp16": {"enabled": True, "loss_scale_window": 100},
"gradient_clipping": 1.0,
"prescale_gradients": False,
"wall_clock_breakdown": False,
"hybrid_engine": {
"enabled": True,
"max_out_tokens": 512,
"inference_tp_size": 1,
"release_inference_cache": False,
"pin_parameters": True,
"tp_gather_partition_size": 8,
},
}
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModelForCausalLM.from_pretrained(
"../llama_actor", from_tf=False, trust_remote_code=False
)
```
In addition, the pretrained model is saved by `transformers==4.31.0`.
2. This command hangs for over 1800s and then fails with an NCCL timeout error.
### Expected behavior
Model is loaded in a few minutes and this command should not hang. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28139/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28138/comments | https://api.github.com/repos/huggingface/transformers/issues/28138/events | https://github.com/huggingface/transformers/pull/28138 | 2,048,585,744 | PR_kwDOCUB6oc5iXa1F | 28,138 | HF_ENDPOINT value affected in hub.py cached_file | {
"login": "fenglui",
"id": 141198,
"node_id": "MDQ6VXNlcjE0MTE5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/141198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fenglui",
"html_url": "https://github.com/fenglui",
"followers_url": "https://api.github.com/users/fenglui/followers",
"following_url": "https://api.github.com/users/fenglui/following{/other_user}",
"gists_url": "https://api.github.com/users/fenglui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fenglui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fenglui/subscriptions",
"organizations_url": "https://api.github.com/users/fenglui/orgs",
"repos_url": "https://api.github.com/users/fenglui/repos",
"events_url": "https://api.github.com/users/fenglui/events{/privacy}",
"received_events_url": "https://api.github.com/users/fenglui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T12:42:54 | 2023-12-25T19:30:18 | 2023-12-25T19:30:18 | NONE | null | # What does this PR do?
Use the os.environ.get("HF_ENDPOINT", HUGGINGFACE_CO_RESOLVE_ENDPOINT) value as the endpoint param, so that the HF_ENDPOINT value takes effect when downloading files via the cached_file method.
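For illustration, a minimal stand-alone sketch of the env-var fallback pattern this PR applies. The constant name mirrors the one mentioned in the description; the mirror URL is the one from the checklist:

```python
import os

# Constant name taken from this PR's description; value is the Hub default.
HUGGINGFACE_CO_RESOLVE_ENDPOINT = "https://huggingface.co"

def resolve_endpoint() -> str:
    # Prefer HF_ENDPOINT from the environment, falling back to the library default.
    return os.environ.get("HF_ENDPOINT", HUGGINGFACE_CO_RESOLVE_ENDPOINT)

os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
print(resolve_endpoint())  # prints https://hf-mirror.com
```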
Fixes # (issue)
## Before submitting
- [ ] `os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"` may not take effect
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28138/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28138",
"html_url": "https://github.com/huggingface/transformers/pull/28138",
"diff_url": "https://github.com/huggingface/transformers/pull/28138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28138.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28137/comments | https://api.github.com/repos/huggingface/transformers/issues/28137/events | https://github.com/huggingface/transformers/issues/28137 | 2,048,495,611 | I_kwDOCUB6oc56GY_7 | 28,137 | Fail to upload models to hub | {
"login": "minghao-wu",
"id": 17817832,
"node_id": "MDQ6VXNlcjE3ODE3ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/17817832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minghao-wu",
"html_url": "https://github.com/minghao-wu",
"followers_url": "https://api.github.com/users/minghao-wu/followers",
"following_url": "https://api.github.com/users/minghao-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/minghao-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minghao-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minghao-wu/subscriptions",
"organizations_url": "https://api.github.com/users/minghao-wu/orgs",
"repos_url": "https://api.github.com/users/minghao-wu/repos",
"events_url": "https://api.github.com/users/minghao-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/minghao-wu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-19T11:45:49 | 2023-12-20T09:36:26 | 2023-12-20T09:35:40 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-4.18.0-513.9.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was using the following snippet to push my models to the Hub (I cannot successfully push my models using `.push_to_hub()` on my Slurm cluster).
```
import os
import huggingface_hub
huggingface_hub.login(token="XXX")
model_name = os.path.basename(os.path.dirname(args.ckpt))
repo_id = f"minghaowu/"+model_name
print("uploading to", repo_id)
api = huggingface_hub.HfApi()
api.create_repo(
repo_id=repo_id,
repo_type="model",
private=True,
exist_ok=True,
)
api.upload_folder(
folder_path=args.ckpt,
repo_id=repo_id,
repo_type="model",
)
```
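For illustration, a stdlib-only sketch (a hypothetical helper, not a huggingface_hub API) that coerces the offending field to a string before upload; the field path follows the commit-endpoint error message quoted in this issue:

```python
# Hypothetical helper: ensure model-index dataset configs are strings,
# since the Hub commit endpoint rejects non-string values for this field.
def coerce_dataset_configs(card_data: dict) -> dict:
    for entry in card_data.get("model-index", []):
        for result in entry.get("results", []):
            dataset = result.get("dataset", {})
            if "config" in dataset and not isinstance(dataset["config"], str):
                dataset["config"] = str(dataset["config"])
    return card_data

card = {"model-index": [{"results": [{"dataset": {"config": 4}}]}]}
fixed = coerce_dataset_configs(card)
print(fixed["model-index"][0]["results"][0]["dataset"]["config"])  # prints 4 (now a str)
```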
### Expected behavior
The provided code snippet has been working smoothly for a few days, but today I got the error message as follows:
```
Traceback (most recent call last):
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_statusenizer.json: 94%|████████████████████████████████████████████████████████████████████████▌ | 13.7M/14.5M [00:01<00:00, 11.3MB/s]
response.raise_for_status()██████████ | 1/5 [00:06<00:24, 6.19s/it]
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/models/minghaowu/docnmt-bloom-7b-lora-p4-en-fr/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/minghaow/docnmtllm-project/docnmtllm/train_para/upload_model.py", line 44, in <module>
api.upload_folder(
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 849, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 3748, in upload_folder
commit_info = self.create_commit(
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 849, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 2967, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-65817fb9-7af65e20605305f129b7ad48;ddc7d2fa-2111-4a83-b540-25eda4ca6e86)
Bad request for commit endpoint:
"model-index[0].results[0].dataset.config" must be a string
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28137/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28136/comments | https://api.github.com/repos/huggingface/transformers/issues/28136/events | https://github.com/huggingface/transformers/pull/28136 | 2,048,467,063 | PR_kwDOCUB6oc5iXAnM | 28,136 | [Whisper] Make tokenizer normalization public | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T11:27:02 | 2024-01-29T16:07:40 | 2024-01-29T16:07:36 | CONTRIBUTOR | null | # What does this PR do?
Using the Whisper English normalizer is common practice when evaluating Whisper models on English ASR. Here, we have to normalize the predictions, e.g. using the argument `normalize=True` to the tokenizer `.decode` method:
https://github.com/huggingface/transformers/blob/5aec50ecaf9c1c039cde85881f0586110f845859/src/transformers/models/whisper/tokenization_whisper.py#L633
However, we also have to normalize the reference, which is most easily done by calling the **private** method `_normalize`: https://github.com/huggingface/transformers/blob/5aec50ecaf9c1c039cde85881f0586110f845859/src/transformers/models/whisper/tokenization_whisper.py#L509
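For illustration, a minimal stdlib sketch of the kind of text normalization involved (this is not the actual Whisper English normalizer, just an indicative stand-in: lowercase, strip punctuation, collapse whitespace):

```python
import re

def simple_english_normalize(text: str) -> str:
    # Illustrative only: lowercase, drop punctuation, collapse whitespace.
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

prediction = " Hello, world!"
reference = "hello world"
print(simple_english_normalize(prediction) == reference)  # prints True
```

Applying the same normalization to both prediction and reference is what makes WER comparisons fair, which is why a public method for the reference side is useful.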
This PR updates the tokenizer to use a **public** method for the second normalization step, the recommended design for exposed methods. Note that I have chosen here to deprecate the existing private method `_normalize`, rather than removing it blindly, since I anticipate that it has been accessed by some users already and want to prevent a hard breaking change. Happy to remove it in one go if we feel it's ok removing a private method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28136/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28136/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28136",
"html_url": "https://github.com/huggingface/transformers/pull/28136",
"diff_url": "https://github.com/huggingface/transformers/pull/28136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28136.patch",
"merged_at": "2024-01-29T16:07:35"
} |
https://api.github.com/repos/huggingface/transformers/issues/28135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28135/comments | https://api.github.com/repos/huggingface/transformers/issues/28135/events | https://github.com/huggingface/transformers/pull/28135 | 2,048,452,546 | PR_kwDOCUB6oc5iW9XH | 28,135 | Update split string in doctest to reflect #28087 | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T11:18:14 | 2023-12-19T13:55:09 | 2023-12-19T13:55:09 | COLLABORATOR | null | # What does this PR do?
Resolves the currently failing test `tests/utils/test_doc_samples.py::TestDocLists::test_sdpa_support_list` on main, because the string used to split the docstring wasn't updated in line with #28087
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28135/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28135/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28135",
"html_url": "https://github.com/huggingface/transformers/pull/28135",
"diff_url": "https://github.com/huggingface/transformers/pull/28135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28135.patch",
"merged_at": "2023-12-19T13:55:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28134/comments | https://api.github.com/repos/huggingface/transformers/issues/28134/events | https://github.com/huggingface/transformers/issues/28134 | 2,048,244,765 | I_kwDOCUB6oc56Fbwd | 28,134 | Different intermediate results given different number of epochs | {
"login": "DolevAdas",
"id": 33514523,
"node_id": "MDQ6VXNlcjMzNTE0NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/33514523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DolevAdas",
"html_url": "https://github.com/DolevAdas",
"followers_url": "https://api.github.com/users/DolevAdas/followers",
"following_url": "https://api.github.com/users/DolevAdas/following{/other_user}",
"gists_url": "https://api.github.com/users/DolevAdas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DolevAdas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DolevAdas/subscriptions",
"organizations_url": "https://api.github.com/users/DolevAdas/orgs",
"repos_url": "https://api.github.com/users/DolevAdas/repos",
"events_url": "https://api.github.com/users/DolevAdas/events{/privacy}",
"received_events_url": "https://api.github.com/users/DolevAdas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T09:18:32 | 2024-01-28T08:04:50 | 2024-01-28T08:04:50 | NONE | null | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
We are using the Hugging Face API to fine-tune a pretrained model (BertForSequenceClassification).
We see differences in the first five epochs between 5-epoch and 15-epoch runs and do not understand why they would not be (nearly) identical, given that only the number of epochs differs between those runs (the seed and other parameters are all the same).
**For example:**
### Seed 7
**5 epochs :**
,loss,learning_rate,epoch,step
0,**24.6558**,4.955555555555556e-05,0.04,500,,,,,,,,,
1,19.9439,4.9111111111111114e-05,0.09,1000,,,,,,,,,
2,19.2654,4.866666666666667e-05,0.13,1500,,,,,,,,,
3,20.4078,4.8222222222222225e-05,0.18,2000,,,,,,,,,
4,20.3372,4.7777777777777784e-05,0.22,2500,,,,,,,,,
5,20.0602,4.7333333333333336e-05,0.27,3000,,,,,,,,,
6,19.6761,4.6888888888888895e-05,0.31,3500,,,,,,,,,
7,20.193,4.644444444444445e-05,0.36,4000,,,,,,,,,
8,19.1265,4.600000000000001e-05,0.4,4500,,,,,,,,,
9,19.1949,4.555555555555556e-05,0.44,5000,,,,,,,,,
10,19.5078,4.511111111111112e-05,0.49,5500,,,,,,,,,
11,20.7165,4.466666666666667e-05,0.53,6000,,,,,,,,,
12,20.1907,4.422222222222222e-05,0.58,6500,,,,,,,,,
13,19.6967,4.377777777777778e-05,0.62,7000,,,,,,,,,
14,19.6693,4.3333333333333334e-05,0.67,7500,,,,,,,,,
15,20.011,4.2888888888888886e-05,0.71,8000,,,,,,,,,
16,19.516,4.2444444444444445e-05,0.76,8500,,,,,,,,,
17,18.9949,4.2e-05,0.8,9000,,,,,,,,,
**15 epochs:**
,loss,learning_rate,epoch,step
0,**18.9326**,4.9851851851851855e-05,0.04,500,,,,,,,,,
1,5.6773,4.970370370370371e-05,0.09,1000,,,,,,,,,
2,4.6515,4.955555555555556e-05,0.13,1500,,,,,,,,,
3,4.2881,4.940740740740741e-05,0.18,2000,,,,,,,,,
4,3.641,4.925925925925926e-05,0.22,2500,,,,,,,,,
5,3.2491,4.9111111111111114e-05,0.27,3000,,,,,,,,,
6,3.012,4.896296296296297e-05,0.31,3500,,,,,,,,,
7,2.8161,4.881481481481482e-05,0.36,4000,,,,,,,,,
8,2.7497,4.866666666666667e-05,0.4,4500,,,,,,,,,
9,2.6776,4.851851851851852e-05,0.44,5000,,,,,,,,,
10,2.5254,4.837037037037037e-05,0.49,5500,,,,,,,,,
11,2.6059,4.8222222222222225e-05,0.53,6000,,,,,,,,,
12,2.5966,4.807407407407408e-05,0.58,6500,,,,,,,,,
13,2.2252,4.792592592592593e-05,0.62,7000,,,,,,,,,
14,2.3321,4.7777777777777784e-05,0.67,7500,,,,,,,,,
15,2.23,4.762962962962963e-05,0.71,8000,,,,,,,,,
16,2.3754,4.7481481481481483e-05,0.76,8500,,,,,,,,,
### Seed 0 :
**5 epochs:**
,loss,learning_rate,epoch,step
0,**17.7629**,4.955555555555556e-05,0.04,500,,,,,,,,,
1,5.6264,4.9111111111111114e-05,0.09,1000,,,,,,,,,
2,4.9429,4.866666666666667e-05,0.13,1500,,,,,,,,,
3,4.5756,4.8222222222222225e-05,0.18,2000,,,,,,,,,
4,4.4063,4.7777777777777784e-05,0.22,2500,,,,,,,,,
5,3.9688,4.7333333333333336e-05,0.27,3000,,,,,,,,,
6,3.6656,4.6888888888888895e-05,0.31,3500,,,,,,,,,
7,3.6779,4.644444444444445e-05,0.36,4000,,,,,,,,,
8,3.2495,4.600000000000001e-05,0.4,4500,,,,,,,,,
9,3.2306,4.555555555555556e-05,0.44,5000,,,,,,,,,
10,3.1333,4.511111111111112e-05,0.49,5500,,,,,,,,,
11,2.7543,4.466666666666667e-05,0.53,6000,,,,,,,,,
12,3.1086,4.422222222222222e-05,0.58,6500,,,,,,,,,
13,3.0666,4.377777777777778e-05,0.62,7000,,,,,,,,,
14,3.156,4.3333333333333334e-05,0.67,7500,,,,,,,,,
15,2.5553,4.2888888888888886e-05,0.71,8000,,,,,,,,,
16,2.7727,4.2444444444444445e-05,0.76,8500,,,,,,,,,
17,2.651,4.2e-05,0.8,9000,,,,,,,,,
**15 epochs:**
,loss,learning_rate,epoch,step
0,**14.8927**,4.9851851851851855e-05,0.04,500,,,,,,,,,
1,5.4558,4.970370370370371e-05,0.09,1000,,,,,,,,,
2,4.065,4.955555555555556e-05,0.13,1500,,,,,,,,,
3,3.8751,4.940740740740741e-05,0.18,2000,,,,,,,,,
4,3.4581,4.925925925925926e-05,0.22,2500,,,,,,,,,
5,3.1641,4.9111111111111114e-05,0.27,3000,,,,,,,,,
6,2.8896,4.896296296296297e-05,0.31,3500,,,,,,,,,
7,2.8967,4.881481481481482e-05,0.36,4000,,,,,,,,,
8,2.5912,4.866666666666667e-05,0.4,4500,,,,,,,,,
9,2.5563,4.851851851851852e-05,0.44,5000,,,,,,,,,
10,2.482,4.837037037037037e-05,0.49,5500,,,,,,,,,
11,2.1695,4.8222222222222225e-05,0.53,6000,,,,,,,,,
12,2.447,4.807407407407408e-05,0.58,6500,,,,,,,,,
13,2.4438,4.792592592592593e-05,0.62,7000,,,,,,,,,
14,2.2014,4.7777777777777784e-05,0.67,7500,,,,,,,,,
15,2.2,4.762962962962963e-05,0.71,8000,,,,,,,,,
The only difference in the experiments is the number of epochs.
We also saved the train and validation splits to a file and read them from there, to make sure we read the data in the same order.
**My environment**: python 3.9.6, cuda 12.2.0, pytorch 2.0.1
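One plausible contributor visible in the logs above (an observation, not a confirmed diagnosis): with the default linear learning-rate schedule, the per-step learning rate depends on the *total* number of training steps, which scales with `num_train_epochs`. Assuming roughly 11,250 optimizer steps per epoch (inferred from the logged rates, not taken from the config), a no-warmup linear decay reproduces the learning rates logged at step 500 for both runs:

```python
def linear_decay_lr(base_lr: float, step: int, total_steps: int) -> float:
    # No-warmup linear decay: lr falls linearly from base_lr to 0 over total_steps.
    return base_lr * (1 - step / total_steps)

steps_per_epoch = 11_250  # assumed; inferred from the logs
print(linear_decay_lr(5e-5, 500, 5 * steps_per_epoch))   # ≈4.9556e-05, matches the 5-epoch log
print(linear_decay_lr(5e-5, 500, 15 * steps_per_epoch))  # ≈4.9852e-05, matches the 15-epoch log
```

So the two runs never apply the same learning rate at the same step, which alone would make their losses diverge from step one.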
**Here is part of my code:**
```python
from transformers import (AutoTokenizer, DataCollatorWithPadding, TrainingArguments,
                          BertForSequenceClassification, Trainer, AutoConfig)
import datasets
import numpy as np
import torch
import torch.nn as nn
import random
import os

random.seed(cseed)
np.random.seed(cseed)
torch.manual_seed(cseed)
torch.cuda.manual_seed_all(cseed)
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ":16:8"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, model_max_length=max_token_len)

training_args = TrainingArguments(
    out_path,
    save_total_limit=10,
    # load_best_model_at_end=True,
    report_to=None,
    evaluation_strategy="steps",
    eval_steps=11250,
    do_eval=True,
    num_train_epochs=epochs_num,
    seed=cseed,
)

from transformers import set_seed
set_seed(cseed)

trian_data_from_disk = datasets.Dataset.load_from_disk(tokenized_datasets_path + "/train", keep_in_memory=True)
validation_data_from_disk = datasets.Dataset.load_from_disk(tokenized_datasets_path + "/validation", keep_in_memory=True)

model = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
loss_fn = nn.MSELoss()

trainer = CustomTrainer(
    model,
    training_args,
    train_dataset=trian_data_from_disk,
    eval_dataset=validation_data_from_disk,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
training_results = trainer.train()
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28134/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28133/comments | https://api.github.com/repos/huggingface/transformers/issues/28133/events | https://github.com/huggingface/transformers/pull/28133 | 2,048,116,832 | PR_kwDOCUB6oc5iVz5g | 28,133 | [`Mixtral` & `Mistral`] Add support for sdpa | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T07:55:23 | 2023-12-21T11:38:23 | 2023-12-21T11:38:22 | COLLABORATOR | null | # What does this PR do?
Adds SDPA attention for both classes. cc @younesbelkada for visibility 😉 Will help for fast LLava
"url": "https://api.github.com/repos/huggingface/transformers/issues/28133/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28133/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28133",
"html_url": "https://github.com/huggingface/transformers/pull/28133",
"diff_url": "https://github.com/huggingface/transformers/pull/28133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28133.patch",
"merged_at": "2023-12-21T11:38:22"
} |
https://api.github.com/repos/huggingface/transformers/issues/28132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28132/comments | https://api.github.com/repos/huggingface/transformers/issues/28132/events | https://github.com/huggingface/transformers/pull/28132 | 2,048,108,896 | PR_kwDOCUB6oc5iVyNU | 28,132 | [`Refactor Attention mask handling`] Moves attention mask processing to the Attention class | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-12-19T07:49:18 | 2024-01-28T08:04:53 | null | COLLABORATOR | null | # What does this PR do?
This is more aligned with our philosophy, and it also simplifies things now and going forward.
Will help a lot with the static cache.
The only way to share the mask is to call `LlamaAttention` but if you have a better way I'll update it!
This makes the attention class self contained, which is also pretty convenient for testing.
Ran the slow tests without FA2; will run them again on a DGX once approved.
cc @patrickvonplaten for visibility | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28132/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28132",
"html_url": "https://github.com/huggingface/transformers/pull/28132",
"diff_url": "https://github.com/huggingface/transformers/pull/28132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28132.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28131/comments | https://api.github.com/repos/huggingface/transformers/issues/28131/events | https://github.com/huggingface/transformers/pull/28131 | 2,048,089,805 | PR_kwDOCUB6oc5iVuEK | 28,131 | [`Sdpa / Flash`] save the attention not a bool | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-19T07:34:46 | 2023-12-19T07:53:11 | 2023-12-19T07:52:52 | COLLABORATOR | null | # What does this PR do?
Just a small cleanup that shall be propagated
"url": "https://api.github.com/repos/huggingface/transformers/issues/28131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28131/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28131",
"html_url": "https://github.com/huggingface/transformers/pull/28131",
"diff_url": "https://github.com/huggingface/transformers/pull/28131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28131.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28130/comments | https://api.github.com/repos/huggingface/transformers/issues/28130/events | https://github.com/huggingface/transformers/issues/28130 | 2,047,968,862 | I_kwDOCUB6oc56EYZe | 28,130 | Mistral flash attention 2 is not work, training speed is equal to the original way which not use flash attn | {
"login": "FangxuLiu",
"id": 22525254,
"node_id": "MDQ6VXNlcjIyNTI1MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/22525254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FangxuLiu",
"html_url": "https://github.com/FangxuLiu",
"followers_url": "https://api.github.com/users/FangxuLiu/followers",
"following_url": "https://api.github.com/users/FangxuLiu/following{/other_user}",
"gists_url": "https://api.github.com/users/FangxuLiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FangxuLiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FangxuLiu/subscriptions",
"organizations_url": "https://api.github.com/users/FangxuLiu/orgs",
"repos_url": "https://api.github.com/users/FangxuLiu/repos",
"events_url": "https://api.github.com/users/FangxuLiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/FangxuLiu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-12-19T05:50:11 | 2024-01-23T13:20:17 | null | NONE | null | ### System Info
transformers==4.36.2
torch==2.0
model = transformers.AutoModelForCausalLM.from_pretrained(script_args.model_path, trust_remote_code=True, use_cache=False, attn_implementation="flash_attention_2", torch_dtype="auto")
I am pretraining a Mistral model with DeepSpeed ZeRO-2; when I use Flash Attention 2, the training speed does not improve.
Some of the logs are shown here:
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
So I want to know what I should do. @ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = transformers.AutoModelForCausalLM.from_pretrained(script_args.model_path, trust_remote_code=True, use_cache=False, attn_implementation="flash_attention_2", torch_dtype="auto")
torchrun \
--nnode 1 \
--master_port 10000 \
--nproc_per_node 4 \
training/train_instruction.py \
--model_path /mnt/bn/ecom-nas-lfx/mrgt/models/Mistral-7B-v0.1 \
--train_data /mnt/bn/ecom-nas-lfx/mrgt/data/v12_1/v2code_train.jsonl \
--output_dir /mnt/bn/ecom-nas-lfx/mrgt/models/mistral-v12-base-4gpu-flash-test \
--max_length 2048 \
--evaluation_strategy no \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 1 \
--learning_rate 1e-5 \
--weight_decay 0.1 \
--optim adamw_torch \
--num_train_epochs 2 \
--max_steps -1 \
--lr_scheduler_type cosine \
--warmup_steps 100 \
--logging_strategy steps \
--logging_steps 1 \
--save_strategy steps \
--save_steps 2000 \
--save_total_limit 1 \
--seed 42 \
--bf16 True \
--report_to none \
--deepspeed config/zero2.json
### Expected behavior
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28130/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28129/comments | https://api.github.com/repos/huggingface/transformers/issues/28129/events | https://github.com/huggingface/transformers/issues/28129 | 2,047,946,036 | I_kwDOCUB6oc56ES00 | 28,129 | LayerDrop support | {
"login": "EthanBnntt",
"id": 95309712,
"node_id": "U_kgDOBa5PkA",
"avatar_url": "https://avatars.githubusercontent.com/u/95309712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanBnntt",
"html_url": "https://github.com/EthanBnntt",
"followers_url": "https://api.github.com/users/EthanBnntt/followers",
"following_url": "https://api.github.com/users/EthanBnntt/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanBnntt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanBnntt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanBnntt/subscriptions",
"organizations_url": "https://api.github.com/users/EthanBnntt/orgs",
"repos_url": "https://api.github.com/users/EthanBnntt/repos",
"events_url": "https://api.github.com/users/EthanBnntt/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanBnntt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-19T05:24:29 | 2023-12-19T05:33:26 | 2023-12-19T05:33:26 | NONE | null | ### Feature request
Add support for LayerDrop in Transformers.
### Motivation
LayerDrop allows for faster training, regularization, and superior pruning after training.
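For illustration, here is a toy pure-Python sketch of the LayerDrop idea (all names hypothetical, not the actual Transformers implementation): during training each whole layer is skipped with some probability, while at inference every layer runs.

```python
import random

def forward_with_layerdrop(x, layers, layerdrop_prob, training=True, rng=random):
    # During training, each layer is skipped ("dropped") with probability
    # layerdrop_prob; at inference time every layer runs.
    for layer in layers:
        if training and rng.random() < layerdrop_prob:
            continue  # drop this whole layer for this forward pass
        x = layer(x)
    return x

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v + 3]
out_no_drop = forward_with_layerdrop(0, layers, layerdrop_prob=0.0)
out_eval = forward_with_layerdrop(0, layers, layerdrop_prob=0.9, training=False)
print(out_no_drop, out_eval)  # 5 5
```

With `layerdrop_prob=0.0` (or at eval time) all three layers run, so the result is deterministic; a real implementation would apply this per transformer block during training only.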
### Your contribution
This is a feature I will work on implementing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28129/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28128/comments | https://api.github.com/repos/huggingface/transformers/issues/28128/events | https://github.com/huggingface/transformers/pull/28128 | 2,047,881,724 | PR_kwDOCUB6oc5iVCKI | 28,128 | bug fix: fix vocab_size being 0 for deepspeed zero3 | {
"login": "circlecrystal",
"id": 5665980,
"node_id": "MDQ6VXNlcjU2NjU5ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5665980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/circlecrystal",
"html_url": "https://github.com/circlecrystal",
"followers_url": "https://api.github.com/users/circlecrystal/followers",
"following_url": "https://api.github.com/users/circlecrystal/following{/other_user}",
"gists_url": "https://api.github.com/users/circlecrystal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/circlecrystal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/circlecrystal/subscriptions",
"organizations_url": "https://api.github.com/users/circlecrystal/orgs",
"repos_url": "https://api.github.com/users/circlecrystal/repos",
"events_url": "https://api.github.com/users/circlecrystal/events{/privacy}",
"received_events_url": "https://api.github.com/users/circlecrystal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T04:06:45 | 2024-01-26T08:03:20 | 2024-01-26T08:03:20 | NONE | null | # What does this PR do?
This PR fixes the error encountered during model training with DeepSpeed Zero-3.
@pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28128/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28128",
"html_url": "https://github.com/huggingface/transformers/pull/28128",
"diff_url": "https://github.com/huggingface/transformers/pull/28128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28128.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28127/comments | https://api.github.com/repos/huggingface/transformers/issues/28127/events | https://github.com/huggingface/transformers/pull/28127 | 2,047,754,831 | PR_kwDOCUB6oc5iUoUC | 28,127 | Update modeling_utils.py | {
"login": "mzelling",
"id": 36188891,
"node_id": "MDQ6VXNlcjM2MTg4ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/36188891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzelling",
"html_url": "https://github.com/mzelling",
"followers_url": "https://api.github.com/users/mzelling/followers",
"following_url": "https://api.github.com/users/mzelling/following{/other_user}",
"gists_url": "https://api.github.com/users/mzelling/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzelling/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzelling/subscriptions",
"organizations_url": "https://api.github.com/users/mzelling/orgs",
"repos_url": "https://api.github.com/users/mzelling/repos",
"events_url": "https://api.github.com/users/mzelling/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzelling/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-19T01:25:49 | 2023-12-19T17:07:58 | 2023-12-19T17:07:58 | CONTRIBUTOR | null | In the docstring for PreTrainedModel.resize_token_embeddings, correct the definition of the new_num_tokens parameter to read "the new number of tokens" (meaning the new size of the vocab) rather than "the number of new tokens" (meaning the number of newly added tokens only). This is in agreement with what the code does (see source and docstring of function PreTrainedModel._get_resized_embeddings).
@stevhliu @MKhalusova
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28127/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28127",
"html_url": "https://github.com/huggingface/transformers/pull/28127",
"diff_url": "https://github.com/huggingface/transformers/pull/28127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28127.patch",
"merged_at": "2023-12-19T17:07:58"
} |
https://api.github.com/repos/huggingface/transformers/issues/28126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28126/comments | https://api.github.com/repos/huggingface/transformers/issues/28126/events | https://github.com/huggingface/transformers/pull/28126 | 2,047,716,623 | PR_kwDOCUB6oc5iUgXB | 28,126 | [gpt-neox] Add attention_bias config to support model trained without attention biases | {
"login": "dalgarak",
"id": 20063100,
"node_id": "MDQ6VXNlcjIwMDYzMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20063100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dalgarak",
"html_url": "https://github.com/dalgarak",
"followers_url": "https://api.github.com/users/dalgarak/followers",
"following_url": "https://api.github.com/users/dalgarak/following{/other_user}",
"gists_url": "https://api.github.com/users/dalgarak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dalgarak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dalgarak/subscriptions",
"organizations_url": "https://api.github.com/users/dalgarak/orgs",
"repos_url": "https://api.github.com/users/dalgarak/repos",
"events_url": "https://api.github.com/users/dalgarak/events{/privacy}",
"received_events_url": "https://api.github.com/users/dalgarak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-19T00:31:29 | 2023-12-20T09:15:11 | 2023-12-20T09:05:32 | CONTRIBUTOR | null | # What does this PR do?
This PR adds an `attention_bias` configuration option to GPT-NeoX models. Currently released models all use bias by default for the linear layers in the attention block, but the GPT-NeoX library allows training models without attention biases (with `use_bias_in_attn_linear=False`).
For compatibility with existing models, we set the default value of `attention_bias` to True. I've done some testing and verified the behavior with `attn_implementation="flash_attention_2"`.
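A toy pure-Python sketch of the gating (hypothetical class names, not the actual modeling code): the attention linears only get a bias when the config flag is set, and the flag defaults to True so existing checkpoints keep loading unchanged.

```python
class GPTNeoXLikeConfig:
    # attention_bias defaults to True so previously released checkpoints
    # (which all use biased attention linears) keep working unchanged.
    def __init__(self, hidden_size=4, attention_bias=True):
        self.hidden_size = hidden_size
        self.attention_bias = attention_bias

def build_attention_linear(config):
    # Stand-in for constructing a q/k/v projection: the bias vector only
    # exists when config.attention_bias is True.
    weight = [[0.0] * config.hidden_size for _ in range(config.hidden_size)]
    bias = [0.0] * config.hidden_size if config.attention_bias else None
    return weight, bias

_, bias_default = build_attention_linear(GPTNeoXLikeConfig())
_, bias_off = build_attention_linear(GPTNeoXLikeConfig(attention_bias=False))
print(bias_default is not None, bias_off)  # True None
```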
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28126/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28126",
"html_url": "https://github.com/huggingface/transformers/pull/28126",
"diff_url": "https://github.com/huggingface/transformers/pull/28126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28126.patch",
"merged_at": "2023-12-20T09:05:32"
} |
https://api.github.com/repos/huggingface/transformers/issues/28125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28125/comments | https://api.github.com/repos/huggingface/transformers/issues/28125/events | https://github.com/huggingface/transformers/issues/28125 | 2,047,659,948 | I_kwDOCUB6oc56DM-s | 28,125 | [Docs] Broken link in Kubernetes doc | {
"login": "dmsuehir",
"id": 13952606,
"node_id": "MDQ6VXNlcjEzOTUyNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13952606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmsuehir",
"html_url": "https://github.com/dmsuehir",
"followers_url": "https://api.github.com/users/dmsuehir/followers",
"following_url": "https://api.github.com/users/dmsuehir/following{/other_user}",
"gists_url": "https://api.github.com/users/dmsuehir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmsuehir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmsuehir/subscriptions",
"organizations_url": "https://api.github.com/users/dmsuehir/orgs",
"repos_url": "https://api.github.com/users/dmsuehir/repos",
"events_url": "https://api.github.com/users/dmsuehir/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmsuehir/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-12-18T23:47:22 | 2024-01-17T22:09:05 | null | CONTRIBUTOR | null | ### System Info
N/A
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I recently helped add Kubernetes instructions to the documentation [here](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.md#usage-with-kubernetes), and I saw that with the recent patch, it's now posted at the huggingface.co docs site [here](https://huggingface.co/docs/transformers/perf_train_cpu_many#usage-with-kubernetes). However, at the docs site, it seems like links to non-Hugging Face pages are broken. For example, in the first sentence under the heading, when it links "Kubeflow PyTorchJob training operator", that link doesn't work for me. What's also weird is that the link *does* work if I right-click it and open it in a new tab, but a regular click gives me a 404. The links also work fine from the .md on GitHub.
### Expected behavior
Links should work as they do from the .md on GitHub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28125/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28124/comments | https://api.github.com/repos/huggingface/transformers/issues/28124/events | https://github.com/huggingface/transformers/issues/28124 | 2,047,594,060 | I_kwDOCUB6oc56C85M | 28,124 | [Trainer.train] learning rate logging inconsistency: learning rate for the future step is logged | {
"login": "HanGuo97",
"id": 18187806,
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanGuo97",
"html_url": "https://github.com/HanGuo97",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-12-18T22:51:50 | 2024-01-18T09:58:42 | null | NONE | null | ### System Info
NA
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[This](https://github.com/huggingface/transformers/blob/c52b515e948fc12ff58ad773a0385860d0162f61/src/transformers/trainer.py#L1913) line of code steps the LR scheduler forward before `_maybe_log_save_evaluate` is called. This means the logged learning rate represents the learning rate of the upcoming iteration.
For most use cases, the difference between them is small. However, in certain cases, this causes confusion.
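A toy pure-Python sketch (hypothetical `ToyScheduler`, not the real Trainer code) of why stepping the scheduler before logging reports the next iteration's LR instead of the one just used:

```python
class ToyScheduler:
    # Stand-in for a real LR scheduler: halves the learning rate each step.
    def __init__(self, lr):
        self._lr = lr
    def step(self):
        self._lr /= 2
    def get_last_lr(self):
        return [self._lr]

# Current Trainer order: the scheduler is stepped *before* logging, so the
# logged value is the LR of the upcoming iteration.
sched = ToyScheduler(1e-5)
sched.step()
logged_now = sched.get_last_lr()[0]       # LR for the *next* iteration

# Logging before stepping would report the LR of the iteration just finished.
sched = ToyScheduler(1e-5)
logged_expected = sched.get_last_lr()[0]  # LR actually used this iteration
sched.step()
print(logged_now, logged_expected)        # 5e-06 1e-05
```

With a fast-decaying schedule (e.g. cosine near the end of training) the gap between the two values can be noticeable, which is where the confusion comes from.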
### Expected behavior
The learning rate for the current iteration is logged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28124/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28123/comments | https://api.github.com/repos/huggingface/transformers/issues/28123/events | https://github.com/huggingface/transformers/pull/28123 | 2,047,564,974 | PR_kwDOCUB6oc5iT-OF | 28,123 | [Doc] Fix token link in What 🤗 Transformers can do | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-18T22:26:59 | 2023-12-19T15:25:44 | 2023-12-18T23:06:55 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix the token link in `What 🤗 Transformers can do`.
The link in this section generates a 404 error:
> Token classification
In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](https://huggingface.co/glossary#token). Token classification assigns each token a label from a predefined set of classes.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28123/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28123",
"html_url": "https://github.com/huggingface/transformers/pull/28123",
"diff_url": "https://github.com/huggingface/transformers/pull/28123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28123.patch",
"merged_at": "2023-12-18T23:06:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/28122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28122/comments | https://api.github.com/repos/huggingface/transformers/issues/28122/events | https://github.com/huggingface/transformers/pull/28122 | 2,047,370,131 | PR_kwDOCUB6oc5iTS_q | 28,122 | Fix weights not properly initialized due to shape mismatch | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-18T20:07:58 | 2023-12-20T13:20:04 | 2023-12-20T13:20:02 | COLLABORATOR | null | # What does this PR do?
Currently, if a weight's shape is mismatched between the model and the checkpoint, and ignore_mismatched_sizes=True, that weight won't get initialized by the model's `_init_weights` method and can end up with extreme values like 1e37.
This can make the training get a `nan` loss value from the beginning (which `Trainer` then changes to `0.0`), so the training makes no progress (loss stays at 0.0).
One example is by running `src/transformers/modeling_utils.py` (add `ignore_mismatched_sizes=True`).
We usually set `ignore_mismatched_sizes=True` when we want to reuse an existing classification model for another task with a different number of targets.
This PR aims to fix this issue.
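A toy pure-Python sketch of the intended fix (hypothetical names, lists standing in for tensors, one dimension standing in for shape): a parameter whose checkpoint shape doesn't match the model's is re-initialized with the model's init scheme instead of keeping garbage buffer values.

```python
def load_state(model_weights, ckpt_weights, init_fn):
    # Sketch of the fix: a parameter whose checkpoint shape does not match
    # the model's shape is re-initialized with the model's init scheme
    # instead of being left with whatever garbage its buffer contains.
    loaded = {}
    for name, param in model_weights.items():
        ckpt = ckpt_weights.get(name)
        if ckpt is not None and len(ckpt) == len(param):
            loaded[name] = ckpt                 # shapes match: load from checkpoint
        else:
            loaded[name] = init_fn(len(param))  # mismatch: proper re-init
    return loaded

init_fn = lambda n: [0.02] * n                   # stand-in for model._init_weights
model = {"classifier.weight": [1e37, 1e37, 1e37]}  # 3 labels, garbage values
ckpt = {"classifier.weight": [0.1, 0.2]}           # checkpoint had 2 labels
print(load_state(model, ckpt, init_fn))  # {'classifier.weight': [0.02, 0.02, 0.02]}
```

The key point is the `else` branch: without it, the mismatched classifier head keeps its uninitialized values, which is what produced the `nan`-then-`0.0` loss.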
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28122/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28122/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28122",
"html_url": "https://github.com/huggingface/transformers/pull/28122",
"diff_url": "https://github.com/huggingface/transformers/pull/28122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28122.patch",
"merged_at": "2023-12-20T13:20:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28121/comments | https://api.github.com/repos/huggingface/transformers/issues/28121/events | https://github.com/huggingface/transformers/issues/28121 | 2,047,216,945 | I_kwDOCUB6oc56Bg0x | 28,121 | Add StyleTTS 2 to HF Transformers Pipeline | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-12-18T18:33:14 | 2024-01-12T17:13:48 | null | NONE | null | ### Feature request
Add [StyleTTS](https://github.com/yl4579/StyleTTS2) 2 to HF Transformers Pipeline
### Motivation
Would be great to have an easier way to run STTS2
### Your contribution
I created a [fork](https://github.com/neuralvox/styletts2) with importable scripts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28121/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28120/comments | https://api.github.com/repos/huggingface/transformers/issues/28120/events | https://github.com/huggingface/transformers/issues/28120 | 2,047,205,290 | I_kwDOCUB6oc56Bd-q | 28,120 | Add Tortoise TTS to HF Pipeline | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-18T18:24:38 | 2024-01-18T16:04:52 | 2024-01-18T16:04:52 | NONE | null | ### Feature request
Hi,
Might it be possible to add [Tortoise TTS](https://github.com/neonbjb/tortoise-tts) to the `text-to-speech` pipeline?
### Motivation
Tortoise TTS is currently the highest-quality permissively licensed text-to-speech library available.
### Your contribution
Tortoise TTS is already pip-ified so it shouldn't be too hard to add. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28120/timeline | null | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28119/comments | https://api.github.com/repos/huggingface/transformers/issues/28119/events | https://github.com/huggingface/transformers/issues/28119 | 2,047,169,168 | I_kwDOCUB6oc56BVKQ | 28,119 | Save model checkpoint error when multi-gpu training still happens on 4.36.1 | {
"login": "z7ye",
"id": 25996703,
"node_id": "MDQ6VXNlcjI1OTk2NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25996703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/z7ye",
"html_url": "https://github.com/z7ye",
"followers_url": "https://api.github.com/users/z7ye/followers",
"following_url": "https://api.github.com/users/z7ye/following{/other_user}",
"gists_url": "https://api.github.com/users/z7ye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/z7ye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/z7ye/subscriptions",
"organizations_url": "https://api.github.com/users/z7ye/orgs",
"repos_url": "https://api.github.com/users/z7ye/repos",
"events_url": "https://api.github.com/users/z7ye/events{/privacy}",
"received_events_url": "https://api.github.com/users/z7ye/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 14 | 2023-12-18T18:00:13 | 2024-01-25T14:19:54 | null | NONE | null | ### System Info
platform: linux
python: 3.9
transformers: 4.36.1
running on two A10.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I saw the release notes for 4.36.1 say this error was already fixed; however, it still occurs after installing the latest version, running on a two-A10.2 machine.
```
Traceback (most recent call last):
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/runpy.py", line 197, in _run_module_as_main
2023-12-17 18:09:08 10.0.1.12: return _run_code(code, main_globals, None,
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/runpy.py", line 87, in _run_code
2023-12-17 18:09:08 10.0.1.12: exec(code, run_globals)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/cli/train.py", line 38, in <module>
2023-12-17 18:09:08 10.0.1.12: fire.Fire(do_cli)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
2023-12-17 18:09:08 10.0.1.12: component_trace = _Fire(component, args, parsed_flag_args, context, name)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
2023-12-17 18:09:08 10.0.1.12: component, remaining_args = _CallAndUpdateTrace(
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
2023-12-17 18:09:08 10.0.1.12: component = fn(*varargs, **kwargs)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/cli/train.py", line 34, in do_cli
2023-12-17 18:09:08 10.0.1.12: train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/train.py", line 126, in train
2023-12-17 18:09:08 10.0.1.12: trainer.train(resume_from_checkpoint=resume_from_checkpoint)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 1537, in train
2023-12-17 18:09:08 10.0.1.12: return inner_training_loop(
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
2023-12-17 18:09:08 10.0.1.12: self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate
2023-12-17 18:09:08 10.0.1.12: self._save_checkpoint(model, trial, metrics=metrics)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 2376, in _save_checkpoint
2023-12-17 18:09:08 10.0.1.12: self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer_callback.py", line 114, in save_to_json
2023-12-17 18:09:08 10.0.1.12: with open(json_path, "w", encoding="utf-8") as f:
2023-12-17 18:09:08 10.0.1.12: FileNotFoundError: [Errno 2] No such file or directory: './qlora-out/tmp-checkpoint-1080/trainer_state.json'
```
### Expected behavior
expect it to work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28119/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28118/comments | https://api.github.com/repos/huggingface/transformers/issues/28118/events | https://github.com/huggingface/transformers/pull/28118 | 2,047,169,117 | PR_kwDOCUB6oc5iSnXF | 28,118 | Fix a typo in tokenizer documentation | {
"login": "mssalvatore",
"id": 19957806,
"node_id": "MDQ6VXNlcjE5OTU3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19957806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mssalvatore",
"html_url": "https://github.com/mssalvatore",
"followers_url": "https://api.github.com/users/mssalvatore/followers",
"following_url": "https://api.github.com/users/mssalvatore/following{/other_user}",
"gists_url": "https://api.github.com/users/mssalvatore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mssalvatore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mssalvatore/subscriptions",
"organizations_url": "https://api.github.com/users/mssalvatore/orgs",
"repos_url": "https://api.github.com/users/mssalvatore/repos",
"events_url": "https://api.github.com/users/mssalvatore/events{/privacy}",
"received_events_url": "https://api.github.com/users/mssalvatore/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-18T18:00:11 | 2023-12-18T18:44:35 | 2023-12-18T18:44:35 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in tokenizer documentation. For some methods, such as `tokenize()`, the description currently reads "Converts a string in a sequence of tokens, using the tokenizer." I believe what is meant is "Converts a string INTO a sequence of tokens".
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (N/A)
## Who can review?
@ArthurZucker
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28118/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28118",
"html_url": "https://github.com/huggingface/transformers/pull/28118",
"diff_url": "https://github.com/huggingface/transformers/pull/28118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28118.patch",
"merged_at": "2023-12-18T18:44:35"
} |
https://api.github.com/repos/huggingface/transformers/issues/28117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28117/comments | https://api.github.com/repos/huggingface/transformers/issues/28117/events | https://github.com/huggingface/transformers/pull/28117 | 2,047,094,751 | PR_kwDOCUB6oc5iSXIn | 28,117 | Fix indentation error - semantic_segmentation.md | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-18T17:09:22 | 2023-12-19T01:54:10 | 2023-12-18T17:47:54 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes the indentation error in the code segment of the semantic_segmentation.md file.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28117/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28117",
"html_url": "https://github.com/huggingface/transformers/pull/28117",
"diff_url": "https://github.com/huggingface/transformers/pull/28117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28117.patch",
"merged_at": "2023-12-18T17:47:54"
} |
https://api.github.com/repos/huggingface/transformers/issues/28116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28116/comments | https://api.github.com/repos/huggingface/transformers/issues/28116/events | https://github.com/huggingface/transformers/issues/28116 | 2,047,064,498 | I_kwDOCUB6oc56A7my | 28,116 | TypeError: TextInputSequence must be str in converting squad examples to features | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-18T16:51:45 | 2024-01-26T08:03:24 | 2024-01-26T08:03:23 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-4.15.0-196-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce this behaviour:
I have basically written a function that calls HF's `squad_convert_examples_to_features` after doing some input framing. This is mockup code just to show the behaviour, but it's in fact part of a larger model. Here's my code:
```python
from transformers import SquadExample, squad_convert_examples_to_features, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") #an ELECTRA-LARGE tokenizer
qa_pairs = [[['QuestionA?', "AnswerA"], ['QuestionB', 'AnswerB'], ['QuestionC', 'AnswerC'], ["QuestionD", 'AnswerD']]]
context = "Here's the context text..."
def _answer_questions(
summaries, qa_pairs_lists
) :
qa_inputs = []
context_to_input_index = {}
mapping = {}
for i, (summary, qa_pairs_list) in enumerate(zip(summaries, [[qa_pairs_lists]])):
for j, qa_pairs in enumerate(qa_pairs_list):
for k, qa in enumerate(qa_pairs):
question = qa["question"]
key = (question, summary)
if key not in context_to_input_index:
context_to_input_index[key] = len(qa_inputs)
qa_inputs.append(key)
mapping[(i, j, k)] = context_to_input_index[key]
examples = []
for i, (question, context) in enumerate(qa_inputs):
examples.append(SquadExample(
qas_id=str(i),
question_text=question,
context_text=context,
answer_text=None,
start_position_character=None,
title=None,
is_impossible=True,
answers=[]
))
features, dataset = squad_convert_examples_to_features(
examples,
tokenizer,
384,
0,
512,
False,
padding_strategy="max_length",
return_dataset=False,
threads=1,
tqdm_enabled=True,
)
# throws
"""
Traceback (most recent call last):
File "test.py", line 55, in <module>
_answer_questions(
File "test.py", line 39, in _answer_questions
features, dataset = squad_convert_examples_to_features(
File "/path/to/HF_installed/squad.py", line 376, in squad_convert_examples_to_features
features = list(
File "lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "lib/python3.8/multiprocessing/pool.py", line 420, in <genexpr>
return (item for chunk in result for item in chunk)
File "lib/python3.8/multiprocessing/pool.py", line 868, in next
raise value
TypeError: TextInputSequence must be str
"""
# test
_answer_questions(
[context],
[{'question': v[0], 'answer': v[1] } for v in qa_pairs[0]]
)
```
Here's more debugging info about where this error is coming from:
> Traceback (most recent call last):
> File "PYTHON_PATH/multiprocessing/pool.py", line 125, in worker
> result = (True, func(*args, **kwds))
> File "PYTHON_PATH/multiprocessing/pool.py", line 48, in mapstar
> return list(map(*args))
> File "test.py", line 96, in squad_convert_example_to_features
> encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_base.py", line 2981, in encode_plus
> return self._encode_plus(
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_fast.py", line 576, in _encode_plus
> batched_output = self._batch_encode_plus(
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_fast.py", line 504, in _batch_encode_plus
> encodings = self._tokenizer.encode_batch(
> TypeError: TextInputSequence must be str
### Expected behavior
I'm expecting to use the `squad_convert_examples_to_features` function smoothly, getting all the `features` and `dataset` without any bugs. I did some digging around the web for a quick fix or workaround and found out that switching the tokenizer to a regular one (by setting `use_fast=False` when initiating the tokenizer) seems to do the trick. But since this issue has been around for like 2 years now (if I remember correctly), I think it's high time to open a new issue page and flag this potential bug. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28116/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28115/comments | https://api.github.com/repos/huggingface/transformers/issues/28115/events | https://github.com/huggingface/transformers/pull/28115 | 2,046,937,134 | PR_kwDOCUB6oc5iR0Yx | 28,115 | [`Mixtral`] Fix loss + nits | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-18T15:38:55 | 2023-12-31T01:41:01 | 2023-12-19T16:31:54 | COLLABORATOR | null | # What does this PR do?
Properly compute the loss. Pushes for a uniform distribution.
fixes #28021
Fixes https://github.com/huggingface/transformers/issues/28093 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28115/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28115/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28115",
"html_url": "https://github.com/huggingface/transformers/pull/28115",
"diff_url": "https://github.com/huggingface/transformers/pull/28115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28115.patch",
"merged_at": "2023-12-19T16:31:54"
} |
https://api.github.com/repos/huggingface/transformers/issues/28114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28114/comments | https://api.github.com/repos/huggingface/transformers/issues/28114/events | https://github.com/huggingface/transformers/pull/28114 | 2,046,732,521 | PR_kwDOCUB6oc5iRG49 | 28,114 | [Whisper] Fix word-level timestamps with bs>1 or num_beams>1 | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-18T14:01:33 | 2023-12-23T21:29:52 | 2023-12-22T12:43:11 | COLLABORATOR | null | # What does this PR do?
Supersedes #26699
This PR fixes two issues related to Whisper:
1. Wrong DTW matrix computation when computing word-level timestamps with beam search (issues #27362 and #28007)
2. Bug when computing word-level timestamps with bs>1 using the pipeline (issue #27446 and PR #26699)
The first issue happens because the DTW matrix is derived from the cross attentions. The latter is of size `beam_search*num_return_sequences*batch_size`, but it should be of size `num_return_sequences*batch_size`, so we need to keep track of the beam indices.
The second issue happens because when batching with the pipeline, `stride` is passed as a list of tuples (one per sample) instead of a single tuple.
When there are multiple strides passed to `_extract_token_timestamps`, we can't compute the DTW matrix in parallel.
It is treated in two cases:
1. If the stride is the same for every sample, compute the DTW weights in parallel
2. If the strides differ (i.e. at the end of an audio file), compute them sequentially
The loss of parallelism is not so dramatic, since in all cases the DTW algorithm is performed sequentially.
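The two-case dispatch above can be sketched roughly as follows (a hypothetical helper for illustration only — the function name and the trimming step are assumptions, not the actual `_extract_token_timestamps` implementation):

```python
def batch_dtw_weights(attentions, strides):
    """Illustrative sketch of the stride dispatch: if every sample in the
    batch shares the same stride, the attention frames can be trimmed in one
    shared step; otherwise each sample is trimmed on its own, sequentially."""
    if all(s == strides[0] for s in strides):
        # Case 1: identical strides -> one shared trim for the whole batch
        trim = strides[0]
        return [frames[trim:] for frames in attentions]
    # Case 2: differing strides (e.g. the final, shorter audio chunk)
    # -> handle samples one by one
    return [frames[s:] for frames, s in zip(attentions, strides)]
```

Either way, the DTW alignment itself still runs per sample, which is why the loss of parallelism is limited.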
Fixes #27362, #28007, #27446
cc @sanchit-gandhi, @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28114/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28114",
"html_url": "https://github.com/huggingface/transformers/pull/28114",
"diff_url": "https://github.com/huggingface/transformers/pull/28114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28114.patch",
"merged_at": "2023-12-22T12:43:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/28113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28113/comments | https://api.github.com/repos/huggingface/transformers/issues/28113/events | https://github.com/huggingface/transformers/pull/28113 | 2,046,650,583 | PR_kwDOCUB6oc5iQ1Z6 | 28,113 | Remove warning if `DISABLE_TELEMETRY` is used | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-18T13:19:01 | 2023-12-18T15:18:02 | 2023-12-18T15:18:01 | CONTRIBUTOR | null | In https://github.com/huggingface/transformers/issues/27564 I did some cleaning in the environment variables. I added a warning if `DISABLE_TELEMETRY` was set to encourage using `HF_HUB_DISABLE_TELEMETRY` instead. However this warning is not necessary for at least two reasons:
- `DISABLE_TELEMETRY` is already well understood and parsed by `huggingface_hub`. No need to handle it specifically in `transformers`. If in the future we want to deprecate it and/or handle it differently, everything would have to happen in `huggingface_hub` directly.
- Also, as highlighted in https://github.com/huggingface/huggingface_hub/issues/1917, keeping `DISABLE_TELEMETRY` in addition to our custom `HF_HUB_DISABLE_TELEMETRY` is also beneficial if this variable becomes a standard with other libraries. In any case, there is no benefit in dropping support for it.
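For reference, either variable can be set by the user (the hub-specific name is the preferred one; the exact precedence between the two is handled inside `huggingface_hub`):

```shell
# Either variable disables telemetry for libraries built on huggingface_hub
export HF_HUB_DISABLE_TELEMETRY=1   # preferred, hub-specific name
export DISABLE_TELEMETRY=1          # generic name, also honored
```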
Therefore this PR removes the deprecation warning + let `huggingface_hub` handle the environment variables by itself. It removes any custom code from `transformers` about this topic. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28113/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28113",
"html_url": "https://github.com/huggingface/transformers/pull/28113",
"diff_url": "https://github.com/huggingface/transformers/pull/28113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28113.patch",
"merged_at": "2023-12-18T15:18:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/28112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28112/comments | https://api.github.com/repos/huggingface/transformers/issues/28112/events | https://github.com/huggingface/transformers/issues/28112 | 2,046,551,463 | I_kwDOCUB6oc55--Wn | 28,112 | Error pushing Mixtral fine-tune to hub | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2023-12-18T12:21:52 | 2024-01-12T11:39:14 | 2024-01-12T11:39:14 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 2x A6000
- Using distributed or parallel set-up in script?:
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="flash_attention_2",
cache_dir=cache_dir
)
# Apply an adapter:
from peft import PeftModel
model = PeftModel.from_pretrained(
model,
adapter_dir,
)
model = model.merge_and_unload() # merge adapters with the base model.
model.push_to_hub(new_model, token=True, max_shard_size="10GB",safe_serialization=True)
```
Leads to:
```
SafetensorError Traceback (most recent call last)
Cell In[20], line 1
----> 1 model.push_to_hub(new_model, token=True, max_shard_size="10GB",safe_serialization=True)
File /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:871, in PushToHubMixin.push_to_hub(self, repo_id, use_temp_dir, commit_message, private, token, max_shard_size, create_pr, safe_serialization, revision, commit_description, **deprecated_kwargs)
868 files_timestamps = self._get_files_timestamps(work_dir)
870 # Save all files.
--> 871 self.save_pretrained(work_dir, max_shard_size=max_shard_size, safe_serialization=safe_serialization)
873 return self._upload_modified_files(
874 work_dir,
875 repo_id,
(...)
881 commit_description=commit_description,
882 )
File /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2376, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, token, save_peft_format, **kwargs)
2372 for shard_file, shard in shards.items():
2373 if safe_serialization:
2374 # At some point we will need to deal better with save_function (used for TPU and other distributed
2375 # joyfulness), but for now this enough.
-> 2376 safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
2377 else:
2378 save_function(shard, os.path.join(save_directory, shard_file))
File /usr/local/lib/python3.10/dist-packages/safetensors/torch.py:281, in save_file(tensors, filename, metadata)
250 def save_file(
251 tensors: Dict[str, torch.Tensor],
252 filename: Union[str, os.PathLike],
253 metadata: Optional[Dict[str, str]] = None,
254 ):
255 """
256 Saves a dictionary of tensors into raw bytes in safetensors format.
257
(...)
279 ```
280 """
--> 281 serialize_file(_flatten(tensors), filename, metadata=metadata)
SafetensorError: Error while serializing: IoError(Os { code: 28, kind: StorageFull, message: "No space left on device" })
```
Even though I'm only using 31% of 600 GB of disk space locally.
### Expected behavior
Typically, safetensors push successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28112/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28111/comments | https://api.github.com/repos/huggingface/transformers/issues/28111/events | https://github.com/huggingface/transformers/issues/28111 | 2,046,459,318 | I_kwDOCUB6oc55-n22 | 28,111 | Facing issues when trying to fine-tune T5 | {
"login": "wolfassi123",
"id": 82727504,
"node_id": "MDQ6VXNlcjgyNzI3NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/82727504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wolfassi123",
"html_url": "https://github.com/wolfassi123",
"followers_url": "https://api.github.com/users/wolfassi123/followers",
"following_url": "https://api.github.com/users/wolfassi123/following{/other_user}",
"gists_url": "https://api.github.com/users/wolfassi123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wolfassi123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wolfassi123/subscriptions",
"organizations_url": "https://api.github.com/users/wolfassi123/orgs",
"repos_url": "https://api.github.com/users/wolfassi123/repos",
"events_url": "https://api.github.com/users/wolfassi123/events{/privacy}",
"received_events_url": "https://api.github.com/users/wolfassi123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-12-18T11:29:36 | 2024-01-11T08:36:15 | 2024-01-11T08:34:28 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: T4
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to fine tune a T5-base model but have been facing issues despite following the step by step guide found on the huggingface hub [here](https://huggingface.co/docs/transformers/tasks/translation).
So far this is my code:
`transformers.logging.set_verbosity_error()`
```python
from datasets import load_dataset
canard_train_augm = load_dataset("gaussalgo/Canard_Wiki-augmented", split="train")
canard_test_augm = load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
from transformers import AutoTokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
def preprocess_function(examples):
combined_input = examples["Question"] + ": " + examples["true_contexts"]
return tokenizer(combined_input, examples["Rewrite"],max_length=512, padding="max_length", truncation=True, return_tensors="pt")
tokenized_train = canard_train_augm.map(preprocess_function)
tokenized_test = canard_test_augm.map(preprocess_function)
from transformers import DataCollatorForSeq2Seq
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model_name)
import evaluate
metric = evaluate.load("sacrebleu")
import numpy as np
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
training_args = Seq2SeqTrainingArguments(
output_dir="wtf",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=2,
predict_with_generate=True,
fp16=True,
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_test,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
I tried several examples including my own Customized Class for the trainer function but always ended with the same issue even when I tried the same code found in the step-by-step guide provided by huggingface.
The error happens when calling the `trainer.train()` returning the following:
`ValueError: too many values to unpack (expected 2)`
I followed the exact same format as the documentation and I believe it is something that is happening when calling the loss function but was just unable to put my finger to it, if anyone can help that would be great.
### Expected behavior
Expected behavior is trying being able to fine-tune the T5 model with the above dataset by eliminating or identifying the cause of the error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28111/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28110/comments | https://api.github.com/repos/huggingface/transformers/issues/28110/events | https://github.com/huggingface/transformers/pull/28110 | 2,046,371,146 | PR_kwDOCUB6oc5iP3qg | 28,110 | Spelling correction | {
"login": "saeneas",
"id": 47715864,
"node_id": "MDQ6VXNlcjQ3NzE1ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47715864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeneas",
"html_url": "https://github.com/saeneas",
"followers_url": "https://api.github.com/users/saeneas/followers",
"following_url": "https://api.github.com/users/saeneas/following{/other_user}",
"gists_url": "https://api.github.com/users/saeneas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeneas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeneas/subscriptions",
"organizations_url": "https://api.github.com/users/saeneas/orgs",
"repos_url": "https://api.github.com/users/saeneas/repos",
"events_url": "https://api.github.com/users/saeneas/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeneas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-18T10:52:48 | 2023-12-18T14:04:05 | 2023-12-18T14:04:05 | CONTRIBUTOR | null | correct minor typo in overview
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28110/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28110",
"html_url": "https://github.com/huggingface/transformers/pull/28110",
"diff_url": "https://github.com/huggingface/transformers/pull/28110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28110.patch",
"merged_at": "2023-12-18T14:04:05"
} |
https://api.github.com/repos/huggingface/transformers/issues/28109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28109/comments | https://api.github.com/repos/huggingface/transformers/issues/28109/events | https://github.com/huggingface/transformers/issues/28109 | 2,046,259,585 | I_kwDOCUB6oc5593GB | 28,109 | remove unnecessary backend related checks in training_args.py | {
"login": "kevint324",
"id": 8800468,
"node_id": "MDQ6VXNlcjg4MDA0Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8800468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevint324",
"html_url": "https://github.com/kevint324",
"followers_url": "https://api.github.com/users/kevint324/followers",
"following_url": "https://api.github.com/users/kevint324/following{/other_user}",
"gists_url": "https://api.github.com/users/kevint324/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevint324/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevint324/subscriptions",
"organizations_url": "https://api.github.com/users/kevint324/orgs",
"repos_url": "https://api.github.com/users/kevint324/repos",
"events_url": "https://api.github.com/users/kevint324/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevint324/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 4 | 2023-12-18T10:11:16 | 2024-01-10T11:57:53 | null | NONE | null | ### Feature request
[Here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1490-L1519)
IMO these checks in transformers should be removed.
```
if (
self.framework == "pt"
and is_torch_available()
and (self.device.type != "cuda")
and (self.device.type != "npu")
and (self.device.type != "xpu")
and (get_xla_device_type(self.device) != "GPU")
and (self.fp16 or self.fp16_full_eval)
):
raise ValueError(
"FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
" (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX)."
)
if (
self.framework == "pt"
and is_torch_available()
and (self.device.type != "cuda")
and (self.device.type != "npu")
and (self.device.type != "xpu")
and (get_xla_device_type(self.device) != "GPU")
and (get_xla_device_type(self.device) != "TPU")
and (self.device.type != "cpu")
and (self.bf16 or self.bf16_full_eval)
):
raise ValueError(
"BF16 Mixed precision training with AMP (`--bf16`) and BF16 half precision evaluation"
" (`--bf16_full_eval`) can only be used on CUDA, XPU (with IPEX), NPU or CPU/TPU/NeuronCore devices."
)
```
### Motivation
To make things work, each vendor needs to extend this `if` by putting another line of ` and (self.device.type != "my_precious_chip")`.
It makes code bloated in transformers.
And I don't really think it's transformers' job to determine capability for backends. Just pass the parameters through and let the backend itself determine whether it can handle the dtype. Backends should have enough means to report an error.
### Your contribution
I'm glad to delete them if approved : -p | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28109/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28109/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28108/comments | https://api.github.com/repos/huggingface/transformers/issues/28108/events | https://github.com/huggingface/transformers/pull/28108 | 2,046,253,318 | PR_kwDOCUB6oc5iPeIf | 28,108 | Avoid unnecessary warnings when loading `CLIPConfig` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-18T10:08:23 | 2023-12-20T16:24:55 | 2023-12-20T16:24:54 | COLLABORATOR | null | # What does this PR do?
Avoid unnecessary warnings when loading `CLIPConfig`: when a user doesn't change something inside `text_config`.
Fix #28042 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28108/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28108",
"html_url": "https://github.com/huggingface/transformers/pull/28108",
"diff_url": "https://github.com/huggingface/transformers/pull/28108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28108.patch",
"merged_at": "2023-12-20T16:24:54"
} |
https://api.github.com/repos/huggingface/transformers/issues/28107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28107/comments | https://api.github.com/repos/huggingface/transformers/issues/28107/events | https://github.com/huggingface/transformers/pull/28107 | 2,046,130,822 | PR_kwDOCUB6oc5iPFL0 | 28,107 | [`Llava` / `Vip-Llava`] Add SDPA into llava | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-18T09:17:59 | 2023-12-18T12:46:30 | 2023-12-18T12:46:30 | CONTRIBUTOR | null | # What does this PR do?
As per title, adds SDPA into Llava-family
This makes generation faster through torch sdpa for llava-like models
Also closes: https://huggingface.co/llava-hf/llava-1.5-7b-hf/discussions/9
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28107/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28107",
"html_url": "https://github.com/huggingface/transformers/pull/28107",
"diff_url": "https://github.com/huggingface/transformers/pull/28107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28107.patch",
"merged_at": "2023-12-18T12:46:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28106/comments | https://api.github.com/repos/huggingface/transformers/issues/28106/events | https://github.com/huggingface/transformers/issues/28106 | 2,046,055,139 | I_kwDOCUB6oc559FLj | 28,106 | Explicit option to disable deepspeed when loading a model | {
"login": "chiragjn",
"id": 10295418,
"node_id": "MDQ6VXNlcjEwMjk1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragjn",
"html_url": "https://github.com/chiragjn",
"followers_url": "https://api.github.com/users/chiragjn/followers",
"following_url": "https://api.github.com/users/chiragjn/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions",
"organizations_url": "https://api.github.com/users/chiragjn/orgs",
"repos_url": "https://api.github.com/users/chiragjn/repos",
"events_url": "https://api.github.com/users/chiragjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-18T08:44:10 | 2024-01-26T08:03:26 | 2024-01-26T08:03:26 | NONE | null | ### Feature request
Option to disable deepspeed explicitly on a per-model basis
### Motivation
So I have a little bit of an odd setup
In my qlora/lora fine-tuning script, I launch with `accelerate launch --mixed_precision bf16 --use_deepspeed train.py --deepspeed deepspeed_zero3.json ...` and I am using the `TrainingArguments` class to accept this config
In that script, before I start training, I want to load the model with empty weights without deepspeed involved
But once a deepspeed zero 3 config is set, it gets set as a global
https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/integrations/deepspeed.py#L239
And then all models try to use Deepspeed Zero init or do special handling for Zero 3 sharding
https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/modeling_utils.py#L1823
This results in error with meta devices
```
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_config
return model_class._from_config(config, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1247, in _from_config
model = cls(config, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 459, in wrapper
f(module, *args, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py", line 1141, in __init__
self.model = MixtralModel(config)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 459, in wrapper
f(module, *args, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py", line 964, in __init__
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 466, in wrapper
self._post_init_method(module)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 995, in _post_init_method
param.data = param.data.to(self.local_device)
NotImplementedError: Cannot copy out of meta tensor; no data!
```
While I can work around my issue, I thought it might be good to have some context manager to disable deepspeed zero in certain sections of the code
---
Additional context on why I load my model separately
Before I start training I just do a check to ensure the base model can fit entirely within the available GPUs in bf16 format. This is to ensure that after tuning I would be able to merge the adapters correctly because currently merge and unload cannot save offloaded modules correctly (A fix for that is under progress See: https://github.com/huggingface/peft/pull/1190)
The code for this check looks like this
```py
import torch
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

# Check if the model can fit on the GPUs alone
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
device_map = infer_auto_device_map(model, dtype=torch.bfloat16)
logger.info(f"Inferred device_map for auto settings: {device_map}")
if any(not isinstance(v, int) for v in device_map.values()):
    raise RuntimeError(...)
```
### Your contribution
# | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28106/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28105/comments | https://api.github.com/repos/huggingface/transformers/issues/28105/events | https://github.com/huggingface/transformers/issues/28105 | 2,045,923,480 | I_kwDOCUB6oc558lCY | 28,105 | T5Tokenizer: Different decoding behaviour depending on the tokenizer method used | {
"login": "sorenmulli",
"id": 42035306,
"node_id": "MDQ6VXNlcjQyMDM1MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/42035306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sorenmulli",
"html_url": "https://github.com/sorenmulli",
"followers_url": "https://api.github.com/users/sorenmulli/followers",
"following_url": "https://api.github.com/users/sorenmulli/following{/other_user}",
"gists_url": "https://api.github.com/users/sorenmulli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sorenmulli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sorenmulli/subscriptions",
"organizations_url": "https://api.github.com/users/sorenmulli/orgs",
"repos_url": "https://api.github.com/users/sorenmulli/repos",
"events_url": "https://api.github.com/users/sorenmulli/events{/privacy}",
"received_events_url": "https://api.github.com/users/sorenmulli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-18T07:38:13 | 2023-12-18T10:32:02 | 2023-12-18T10:32:02 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.1.55-1-lts-x86_64-with-glibc2.38
- Python version: 3.11.5
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import T5TokenizerFast
tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-base")
tokens = ['▁', '?', '▁', '?']
ids = tokenizer.convert_tokens_to_ids(tokens)
# [3, 58, 3, 58]
tokenizer.decode(ids)
# '??'
tokenizer.convert_tokens_to_string(tokens)
# '? ?'
tokenizer.decoder.decode(tokens)
# '? ?'
```
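For readers without `transformers` installed, the reported divergence can be simulated with plain string handling. The hypothesized difference — an assumption for illustration, not the confirmed cause — is whether each metaspace marker `▁` is replaced by a space (as `convert_tokens_to_string` does) or dropped when it stands alone (roughly what the `decode` path appears to do here):

```python
tokens = ['▁', '?', '▁', '?']


def tokens_to_string(toks):
    # convert_tokens_to_string-style: metaspace -> space, strip the edges
    return ''.join(toks).replace('▁', ' ').strip()


def decode_like(toks):
    # Hypothesized decode-style behaviour: standalone metaspace tokens
    # are dropped entirely instead of becoming spaces.
    return ''.join(t for t in toks if t != '▁')


print(tokens_to_string(tokens))  # '? ?'
print(decode_like(tokens))       # '??'
```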
### Expected behavior
I expected these two methods to yield the same result: `'? ?'`.
I do not understand the result `'??'` and could not find the logic where the space is removed; I guess it must be in `tokenizers`.
In advance, thank you for all help :heart: :hugs: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28105/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28105/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28104/comments | https://api.github.com/repos/huggingface/transformers/issues/28104/events | https://github.com/huggingface/transformers/issues/28104 | 2,045,869,224 | I_kwDOCUB6oc558Xyo | 28,104 | CUDA Error running the Translaton example with Accelerate or Trainer in a Multi GPU distributed setup | {
"login": "anindya-saha",
"id": 3349535,
"node_id": "MDQ6VXNlcjMzNDk1MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3349535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anindya-saha",
"html_url": "https://github.com/anindya-saha",
"followers_url": "https://api.github.com/users/anindya-saha/followers",
"following_url": "https://api.github.com/users/anindya-saha/following{/other_user}",
"gists_url": "https://api.github.com/users/anindya-saha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anindya-saha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anindya-saha/subscriptions",
"organizations_url": "https://api.github.com/users/anindya-saha/orgs",
"repos_url": "https://api.github.com/users/anindya-saha/repos",
"events_url": "https://api.github.com/users/anindya-saha/events{/privacy}",
"received_events_url": "https://api.github.com/users/anindya-saha/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 3 | 2023-12-18T06:59:20 | 2024-01-17T09:36:16 | null | NONE | null | ### System Info
Hello Team,
I am trying to run the translation example in examples/pytorch/translation/run_translation.py in a distributed manner through accelerate as follows.
```bash
accelerate launch --config_file default_config.yaml run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--pad_to_max_length True \
--report_to none
```
**Accelerator Config**
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: 0,1
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
But I see the following CUDA error. Could you please help me understand what changes I need to make? I have run other examples in the summarization and language-modeling folders in a similar manner successfully.
**Python venv**
```
transformers==4.35.2
accelerate==0.25.0
datasets==2.15.0
```
**Error Logs**
```
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "run_translation.py", line 699, in <module>
main()
File "run_translation.py", line 614, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 1860, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 2725, in training_step
loss = self.compute_loss(model, inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 2748, in compute_loss
outputs = model(**inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1519, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/utils/operations.py", line 680, in forward
return model_forward(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/utils/operations.py", line 668, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 1402, in forward
outputs = self.model(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 1185, in forward
encoder_outputs = self.encoder(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 739, in forward
hidden_states = inputs_embeds + embed_pos
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
0%| | 0/228870 [00:03<?, ?it/s]
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7f442b5617 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7f4427098d in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7f44371128 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x16e76 (0x7f7f44339e76 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x19bad (0x7f7f4433cbad in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x19fcd (0x7f7f4433cfcd in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x510c56 (0x7f7f448dcc56 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x55ca7 (0x7f7f4429aca7 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x1e3 (0x7f7f44292cb3 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7f44292e49 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x7c1718 (0x7f7f44b8d718 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x325 (0x7f7f44b8dac5 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #12: /home/anindya/starcoder-tune/bin/python3() [0x5aced3]
frame #13: /home/anindya/starcoder-tune/bin/python3() [0x5b0174]
frame #14: /home/anindya/starcoder-tune/bin/python3() [0x5f7cdd]
frame #15: /home/anindya/starcoder-tune/bin/python3() [0x5b02f0]
frame #16: /home/anindya/starcoder-tune/bin/python3() [0x5835c2]
frame #17: /home/anindya/starcoder-tune/bin/python3() [0x4c518f]
frame #18: _PyGC_CollectNoFail + 0x2f (0x66721f in /home/anindya/starcoder-tune/bin/python3)
frame #19: PyImport_Cleanup + 0x244 (0x67a634 in /home/anindya/starcoder-tune/bin/python3)
frame #20: Py_FinalizeEx + 0x7f (0x67423f in /home/anindya/starcoder-tune/bin/python3)
frame #21: Py_RunMain + 0x32d (0x6b418d in /home/anindya/starcoder-tune/bin/python3)
frame #22: Py_BytesMain + 0x2d (0x6b43fd in /home/anindya/starcoder-tune/bin/python3)
frame #23: __libc_start_main + 0xf3 (0x7f7f59353083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #24: _start + 0x2e (0x5da67e in /home/anindya/starcoder-tune/bin/python3)
[2023-12-18 06:41:41,495] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 369953) of binary: /home/anindya/starcoder-tune/bin/python3
Traceback (most recent call last):
File "/home/anindya/starcoder-tune/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/launch.py", line 1008, in launch_command
multi_gpu_launcher(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/launch.py", line 666, in multi_gpu_launcher
distrib_run.run(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_translation.py FAILED
------------------------------------------------------------
```
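The repeated `srcIndex < srcSelectDimSize` assertions in the log above usually indicate an embedding lookup receiving an input id outside the embedding table (for example a tokenizer/model vocab-size mismatch). A small stdlib sanity check run against one batch of `input_ids` before training can confirm or rule that out; the helper below is illustrative and not part of the example script:

```python
def check_batch_ids(input_ids, vocab_size):
    """Return the sorted out-of-range ids (if any) that would trip the
    CUDA indexSelectLargeIndex assertion in the embedding lookup."""
    bad = {i for row in input_ids for i in row if not (0 <= i < vocab_size)}
    return sorted(bad)


# Toy example: a vocab of 100 entries, with one id (250) out of range.
print(check_batch_ids([[1, 5, 99], [250, 3, 0]], vocab_size=100))  # [250]
```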
### Who can help?
@patil-suraj @pacman100 @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
STEP 1: Create a basic Accelerate config file `default_config.yaml` for a machine with 2 GPUs, as below.
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: 0,1
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
STEP 2: Run the translation example.
```bash
accelerate launch --config_file default_config.yaml run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--pad_to_max_length True \
--report_to none
```
### Expected behavior
The example should complete without any error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28104/timeline | null | null | null | null |