url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28608/comments | https://api.github.com/repos/huggingface/transformers/issues/28608/events | https://github.com/huggingface/transformers/pull/28608 | 2,090,793,303 | PR_kwDOCUB6oc5kj3bE | 28,608 | [`Test tokenizers`] DO NOT MERGE | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T15:33:56 | 2024-01-19T16:07:13 | 2024-01-19T16:07:13 | COLLABORATOR | null | # What does this PR do?
tests `tokenizers==0.15.1rc1` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28608/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28608",
"html_url": "https://github.com/huggingface/transformers/pull/28608",
"diff_url": "https://github.com/huggingface/transformers/pull/28608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28608.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28607/comments | https://api.github.com/repos/huggingface/transformers/issues/28607/events | https://github.com/huggingface/transformers/pull/28607 | 2,090,724,894 | PR_kwDOCUB6oc5kjogb | 28,607 | Generate: deprecate old src imports | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T14:52:48 | 2024-01-27T15:54:22 | 2024-01-27T15:54:19 | MEMBER | null | # What does this PR do?
We have 3 thin wrappers for `generate`, one for each framework, whose sole purpose is to import the mixin from `src/transformers/generation(_flax/_tf)_utils.py`. In other words, to import from `src` according to the codebase before [this PR](https://github.com/huggingface/transformers/pull/20096).
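The thin-wrapper pattern described above can be sketched roughly like this (hypothetical names, not the actual transformers code): a shim module whose only job is to re-export a class from its new location, optionally warning callers via a PEP 562 module-level `__getattr__`:

```python
import sys
import types
import warnings

# Hypothetical sketch of a deprecation shim: re-export an attribute from its
# new home while warning that the old import path is going away.
def make_deprecation_shim(old_name, new_name, exports):
    shim = types.ModuleType(old_name)

    def __getattr__(attr):  # PEP 562: called only when attr is not in the module dict
        if attr in exports:
            warnings.warn(
                f"Importing {attr} from {old_name} is deprecated; "
                f"import it from {new_name} instead.",
                FutureWarning,
            )
            return exports[attr]
        raise AttributeError(f"module {old_name!r} has no attribute {attr!r}")

    shim.__getattr__ = __getattr__
    sys.modules[old_name] = shim  # so `import old_name` resolves to the shim
    return shim

class GenerationMixin:  # stand-in for the real mixin
    pass

shim = make_deprecation_shim(
    "transformers.generation_utils",  # illustrative old path
    "transformers.generation",
    {"GenerationMixin": GenerationMixin},
)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cls = shim.GenerationMixin  # triggers the warning, returns the real class

print(cls is GenerationMixin, caught[0].category is FutureWarning)  # True True
```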
Since this is a `src` import (and not a `from transformers import X`), I believe this can be safely removed before v5. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28607/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28607",
"html_url": "https://github.com/huggingface/transformers/pull/28607",
"diff_url": "https://github.com/huggingface/transformers/pull/28607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28607.patch",
"merged_at": "2024-01-27T15:54:19"
} |
https://api.github.com/repos/huggingface/transformers/issues/28606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28606/comments | https://api.github.com/repos/huggingface/transformers/issues/28606/events | https://github.com/huggingface/transformers/issues/28606 | 2,090,708,329 | I_kwDOCUB6oc58na1p | 28,606 | Add [VMamba] model | {
"login": "dmus",
"id": 464378,
"node_id": "MDQ6VXNlcjQ2NDM3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/464378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmus",
"html_url": "https://github.com/dmus",
"followers_url": "https://api.github.com/users/dmus/followers",
"following_url": "https://api.github.com/users/dmus/following{/other_user}",
"gists_url": "https://api.github.com/users/dmus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmus/subscriptions",
"organizations_url": "https://api.github.com/users/dmus/orgs",
"repos_url": "https://api.github.com/users/dmus/repos",
"events_url": "https://api.github.com/users/dmus/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 1 | 2024-01-19T14:43:57 | 2024-01-31T19:40:56 | null | NONE | null | ### Model description
VMamba is a visual foundation model proposed in https://arxiv.org/pdf/2401.10166.pdf.
It is inspired by the recent advances in state space models, and in particular Mamba. The proposed architecture is computationally more efficient than vision transformer architectures because it scales linearly with growing resolution. It introduces a Cross-Scan Module (CSM) to gather context from all directions (4 scans, each starting in a corner and traversing horizontally or vertically). Evaluation on vision perception tasks shows promising capabilities.
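The four-directional traversal described above can be illustrated with a toy cross-scan over a 2D grid (my own reading of the paper's description, not the authors' implementation):

```python
# Toy sketch of the cross-scan idea: flatten a 2D feature grid into four
# 1-D sequences, one per scan direction (row-major from the top-left,
# column-major from the top-left, and the two reversed traversals).
def cross_scan(grid):
    h, w = len(grid), len(grid[0])
    rowwise = [grid[i][j] for i in range(h) for j in range(w)]  # horizontal scan
    colwise = [grid[i][j] for j in range(w) for i in range(h)]  # vertical scan
    return [rowwise, colwise, rowwise[::-1], colwise[::-1]]     # + reversed starts

grid = [[1, 2],
        [3, 4]]
print(cross_scan(grid))  # [[1, 2, 3, 4], [1, 3, 2, 4], [4, 3, 2, 1], [4, 2, 3, 1]]
```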
Model weights will become available in a few days according to the repo of the authors.
1. [x] (Optional) Understood theoretical aspects
2. [x] Prepared transformers dev environment
3. [x] Set up debugging environment of the original repository
4. [x] Created script that successfully runs forward pass using
original repository and checkpoint
5. [x] Successfully opened a PR and added the model skeleton to Transformers
6. [x] Successfully converted original checkpoint to Transformers
checkpoint
7. [x] Successfully ran forward pass in Transformers that gives
identical output to original checkpoint
8. [x] Finished model tests in Transformers
9. [ ] ~~Successfully added Tokenizer in Transformers~~
10. [x] Run end-to-end integration tests
11. [x] Finished docs
12. [ ] Uploaded model weights to the hub
13. [x] Submitted the pull request for review
14. [ ] (Optional) Added a demo notebook
I am opening the issue to avoid duplicate work. My main motivation for porting this model is to learn a bit more about it (and about the internals of 🤗 Transformers). Some of you probably know this library much better than me, so feel free to write your own implementation if you can do it better or quicker. Otherwise, don’t hesitate to build on top of my fork.
### Open source status
- [X] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
- Original repo: https://github.com/MzeroMiko/VMamba
- Paper: https://arxiv.org/pdf/2401.10166.pdf
- implementation in progress:
- youtube vmamba vs vision mamba: https://www.youtube.com/watch?v=RtHDu6kFPb8
- vision mamba paper (similar idea): https://arxiv.org/pdf/2401.09417.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28606/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28606/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28605/comments | https://api.github.com/repos/huggingface/transformers/issues/28605/events | https://github.com/huggingface/transformers/pull/28605 | 2,090,707,493 | PR_kwDOCUB6oc5kjkuO | 28,605 | Falcon: removed unused function | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T14:43:27 | 2024-01-27T15:53:06 | 2024-01-27T15:52:59 | MEMBER | null | # What does this PR do?
(see title) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28605/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28605",
"html_url": "https://github.com/huggingface/transformers/pull/28605",
"diff_url": "https://github.com/huggingface/transformers/pull/28605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28605.patch",
"merged_at": "2024-01-27T15:52:59"
} |
https://api.github.com/repos/huggingface/transformers/issues/28604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28604/comments | https://api.github.com/repos/huggingface/transformers/issues/28604/events | https://github.com/huggingface/transformers/pull/28604 | 2,090,532,106 | PR_kwDOCUB6oc5ki93N | 28,604 | fix a hidden bug of `GenerationConfig`, now the `generation_config.json` can be loaded successfully | {
"login": "ParadoxZW",
"id": 32508168,
"node_id": "MDQ6VXNlcjMyNTA4MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/32508168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParadoxZW",
"html_url": "https://github.com/ParadoxZW",
"followers_url": "https://api.github.com/users/ParadoxZW/followers",
"following_url": "https://api.github.com/users/ParadoxZW/following{/other_user}",
"gists_url": "https://api.github.com/users/ParadoxZW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParadoxZW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParadoxZW/subscriptions",
"organizations_url": "https://api.github.com/users/ParadoxZW/orgs",
"repos_url": "https://api.github.com/users/ParadoxZW/repos",
"events_url": "https://api.github.com/users/ParadoxZW/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParadoxZW/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 12 | 2024-01-19T13:07:42 | 2024-01-27T16:34:48 | 2024-01-23T17:48:38 | CONTRIBUTOR | null | # What does this PR do?
I was developing an open-source LLM project, and I found that the `generation_config.json` file could not be successfully loaded to control the model's generation process, even though I had written some attributes in this file, such as `eos_token_id` (a model object newly initialized from the `from_pretrained` API did not get the correct `eos_token_id`).
I am aware that there are many workarounds to control the generation process instead of using `generation_config.json`. But I still wanted to use `generation_config.json` and no extra code, as it should be the standard way. So I dived into the source code of the `GenerationConfig` class and spent hours debugging.
The initialization process is called several times while a pretrained model is being initialized, but I found the last call very strange. Using the following code:
```Python
print('before')
logger.info(f"Configuration saved in {output_config_file}") # original L594 of `transformers/generation/configuration_utils.py`
print('after')
```
`before` was printed but `after` was not, as if the function suddenly returned or broke at this point. This gave me a clue that there is a problem in the `__repr__` method of `GenerationConfig`. Continuing to dig, I finally located the bug:
```Python
try:
    return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"  # original L991 of `transformers/generation/configuration_utils.py`
except Exception as e:  # bind the exception so it can be printed
    print(e)
    raise
```
It gave me the error `not supported between instances of 'str' and 'int'`. So it seems there is some dirty code like `try: ... except: pass` somewhere outside the `GenerationConfig` class that silently swallows the exception. Nevertheless, I could finally solve the problem by
```Python
return json.dumps(config_dict, indent=2, sort_keys=False) + "\n"
```
Although only one line of code needs to be changed, I believe this is a very hidden bug that one may spend an entire afternoon finding. Now we can successfully load `generation_config.json` and correctly configure the model's generation behavior.
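For reference, the failure can be reproduced in isolation: `json.dumps(..., sort_keys=True)` has to compare keys against each other, which raises a `TypeError` as soon as `str` and `int` keys are mixed (the dict below is a hypothetical minimal example, not the actual config contents):

```python
import json

config_dict = {"eos_token_id": 2, 0: "<pad>"}  # hypothetical mixed-key dict

try:
    json.dumps(config_dict, indent=2, sort_keys=True)
    failed = False
except TypeError as err:
    failed = True
    print(err)  # the "'<' not supported between instances of ..." message

# With sort_keys=False no comparison happens; json coerces int keys to strings.
dumped = json.dumps(config_dict, sort_keys=False)
print(dumped)  # {"eos_token_id": 2, "0": "<pad>"}
```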
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- generate: @gante
- Big Model Inference: @SunMarc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28604/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28604",
"html_url": "https://github.com/huggingface/transformers/pull/28604",
"diff_url": "https://github.com/huggingface/transformers/pull/28604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28604.patch",
"merged_at": "2024-01-23T17:48:38"
} |
https://api.github.com/repos/huggingface/transformers/issues/28603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28603/comments | https://api.github.com/repos/huggingface/transformers/issues/28603/events | https://github.com/huggingface/transformers/issues/28603 | 2,090,397,789 | I_kwDOCUB6oc58mPBd | 28,603 | Error Using Ray Tune because of the repo id | {
"login": "matiasfreitas",
"id": 36213075,
"node_id": "MDQ6VXNlcjM2MjEzMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36213075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matiasfreitas",
"html_url": "https://github.com/matiasfreitas",
"followers_url": "https://api.github.com/users/matiasfreitas/followers",
"following_url": "https://api.github.com/users/matiasfreitas/following{/other_user}",
"gists_url": "https://api.github.com/users/matiasfreitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matiasfreitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matiasfreitas/subscriptions",
"organizations_url": "https://api.github.com/users/matiasfreitas/orgs",
"repos_url": "https://api.github.com/users/matiasfreitas/repos",
"events_url": "https://api.github.com/users/matiasfreitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matiasfreitas/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-19T11:58:20 | 2024-01-22T15:01:08 | null | NONE | null | ### System Info
- `transformers` version: 4.30.0
- Platform: Linux-6.6.6-76060606-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
2024-01-19 11:54:21,786 ERROR tune_controller.py:911 -- Trial task failed for trial _objective_750f9_00002
Traceback (most recent call last):
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
result = ray.get(future)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 24, in auto_init_wrapper
return fn(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/worker.py", line 2524, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(HFValidationError): ray::ImplicitFunc.train() (pid=198543, ip=192.168.1.83, actor_id=d99794869b3e484929cd90d501000000, repr=_objective)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 375, in train
raise skipped from exception_cause(skipped)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 349, in entrypoint
return self._trainable_func(
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 666, in _trainable_func
output = fn()
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 350, in dynamic_modules_import_trainable
return trainable(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/util.py", line 325, in inner
return trainable(config, **fn_kwargs)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 251, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1514, in train
self.model = self.call_model_init(trial)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1260, in call_model_init
model = self.model_init(trial)
File "/tmp/ipykernel_197623/1278964700.py", line 8, in getModel
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2600, in from_pretrained
resolved_config_file = cached_file(
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 431, in cached_file
resolved_file = hf_hub_download(
File "/home/matiasfg/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/home/matiasfg/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '{'learning_rate': 8.288916866885153e-06, 'num_train_epochs': 5, 'seed': 24.443485457985144, 'per_device_train_batch_size': 16}'.
```
### Who can help?
@muellerzr @pacman100 @richardliaw @amogkam
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the code below with any trainer should be enough to reproduce it, but I am not sure.
trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=10)
trainer.train()
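To illustrate why the trial hyperparameter dict fails validation, here is a rough re-implementation of only the rule quoted in the error message (the real check is `validate_repo_id` in `huggingface_hub`; names below are my own):

```python
import re

def looks_like_valid_repo_name(name: str) -> bool:
    """Toy mirror of the quoted rule: alphanumerics plus '-', '_', '.';
    '--' and '..' are forbidden; '-'/'.' cannot start or end the name;
    max length is 96."""
    if not name or len(name) > 96:
        return False
    if "--" in name or ".." in name:
        return False
    if name[0] in "-." or name[-1] in "-.":
        return False
    return re.fullmatch(r"[A-Za-z0-9._-]+", name) is not None

print(looks_like_valid_repo_name("my-finetuned-model"))          # True
print(looks_like_valid_repo_name("{'learning_rate': 8.2e-06}"))  # False: braces, quotes, spaces
```

The stringified trial config (`{'learning_rate': ..., 'seed': ...}`) obviously fails this check, which suggests the trial parameters are being passed where a model name or output directory is expected.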
### Expected behavior
A name on the standards of the validator should be used as repo id. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28603/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28602/comments | https://api.github.com/repos/huggingface/transformers/issues/28602/events | https://github.com/huggingface/transformers/pull/28602 | 2,090,371,299 | PR_kwDOCUB6oc5kiag2 | 28,602 | [`GPTNeoX`] Fix BC issue with 4.36 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T11:45:48 | 2024-01-21T17:01:21 | 2024-01-21T17:01:20 | COLLABORATOR | null | # What does this PR do?
We broke some of the GPTNeoX model with the dtype casting in #25830. Sorry for the inconvenience.
This was breaking the logits; this PR will probably make the model slower than it was with casting to a smaller dtype.
Fixes #28360, fixes #28316.
For a sample generation I am seeing some slowdown.
"url": "https://api.github.com/repos/huggingface/transformers/issues/28602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28602/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28602",
"html_url": "https://github.com/huggingface/transformers/pull/28602",
"diff_url": "https://github.com/huggingface/transformers/pull/28602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28602.patch",
"merged_at": "2024-01-21T17:01:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/28601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28601/comments | https://api.github.com/repos/huggingface/transformers/issues/28601/events | https://github.com/huggingface/transformers/pull/28601 | 2,090,366,803 | PR_kwDOCUB6oc5kiZec | 28,601 | Add config tip to custom model docs | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T11:44:08 | 2024-01-22T13:46:06 | 2024-01-22T13:46:05 | MEMBER | null | Mentioned this to @LysandreJik earlier - this PR adds a tip to the docs on uploading custom code models to encourage users to use a monolithic config that gets passed to sub-layers, like we use in core `transformers` code. Some models that didn't do this were very painful to port and required rewrites to all layers, so encouraging users to do this earlier might help a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28601/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28601",
"html_url": "https://github.com/huggingface/transformers/pull/28601",
"diff_url": "https://github.com/huggingface/transformers/pull/28601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28601.patch",
"merged_at": "2024-01-22T13:46:05"
} |
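The monolithic-config tip from the PR above can be sketched as follows (hypothetical class names; the point is that one config object is threaded through every sub-layer instead of being unpacked into individual keyword arguments, which makes later porting much easier):

```python
from dataclasses import dataclass

# Hypothetical custom-model config, passed whole to every sub-layer.
@dataclass
class MyModelConfig:
    hidden_size: int = 64
    num_layers: int = 2
    dropout: float = 0.1

class MyAttention:
    def __init__(self, config: MyModelConfig):
        self.hidden_size = config.hidden_size  # read what you need locally

class MyLayer:
    def __init__(self, config: MyModelConfig):
        self.attn = MyAttention(config)  # forward the whole config, not kwargs

class MyModel:
    def __init__(self, config: MyModelConfig):
        self.config = config
        self.layers = [MyLayer(config) for _ in range(config.num_layers)]

model = MyModel(MyModelConfig())
print(len(model.layers), model.layers[0].attn.hidden_size)  # 2 64
```

Adding a new field then touches only the config class, not every constructor signature in between.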
https://api.github.com/repos/huggingface/transformers/issues/28600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28600/comments | https://api.github.com/repos/huggingface/transformers/issues/28600/events | https://github.com/huggingface/transformers/pull/28600 | 2,090,197,596 | PR_kwDOCUB6oc5khzVS | 28,600 | RWKV: raise informative exception when attempting to manipulate `past_key_values` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-19T10:26:03 | 2024-01-19T14:09:36 | 2024-01-19T14:09:36 | MEMBER | null | # What does this PR do?
Some generation methods (like the new ngram speculation thingy) need to manipulate `past_key_values`. RWKV, a recurrent neural network, doesn't have this attribute -- a standard `AttributeError` is raised when such methods are called with RWKV. (related comment: https://github.com/huggingface/transformers/pull/27775#issuecomment-1897404295)
This PR improves the error message, explaining what's happening and what to do.
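A minimal sketch of the pattern (a hypothetical class, not the actual transformers implementation): intercept the failing attribute access and replace the bare `AttributeError` with an actionable message:

```python
class RecurrentModelSketch:
    """Stand-in for an RWKV-like model that keeps a fixed-size recurrent
    state instead of a growing past_key_values cache."""

    _NO_CACHE_MSG = (
        "This recurrent model has no `past_key_values`; generation methods "
        "that manipulate the cache (e.g. n-gram / prompt-lookup speculation) "
        "are not supported for it."
    )

    def __getattr__(self, name):
        # Only called when normal lookup fails, so real attributes still work.
        if name == "past_key_values":
            raise AttributeError(self._NO_CACHE_MSG)
        raise AttributeError(
            f"{type(self).__name__!r} object has no attribute {name!r}"
        )

model = RecurrentModelSketch()
try:
    model.past_key_values
except AttributeError as err:
    message = str(err)
print(message)
```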
NOTE: some newer RWKV variants use custom modeling code, so this PR won't affect them. I'll point the users to this PR if the issue pops up. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28600/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28600",
"html_url": "https://github.com/huggingface/transformers/pull/28600",
"diff_url": "https://github.com/huggingface/transformers/pull/28600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28600.patch",
"merged_at": "2024-01-19T14:09:36"
} |
https://api.github.com/repos/huggingface/transformers/issues/28599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28599/comments | https://api.github.com/repos/huggingface/transformers/issues/28599/events | https://github.com/huggingface/transformers/issues/28599 | 2,090,169,443 | I_kwDOCUB6oc58lXRj | 28,599 | [Kosmos-2] | {
"login": "basteran",
"id": 27162097,
"node_id": "MDQ6VXNlcjI3MTYyMDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/27162097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/basteran",
"html_url": "https://github.com/basteran",
"followers_url": "https://api.github.com/users/basteran/followers",
"following_url": "https://api.github.com/users/basteran/following{/other_user}",
"gists_url": "https://api.github.com/users/basteran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/basteran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/basteran/subscriptions",
"organizations_url": "https://api.github.com/users/basteran/orgs",
"repos_url": "https://api.github.com/users/basteran/repos",
"events_url": "https://api.github.com/users/basteran/events{/privacy}",
"received_events_url": "https://api.github.com/users/basteran/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-19T10:13:24 | 2024-01-19T10:26:34 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.10.0
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
I think the person in charge of Kosmos-2 is @ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
*This issue refers to another issue reported on [the official Kosmos repository](https://github.com/microsoft/unilm/issues/1429)!*
Hello everyone, thank you very much for your contribution. I appreciate the effort and consistency in uploading the code for so many models and maintaining this repository.
I saw Kosmos-2 and quickly thought I could fine-tune it on my downstream task, but I couldn't find any example of how to do it. I see there is a little "guide" for training the model [on the official Kosmos repository](https://github.com/microsoft/unilm/tree/master/kosmos-2#training), but I don't know whether it refers to pre-training or further fine-tuning; I'm interested in the second one.
So I tried to implement it myself using the `transformers` library, but I'm getting errors during the Fine-Tuning procedure.
```python
model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224", device_map="auto")
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224", device_map="auto")
# load dummy dataset from json file
train_data = load_dataset("json", data_files=tmp_train_file_name)
val_data = load_dataset("json", data_files=tmp_val_file_name)
# process the inputs, i.e. images and texts
def kosmos2_collate_fn(examples):
    images, texts = [], []
    for example in examples:
        image = Image.open(example['image_path'])
        images.append(image)
        texts.append(example['input_text'])
    inputs = processor(text=texts, images=images, return_tensors="pt").to(model.device)
    return Dataset.from_dict(inputs)
new_train_data = kosmos2_collate_fn(train_data)
new_val_data = kosmos2_collate_fn(val_data)
training_arguments = TrainingArguments(
remove_unused_columns=False,
per_device_train_batch_size=MICRO_BATCH_SIZE,
gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
warmup_ratio=0,
num_train_epochs=EPOCHS,
learning_rate=LEARNING_RATE,
logging_strategy="steps",
logging_steps=1,
optim="adamw_torch",
evaluation_strategy="epoch",
save_strategy="epoch",
output_dir=OUTPUT_DIR,
save_total_limit=1,
load_best_model_at_end=True,
label_names=["labels"]
)
trainer = Trainer(
model=model,
train_dataset=new_train_data,
eval_dataset=new_val_data,
args=training_arguments,
)
trainer.train()
```
and the resulting errors:
```console
Generating train split: 40 examples [00:00, 8627.15 examples/s]
Generating train split: 6 examples [00:00, 2428.20 examples/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
0%| | 0/10 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/user/kosmos2/train.py", line 193, in <module>
trainer.train()
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2776, in compute_loss
raise ValueError(
ValueError: The model did not return a loss from the inputs, only the following keys: logits,past_key_values,image_embeds,projection_attentions,vision_model_output. For reference, the inputs it received are pixel_values,input_ids,attention_mask,image_embeds_position_mask.
0%| | 0/10 [00:03<?, ?it/s]
```
I can't figure out the issue. It says that the model did not return a loss, which means it didn't compute it. It looks like the `processor` did not return any `labels` and the `Trainer` could not compute the loss...
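If it helps to illustrate the missing piece: for causal-LM style training, the `Trainer` computes a loss only when the batch contains a `labels` key. A generic pattern (a sketch only — it is an assumption that Kosmos-2's `forward` accepts `labels` this way, and the helper name is made up) is to copy `input_ids` into `labels` and mask padded positions with `-100` so they are ignored by the loss:

```python
def add_causal_lm_labels(batch):
    """Copy input_ids into a `labels` key so the Trainer can compute a loss.
    Positions where attention_mask == 0 are set to -100, which CrossEntropyLoss ignores."""
    labels = [
        [tok if mask == 1 else -100 for tok, mask in zip(ids, attn)]
        for ids, attn in zip(batch["input_ids"], batch["attention_mask"])
    ]
    return {**batch, "labels": labels}

batch = {"input_ids": [[5, 6, 7, 0]], "attention_mask": [[1, 1, 1, 0]]}
add_causal_lm_labels(batch)["labels"]  # [[5, 6, 7, -100]]
```

With plain Python lists here for clarity; in the actual collate function the same masking would be applied to the tensors returned by the processor.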
### Expected behavior
I would expect to train the model on my data, i.e. to compute the loss, perform gradient updates, etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28599/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28598/comments | https://api.github.com/repos/huggingface/transformers/issues/28598/events | https://github.com/huggingface/transformers/issues/28598 | 2,089,686,976 | I_kwDOCUB6oc58jhfA | 28,598 | what is the correct format of input when fine-tuning GPT2 for text generation with batch input? | {
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-19T06:17:29 | 2024-01-22T01:49:43 | 2024-01-22T01:49:43 | NONE | null | ### System Info
- `transformers` version: 4.33.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to fine-tune GPT2 for text generation with batch input, and I use the following code to format the batch input:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained(r'E:\pythonWork\models\gpt2')
max_length = 8
datas = [
"The dog.",
"The cute dog.",
]
model_input = tokenizer(datas)
print('original input:\n', model_input)
# prepare for batch input
# I add the bos token at the start and the eos token at the end, and pad on the right to bring the sentences to the
# same length. bos_token_id=eos_token_id=50256, and there is no pad token, so I also use 50256 as the pad token.
labels_list = []
for i in range(len(datas)):
    input_ids = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id]  # add bos and eos tokens
    input_ids = input_ids + max(0, max_length - len(input_ids)) * [tokenizer.eos_token_id]  # add padding tokens
    attention_mask = [1] + model_input['attention_mask'][i] + [1]  # attend to bos and eos tokens
    attention_mask = attention_mask + max(0, max_length - len(attention_mask)) * [0]  # don't attend to padding tokens
    labels = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id]  # take loss for bos and eos
    labels = labels + max(0, max_length - len(labels)) * [-100]  # padding doesn't take loss
    model_input['input_ids'][i] = input_ids
    model_input['attention_mask'][i] = attention_mask
    labels_list.append(labels)
model_input['labels'] = labels_list
print('batch input:\n', model_input)
```
print message
```
original input:
{'input_ids': [[464, 3290, 13], [464, 13779, 3290, 13]],
'attention_mask': [[1, 1, 1], [1, 1, 1, 1]]}
batch input:
{'input_ids': [[50256, 464, 3290, 13, 50256, 50256, 50256, 50256], [50256, 464, 13779, 3290, 13, 50256, 50256, 50256]],
'attention_mask': [[1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1, 0, 0]],
'labels': [[50256, 464, 3290, 13, 50256, -100, -100, -100], [50256, 464, 13779, 3290, 13, 50256, -100, -100]]}
```
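The padding logic above can be sketched as a small standalone helper (a sketch only — the default token id 50256 comes from the GPT2 tokenizer as noted above, and the function name is made up), which reproduces the exact "batch input" printed in the output:

```python
def pad_batch(sequences, max_length, bos_id=50256, eos_id=50256, pad_id=50256):
    """Right-pad tokenized sequences: eos doubles as pad, padding is masked
    out of attention (0) and out of the loss (-100)."""
    input_ids, attention_mask, labels = [], [], []
    for seq in sequences:
        ids = [bos_id] + seq + [eos_id]              # wrap with bos and eos
        pad_len = max(0, max_length - len(ids))
        input_ids.append(ids + [pad_id] * pad_len)   # pad with eos/pad id
        attention_mask.append([1] * len(ids) + [0] * pad_len)  # don't attend to padding
        labels.append(ids + [-100] * pad_len)        # -100 is ignored by CrossEntropyLoss
    return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

batch = pad_batch([[464, 3290, 13], [464, 13779, 3290, 13]], max_length=8)
print(batch["input_ids"][0])  # [50256, 464, 3290, 13, 50256, 50256, 50256, 50256]
```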
### Expected behavior
my question:
1. the method I take to format batch input, is it right?
2. why can't gpt2 tokenizer auto format batch input like bert tokenzier do?
3. in this pre-training [demo](https://huggingface.co/learn/nlp-course/en/chapter7/6?fw=pt#preparing-the-dataset),
I found that it doesn't add bos and eos tokens, and adds the pad token only at the end of the sequence.
So I think that at pre-training time we only need to add pad tokens to keep the sequence length consistent.
But when it comes to fine-tuning, additional eos tokens need to be added, and the eos token needs to take loss because the model needs to learn when to stop generating.
Am I right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28598/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28597/comments | https://api.github.com/repos/huggingface/transformers/issues/28597/events | https://github.com/huggingface/transformers/issues/28597 | 2,089,437,004 | I_kwDOCUB6oc58ikdM | 28,597 | How to find or create the `model_state_dict.bin` file for the `convert_llava_weights_to_hf.py` script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-19T02:38:31 | 2024-01-22T14:28:20 | 2024-01-22T14:28:19 | CONTRIBUTOR | null | Hi @younesbelkada,
Following up on the [fix to the LLaVA convert script](https://github.com/huggingface/transformers/pull/28570) and thanks for all the help with the PR!
I encountered some issues with the convert script and wanted to ask about the recommended way to create the `model_state_dict.bin` file specified here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L74
In order to create the `model_state_dict.bin` I tried something like the following with the original https://github.com/haotian-liu/LLaVA code:
```python
import torch
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
# load model
kwargs = {"device_map": "auto", "torch_dtype": torch.float16}
model = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", low_cpu_mem_usage=True, **kwargs)
# load vision tower
model.get_vision_tower().load_model()
# Save state dict
torch.save(model.state_dict(), "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```
It works but when I used the convert script I had to make the following changes:
* Remove keys that ended with `.inv_freq` (e.g. `language_model.model.layers.0.self_attn.rotary_emb.inv_freq`)
* Comment out the update to the `model.config.vocab_size` and `model.config.text_config.vocab_size` with the `pad_shape` here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L96-L97 otherwise, when I would try to load the converted model, it will error with the following:
```python
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "Shopify/llava-1.5-7b"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
```
```console
ValueError: Trying to set a tensor of shape torch.Size([32064, 5120]) in "weight" (which has shape torch.Size([32128, 5120])), this look incorrect.
```
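For reference, the `.inv_freq` filtering described in the first bullet above can be sketched as a tiny helper (a sketch only — the helper name is made up; the key name comes from the error I saw):

```python
def strip_rotary_buffers(state_dict):
    """Drop `.inv_freq` rotary-embedding buffers, which the HF conversion
    script does not expect in the saved state dict."""
    return {k: v for k, v in state_dict.items() if not k.endswith(".inv_freq")}

sd = {
    "language_model.model.layers.0.self_attn.rotary_emb.inv_freq": 0,
    "language_model.model.layers.0.self_attn.q_proj.weight": 1,
}
strip_rotary_buffers(sd)  # only the q_proj key survives
```

This would be applied to `model.state_dict()` before calling `torch.save(...)`.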
Am I doing something wrong when I create the `model_state_dict.bin` file or am I missing something else?
Thanks again in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28597/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28596/comments | https://api.github.com/repos/huggingface/transformers/issues/28596/events | https://github.com/huggingface/transformers/issues/28596 | 2,089,426,891 | I_kwDOCUB6oc58ih_L | 28,596 | HfDeepSpeedConfig + ZeRO3 init accuracy bug! | {
"login": "hijkzzz",
"id": 19810594,
"node_id": "MDQ6VXNlcjE5ODEwNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19810594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hijkzzz",
"html_url": "https://github.com/hijkzzz",
"followers_url": "https://api.github.com/users/hijkzzz/followers",
"following_url": "https://api.github.com/users/hijkzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/hijkzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hijkzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hijkzzz/subscriptions",
"organizations_url": "https://api.github.com/users/hijkzzz/orgs",
"repos_url": "https://api.github.com/users/hijkzzz/repos",
"events_url": "https://api.github.com/users/hijkzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hijkzzz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-19T02:27:08 | 2024-01-27T17:36:03 | null | NONE | null | ### System Info
see https://github.com/microsoft/DeepSpeed/issues/4932
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://github.com/microsoft/DeepSpeed/issues/4932
### Expected behavior
https://github.com/microsoft/DeepSpeed/issues/4932 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28596/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28595/comments | https://api.github.com/repos/huggingface/transformers/issues/28595/events | https://github.com/huggingface/transformers/issues/28595 | 2,089,424,034 | I_kwDOCUB6oc58ihSi | 28,595 | Trainer is DP? support DDP? | {
"login": "ciaoyizhen",
"id": 83450192,
"node_id": "MDQ6VXNlcjgzNDUwMTky",
"avatar_url": "https://avatars.githubusercontent.com/u/83450192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciaoyizhen",
"html_url": "https://github.com/ciaoyizhen",
"followers_url": "https://api.github.com/users/ciaoyizhen/followers",
"following_url": "https://api.github.com/users/ciaoyizhen/following{/other_user}",
"gists_url": "https://api.github.com/users/ciaoyizhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciaoyizhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciaoyizhen/subscriptions",
"organizations_url": "https://api.github.com/users/ciaoyizhen/orgs",
"repos_url": "https://api.github.com/users/ciaoyizhen/repos",
"events_url": "https://api.github.com/users/ciaoyizhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciaoyizhen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-19T02:23:19 | 2024-01-22T15:08:42 | null | NONE | null | ### Feature request
Is the Trainer DP or DDP? If it is DDP, why, when I train with multiple GPUs, is the memory consumed on cuda:0 much larger than on the other GPUs? Or is it that when I increase `per_device_train_batch_size`, the cuda:0 card will run out of memory, and then it will shard the model parameters onto the other cards by itself? Or do I need to set any parameters? Please give an example. I asked ChatGPT, and it answered that the Trainer is DP.
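For what it's worth, the answer generally depends on how the script is launched: started as a single process that sees several GPUs, the Trainer falls back to DataParallel, while launching one process per GPU (e.g. via `torchrun` or `accelerate launch`) makes it use DistributedDataParallel. A hypothetical launch, assuming a training script named `train.py`:

```shell
# One process per GPU => Trainer wraps the model in DDP instead of DP (hypothetical script name).
torchrun --nproc_per_node=8 train.py
```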
### Motivation
DDP is more useful than DP.
### Your contribution
If supported, could you tell me how to use DDP? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28595/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28594/comments | https://api.github.com/repos/huggingface/transformers/issues/28594/events | https://github.com/huggingface/transformers/pull/28594 | 2,089,335,501 | PR_kwDOCUB6oc5kezMw | 28,594 | Test | {
"login": "ibarrionuevo",
"id": 27731841,
"node_id": "MDQ6VXNlcjI3NzMxODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/27731841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibarrionuevo",
"html_url": "https://github.com/ibarrionuevo",
"followers_url": "https://api.github.com/users/ibarrionuevo/followers",
"following_url": "https://api.github.com/users/ibarrionuevo/following{/other_user}",
"gists_url": "https://api.github.com/users/ibarrionuevo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibarrionuevo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibarrionuevo/subscriptions",
"organizations_url": "https://api.github.com/users/ibarrionuevo/orgs",
"repos_url": "https://api.github.com/users/ibarrionuevo/repos",
"events_url": "https://api.github.com/users/ibarrionuevo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibarrionuevo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-19T00:26:01 | 2024-01-19T00:26:11 | 2024-01-19T00:26:11 | NONE | null | This is a test pull request created for CI/CD vulnerability testing purposes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28594/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28594",
"html_url": "https://github.com/huggingface/transformers/pull/28594",
"diff_url": "https://github.com/huggingface/transformers/pull/28594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28594.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28593/comments | https://api.github.com/repos/huggingface/transformers/issues/28593/events | https://github.com/huggingface/transformers/issues/28593 | 2,088,952,370 | I_kwDOCUB6oc58guIy | 28,593 | ViltForTokenClassification not working for personalize multiclass classification. | {
"login": "matiasfreitas",
"id": 36213075,
"node_id": "MDQ6VXNlcjM2MjEzMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36213075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matiasfreitas",
"html_url": "https://github.com/matiasfreitas",
"followers_url": "https://api.github.com/users/matiasfreitas/followers",
"following_url": "https://api.github.com/users/matiasfreitas/following{/other_user}",
"gists_url": "https://api.github.com/users/matiasfreitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matiasfreitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matiasfreitas/subscriptions",
"organizations_url": "https://api.github.com/users/matiasfreitas/orgs",
"repos_url": "https://api.github.com/users/matiasfreitas/repos",
"events_url": "https://api.github.com/users/matiasfreitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matiasfreitas/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-18T19:50:59 | 2024-01-19T11:06:04 | null | NONE | null | ### System Info
I had several errors trying to use the ViLT code for multiclass classification.
On lines 1473-1478 of [modeling_vilt.py](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/vilt/modeling_vilt.py#L1413) we have this code:
```
if labels is not None:
    loss_fct = CrossEntropyLoss()
    # move labels to correct device to enable PP
    labels = labels.to(logits.device)
    loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
Based on my manual testing (I confess I'm not the most skilled to be sure about the theoretical correctness) and on the file [modeling_vit.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py),
I changed these lines to:
```
loss = None
if labels is not None:
    # move labels to correct device to enable model parallelism
    labels = labels.to(logits.device)
    if self.config.problem_type is None:
        if self.num_labels == 1:
            self.config.problem_type = "regression"
        elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
            self.config.problem_type = "single_label_classification"
        else:
            self.config.problem_type = "multi_label_classification"
    if self.config.problem_type == "regression":
        loss_fct = MSELoss()
        if self.num_labels == 1:
            loss = loss_fct(logits.squeeze(), labels.squeeze())
        else:
            loss = loss_fct(logits, labels)
    elif self.config.problem_type == "single_label_classification":
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    elif self.config.problem_type == "multi_label_classification":
        loss_fct = BCEWithLogitsLoss()
        one_hot_labels = F.one_hot(labels, self.num_labels).float()
        loss = loss_fct(logits, one_hot_labels)
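As a quick sanity check of the shape logic in the multi-label branch (a pure-Python stand-in for `F.one_hot`, illustrative only): given class indices of shape `(batch,)`, the one-hot targets have shape `(batch, num_labels)`, matching the logits that `BCEWithLogitsLoss` expects.

```python
def one_hot(labels, num_classes):
    """Minimal stand-in for torch.nn.functional.one_hot on a 1-D list of class indices."""
    return [[1.0 if i == lab else 0.0 for i in range(num_classes)] for lab in labels]

targets = one_hot([2, 0], 3)
# targets is a (2, 3) table, matching logits of shape (batch=2, num_labels=3)
```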
And the results are fine here on my computer.
I think that should be changed in the library.
### Who can help?
@ArthurZucker @amyeroberts @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running this code:
```
from transformers import DefaultDataCollator, TrainingArguments, Trainer
import evaluate
import numpy as np
from transformers import ViltForTokenClassification, TrainingArguments, Trainer
from torch import Tensor
def getModel(path=None):
    if path is None:
        path = "dandelin/vilt-b32-finetuned-nlvr2"
    model = ViltForTokenClassification.from_pretrained(
        path,
        num_labels=len(label['label2idx']),
        id2label=label['idx2label'],
        label2id=label['label2idx'],
        return_dict=True,
        problem_type="multi_label_classification"
    )
    return model
#Very simple data collator that simply collates batches of dict-like
#objects and performs special handling for potential keys named label and label_ids
data_collator = DefaultDataCollator()
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
training_args = TrainingArguments(
output_dir=middle_step,
# Directory where model checkpoints and logs will be saved.
remove_unused_columns=False,
# Whether to remove unused columns from the input data before training.
evaluation_strategy="epoch",
# The evaluation strategy to adopt during training. "epoch" evaluates at the end of each epoch.
save_strategy="epoch",
# The checkpoint save strategy during training. "epoch" saves at the end of each epoch.
learning_rate=5e-5,
# The initial learning rate for the optimizer.
per_device_train_batch_size=16,
# Batch size per GPU or CPU for training.
gradient_accumulation_steps=4,
# Gradient accumulation involves updating the model's weights
# only after accumulating gradients over multiple batches.
# This can be useful when the effective batch size is too large to fit into GPU memory.
# Instead of processing the entire batch at once, the model processes
# smaller batches and accumulates gradients before updating the weights.
per_device_eval_batch_size=16,
# Batch size per GPU or CPU for evaluation.
num_train_epochs=6,
# Total number of training epochs.
warmup_ratio=0.1,
# Ratio of total training steps used for warmup.
logging_steps=10,
# Log every n updates steps.
load_best_model_at_end=True,
# Whether or not to load the best model found at the end of training.
metric_for_best_model="accuracy",
# Metric used to determine the best model, e.g., "accuracy".
)
trainer = Trainer(
model_init=getModel,
args=training_args,
data_collator=data_collator,
train_dataset=dataset["train"],
eval_dataset=dataset["val"],
compute_metrics=compute_metrics,
)
```
### Expected behavior
Not raise an error on the line `loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` because of mismatched tensor sizes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28593/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28592/comments | https://api.github.com/repos/huggingface/transformers/issues/28592/events | https://github.com/huggingface/transformers/issues/28592 | 2,088,835,235 | I_kwDOCUB6oc58gRij | 28,592 | Mixtral gets stuck at Loading checkpoint shards. | {
"login": "AdamLouly",
"id": 27873459,
"node_id": "MDQ6VXNlcjI3ODczNDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdamLouly",
"html_url": "https://github.com/AdamLouly",
"followers_url": "https://api.github.com/users/AdamLouly/followers",
"following_url": "https://api.github.com/users/AdamLouly/following{/other_user}",
"gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions",
"organizations_url": "https://api.github.com/users/AdamLouly/orgs",
"repos_url": "https://api.github.com/users/AdamLouly/repos",
"events_url": "https://api.github.com/users/AdamLouly/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdamLouly/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-18T18:26:14 | 2024-01-25T02:10:23 | null | CONTRIBUTOR | null | ### System Info
Nightly transformers.
nightly torch
8 GPUs
### Who can help?
When trying to run Mixtral using the example in transformers, it gets stuck at "Loading checkpoint shards" at this point:
Loading checkpoint shards: 42%|██████████████████████████████████████████████▋
I noticed this only happens when running on multiple GPUs.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the fine-tuning example in transformers.
### Expected behavior
stuck at
Loading checkpoint shards: 42%|██████████████████████████████████████████████▋ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28592/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28591/comments | https://api.github.com/repos/huggingface/transformers/issues/28591/events | https://github.com/huggingface/transformers/issues/28591 | 2,088,735,017 | I_kwDOCUB6oc58f5Ep | 28,591 | Idefics - AttentionMasks wrongly set with padding='longest' | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-18T17:19:14 | 2024-01-19T12:43:07 | null | MEMBER | null | ### System Info
transformers==4.36.2
### Reproduction
Reported by https://huggingface.co/VishnuSuganth
https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/discussions/11
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28591/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28590/comments | https://api.github.com/repos/huggingface/transformers/issues/28590/events | https://github.com/huggingface/transformers/pull/28590 | 2,088,717,209 | PR_kwDOCUB6oc5kcqsX | 28,590 | Fix id2label assignment in run_classification.py | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-18T17:08:02 | 2024-01-22T11:31:32 | 2024-01-22T11:31:32 | CONTRIBUTOR | null | # What does this PR do?
This pull request addresses an issue in the `run_classification.py` script where the assignment of the `id2label` attribute in the model's config is incorrect. The current implementation copies `config.label2id` without modifying it, leading to an incorrect mapping. The proposed fix ensures that the `id2label` attribute is assigned based on the correct mapping (`label_to_id`) to resolve this issue.
**Changes Made:**
- Modified the assignment of `id2label` in the `run_classification.py` script to use the correct label-to-id mapping.
**Context:**
This issue was introduced with transformers version 4.36, and the incorrect assignment can lead to unexpected behavior in the script.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28589
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @ArthurZucker
- @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28590/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28590",
"html_url": "https://github.com/huggingface/transformers/pull/28590",
"diff_url": "https://github.com/huggingface/transformers/pull/28590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28590.patch",
"merged_at": "2024-01-22T11:31:31"
} |
https://api.github.com/repos/huggingface/transformers/issues/28589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28589/comments | https://api.github.com/repos/huggingface/transformers/issues/28589/events | https://github.com/huggingface/transformers/issues/28589 | 2,088,705,569 | I_kwDOCUB6oc58fx4h | 28,589 | Fix id2label assignment in run_classification.py | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-18T17:01:03 | 2024-01-22T11:31:33 | 2024-01-22T11:31:33 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/
```
### Expected behavior
**Issue Description:**
The `run_classification.py` script currently has an issue where the assignment of `id2label` in the model's config is incorrect. The problem arises from copying `config.label2id` without modifying it later on. This issue was introduced with transformers version 4.36.
**Steps to Reproduce:**
1. Execute the `run_classification.py` script with a configuration file.
2. Inspect the `id2label` attribute in the model's config.
**Expected Behavior:**
The `id2label` attribute should be assigned correctly, reflecting the label-to-id mapping.
**Actual Behavior:**
The `id2label` attribute is assigned based on the original `config.label2id`, leading to incorrect mapping.
**Proposed Solution:**
Modify the following line in `run_classification.py`:
```python
model.config.id2label = {id: label for label, id in config.label2id.items()}
```
to:
```python
model.config.id2label = {id: label for label, id in label_to_id.items()}
```
This change ensures that the correct mapping is used. | {
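To make the mix-up concrete, here is a toy, self-contained illustration; the dictionaries below are made up for the example and are not taken from the script:

```python
# `config.label2id` can still hold the default placeholder mapping,
# while `label_to_id` holds the mapping the script actually built.
default_label2id = {"LABEL_0": 0, "LABEL_1": 1}  # stale default in `config`
label_to_id = {"negative": 0, "positive": 1}     # real mapping built by the script

buggy_id2label = {idx: label for label, idx in default_label2id.items()}
fixed_id2label = {idx: label for label, idx in label_to_id.items()}

print(buggy_id2label)  # {0: 'LABEL_0', 1: 'LABEL_1'} -- ids map to placeholders
print(fixed_id2label)  # {0: 'negative', 1: 'positive'} -- ids map to real labels
```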
"url": "https://api.github.com/repos/huggingface/transformers/issues/28589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28589/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28588/comments | https://api.github.com/repos/huggingface/transformers/issues/28588/events | https://github.com/huggingface/transformers/pull/28588 | 2,088,643,049 | PR_kwDOCUB6oc5kcab2 | 28,588 | Add tf_keras imports to prepare for Keras 3 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-18T16:26:39 | 2024-01-30T17:26:37 | 2024-01-30T17:26:36 | MEMBER | null | Keras 3 will break backward compatibility for our TF code, and is becoming the default Keras in TF 2.16. This PR uses the `tf_keras` package to maintain backward compatibility - it imports tf_keras if available, and if not then it attempts to import keras, but raises an issue if the version is >= 3.
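A minimal sketch of the fallback logic described above. This is an illustration, not the actual implementation: it takes availability and version as plain arguments so the decision logic is visible on its own.

```python
def pick_keras_module(tf_keras_available: bool, keras_version: str) -> str:
    """Decide which Keras module name to import, mirroring the fallback
    described above: prefer tf_keras; otherwise accept keras only when
    its major version is below 3."""
    if tf_keras_available:
        return "tf_keras"
    major = int(keras_version.split(".")[0])
    if major >= 3:
        raise ValueError(
            "Keras >= 3 breaks backward compatibility with the TF modelling "
            "code; install the backwards-compatible `tf-keras` package instead."
        )
    return "keras"

print(pick_keras_module(True, "3.0.2"))    # tf_keras
print(pick_keras_module(False, "2.15.0"))  # keras
```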
Our future plan is to ensure that TF code remains backward compatible, but to support Keras 3 in all its framework-independent glory with new Keras classes (e.g. `TFBertModel` -> `KerasBertModel`). The PR for this is at #26224, but it's on hold until we handle the urgent issue of preserving backward compatibility. It was also blocked by the need for a couple of other PRs, but those are mostly in now. Because the full Keras 3 PR will require TF models to be rewritten with 100% Keras ops instead of TF ones, we'll likely need to do a community push once the core modelling code is ready to port everything.
cc @fchollet
Fixes #27377
Fixes #28296
TODO:
- [X] Replace `keras` or `tf.keras` with `tf_keras` falling back to `keras` in core modelling code
- [x] Replace `keras` or `tf.keras` with `tf_keras` falling back to `keras` in all model files
- [x] Confirm versions - should we reject Keras >= 3, or Keras >= 2.16? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28588/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28588",
"html_url": "https://github.com/huggingface/transformers/pull/28588",
"diff_url": "https://github.com/huggingface/transformers/pull/28588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28588.patch",
"merged_at": "2024-01-30T17:26:36"
} |
https://api.github.com/repos/huggingface/transformers/issues/28587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28587/comments | https://api.github.com/repos/huggingface/transformers/issues/28587/events | https://github.com/huggingface/transformers/pull/28587 | 2,088,598,424 | PR_kwDOCUB6oc5kcQm9 | 28,587 | Support gated Linear Layers for SwitchTransformers | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-18T16:03:18 | 2024-01-22T15:53:55 | 2024-01-22T15:53:54 | CONTRIBUTOR | null | # What does this PR do?
The new version of SwitchTransformers uses gated linear layers.
This pull request adds support for gated linear layers in SwitchTransformers.
This is very similar to the difference between the T5 and T5 v1.1 models.
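For context, the gated variant replaces the single up-projection of the classic T5 feed-forward block with two projections whose outputs are multiplied. A rough NumPy sketch; the weight names and the tanh-based GELU approximation are illustrative, not the exact SwitchTransformers code:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, as used in several T5-family implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gated_ffn(x, wi_0, wi_1, wo):
    # Gated variant: gelu(x @ wi_0) gates (x @ wi_1) elementwise.
    # The ungated original T5 block would instead be: relu(x @ wi) @ wo.
    return (gelu(x @ wi_0) * (x @ wi_1)) @ wo

d_model, d_ff = 4, 8
rng = np.random.default_rng(0)
x = np.ones((1, d_model))
y = gated_ffn(
    x,
    rng.normal(size=(d_model, d_ff)),
    rng.normal(size=(d_model, d_ff)),
    rng.normal(size=(d_ff, d_model)),
)
print(y.shape)  # (1, 4)
```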
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts, @ArthurZucker, @younesbelkada, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28587/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28587",
"html_url": "https://github.com/huggingface/transformers/pull/28587",
"diff_url": "https://github.com/huggingface/transformers/pull/28587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28587.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28585/comments | https://api.github.com/repos/huggingface/transformers/issues/28585/events | https://github.com/huggingface/transformers/pull/28585 | 2,088,483,355 | PR_kwDOCUB6oc5kb3U0 | 28,585 | Add w2v2bert to pipeline | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-18T15:05:26 | 2024-01-26T13:01:00 | 2024-01-19T11:25:01 | COLLABORATOR | null | # What does this PR do?
https://github.com/huggingface/transformers/pull/28165 introduced a new W2V2-based model that uses a different feature extractor than classic CTC-based models.
In particular, it takes mel-spectrograms as `input_features`, instead of raw waveform as `input_values`.
The pipeline only takes `input_values` for this kind of model, which requires a bit of a workaround.
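A hypothetical sketch of the dispatch the pipeline needs. This is a simplification: the real pipeline works on the feature extractor's output (stood in here by a plain dict), and `model_input_names` is the attribute feature extractors expose listing their expected input keys.

```python
def select_audio_input(processed, model_input_names):
    """Return the (key, tensor) pair the model expects: "input_features"
    (mel-spectrograms, e.g. Wav2Vec2-BERT) or "input_values" (raw
    waveform, classic CTC models)."""
    for name in ("input_features", "input_values"):
        if name in model_input_names and name in processed:
            return name, processed[name]
    raise KeyError(
        f"Expected 'input_features' or 'input_values' in {list(processed)}"
    )

key, _ = select_audio_input(
    {"input_values": [0.0, 0.1]}, ["input_values", "attention_mask"]
)
print(key)  # input_values
```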
Note that I've also run every slow tests from the ASR pipeline, just to make sure.
cc @amyeroberts @sanchit-gandhi
Also cc @ydshieh, it would be nice to add the model architecture to the tiny model repo ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28585/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28585",
"html_url": "https://github.com/huggingface/transformers/pull/28585",
"diff_url": "https://github.com/huggingface/transformers/pull/28585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28585.patch",
"merged_at": "2024-01-19T11:25:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/28584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28584/comments | https://api.github.com/repos/huggingface/transformers/issues/28584/events | https://github.com/huggingface/transformers/pull/28584 | 2,088,480,788 | PR_kwDOCUB6oc5kb2wP | 28,584 | Don't save `processor_config.json` if a processor has no extra attribute | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-18T15:04:06 | 2024-01-19T09:59:16 | 2024-01-19T09:59:15 | COLLABORATOR | null | # What does this PR do?
Don't save `processor_config.json` if a processor has no extra attribute | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28584/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28584",
"html_url": "https://github.com/huggingface/transformers/pull/28584",
"diff_url": "https://github.com/huggingface/transformers/pull/28584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28584.patch",
"merged_at": "2024-01-19T09:59:15"
} |
https://api.github.com/repos/huggingface/transformers/issues/28583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28583/comments | https://api.github.com/repos/huggingface/transformers/issues/28583/events | https://github.com/huggingface/transformers/pull/28583 | 2,088,447,016 | PR_kwDOCUB6oc5kbvTa | 28,583 | [`docs`] Improve visualization for vertical parallelism | {
"login": "petergtz",
"id": 3618401,
"node_id": "MDQ6VXNlcjM2MTg0MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3618401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petergtz",
"html_url": "https://github.com/petergtz",
"followers_url": "https://api.github.com/users/petergtz/followers",
"following_url": "https://api.github.com/users/petergtz/following{/other_user}",
"gists_url": "https://api.github.com/users/petergtz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petergtz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petergtz/subscriptions",
"organizations_url": "https://api.github.com/users/petergtz/orgs",
"repos_url": "https://api.github.com/users/petergtz/repos",
"events_url": "https://api.github.com/users/petergtz/events{/privacy}",
"received_events_url": "https://api.github.com/users/petergtz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-18T14:48:43 | 2024-01-29T14:51:55 | 2024-01-25T17:55:11 | CONTRIBUTOR | null | # What does this PR do?
The documentation says "We refer to this Model parallelism as “Vertical” because of how models are typically visualized.", but then visualizes the model horizontally. This change makes the visualization actually vertical.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28583/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28583",
"html_url": "https://github.com/huggingface/transformers/pull/28583",
"diff_url": "https://github.com/huggingface/transformers/pull/28583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28583.patch",
"merged_at": "2024-01-25T17:55:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/28582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28582/comments | https://api.github.com/repos/huggingface/transformers/issues/28582/events | https://github.com/huggingface/transformers/pull/28582 | 2,088,342,596 | PR_kwDOCUB6oc5kbYbr | 28,582 | Making CTC training example more general | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-18T13:55:34 | 2024-01-18T17:24:29 | 2024-01-18T17:01:49 | COLLABORATOR | null | # What does this PR do?
#28165 introduced a new W2V2-based model that uses a different feature extractor than classic CTC-based models.
In particular, it takes mel-spectrograms as `input_features`, instead of raw waveform as `input_values`.
This runs well with the [example from the README](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc), as well as with the newly introduced model. Happy to try some other configurations as well.
cc @patrickvonplaten and @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28582/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28582",
"html_url": "https://github.com/huggingface/transformers/pull/28582",
"diff_url": "https://github.com/huggingface/transformers/pull/28582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28582.patch",
"merged_at": "2024-01-18T17:01:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28581/comments | https://api.github.com/repos/huggingface/transformers/issues/28581/events | https://github.com/huggingface/transformers/pull/28581 | 2,088,324,666 | PR_kwDOCUB6oc5kbUgz | 28,581 | Fix phi model doc checkpoint | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-18T13:45:59 | 2024-01-22T17:15:11 | 2024-01-22T17:15:07 | COLLABORATOR | null | # What does this PR do?
Small fix, cf. https://github.com/huggingface/transformers/commit/d93ef7d7512e79612606f29e6ae308920f0a86cd#r137345713
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28581/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28581",
"html_url": "https://github.com/huggingface/transformers/pull/28581",
"diff_url": "https://github.com/huggingface/transformers/pull/28581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28581.patch",
"merged_at": "2024-01-22T17:15:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28580/comments | https://api.github.com/repos/huggingface/transformers/issues/28580/events | https://github.com/huggingface/transformers/issues/28580 | 2,088,302,103 | I_kwDOCUB6oc58ePYX | 28,580 | This model has one file that has been marked as unsafe. [training_args.bin] | {
"login": "rizwan-ai",
"id": 34979598,
"node_id": "MDQ6VXNlcjM0OTc5NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/34979598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rizwan-ai",
"html_url": "https://github.com/rizwan-ai",
"followers_url": "https://api.github.com/users/rizwan-ai/followers",
"following_url": "https://api.github.com/users/rizwan-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/rizwan-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rizwan-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rizwan-ai/subscriptions",
"organizations_url": "https://api.github.com/users/rizwan-ai/orgs",
"repos_url": "https://api.github.com/users/rizwan-ai/repos",
"events_url": "https://api.github.com/users/rizwan-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/rizwan-ai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-18T13:34:07 | 2024-01-22T13:56:04 | null | NONE | null | ### System Info
**Framework versions**
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
This model has one file that has been marked as unsafe.
[training_args.bin](https://huggingface.co/rizwan-ai/distilbert-base-uncased-finetuned-emotion/blob/main/training_args.bin)
### Git LFS Details
* **SHA256:** d672df2806e4b013fbfdf9d995526b2c4e4a7d56a8b84b77b1d6213241ea11f0
* **Pointer size:** 129 Bytes
* **Size of remote file:** 4.73 kB
#### Detected Pickle imports (9)
* "transformers.training_args.TrainingArguments",
* "transformers.training_args.OptimizerNames",
* "transformers.trainer_utils.SchedulerType",
* "accelerate.state.PartialState",
* "torch.device",
* "transformers.trainer_utils.HubStrategy",
* "accelerate.utils.dataclasses.DistributedType",
* "__builtin__.getattr",
* "transformers.trainer_utils.IntervalStrategy"
@ArthurZucker @younesbelkada @pc
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://huggingface.co/rizwan-ai/distilbert-base-uncased-finetuned-emotion
### Expected behavior
How to fix it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28580/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28579/comments | https://api.github.com/repos/huggingface/transformers/issues/28579/events | https://github.com/huggingface/transformers/pull/28579 | 2,088,191,321 | PR_kwDOCUB6oc5ka3IF | 28,579 | Fix: `generate()` with `max_new_tokens=0` produces a single token. | {
"login": "danielkorat",
"id": 32893314,
"node_id": "MDQ6VXNlcjMyODkzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielkorat",
"html_url": "https://github.com/danielkorat",
"followers_url": "https://api.github.com/users/danielkorat/followers",
"following_url": "https://api.github.com/users/danielkorat/following{/other_user}",
"gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions",
"organizations_url": "https://api.github.com/users/danielkorat/orgs",
"repos_url": "https://api.github.com/users/danielkorat/repos",
"events_url": "https://api.github.com/users/danielkorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielkorat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-18T12:30:53 | 2024-01-21T09:25:37 | 2024-01-21T09:25:37 | NONE | null | # What does this PR do?
Currently, setting `max_new_tokens=0` produces 1 token instead of 0, and the warning is unclear.
For example, for the following code:
```python
checkpoint = "bigcode/tiny_starcoder_py"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
max_new_tokens = 0
outputs = model.generate(**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=max_new_tokens)
input_length = len(inputs['input_ids'][0])
output_length = len(outputs[0])
print(f"\nTest:{input_length - output_length == max_new_tokens}")
```
The output is:
```bash
utils.py:1134: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Test: False
```
After the fix, this is the output:
```bash
`max_new_tokens`=0, no tokens will be generated.
utils.py:1134: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Test: True
```
(Note the new warning).
Currently fixed only for `greedy_search()`. Once this PR is reviewed, I'll add the fix to all other generation modes.
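For reference, the intended semantics can be sketched with a pure-Python stand-in for the generation loop (a simplification, not the actual `generate()` code; the appended `0` is a placeholder for a sampled token id):

```python
# Pure-Python stand-in for a greedy generation loop (hypothetical
# simplification; the real fix belongs in `greedy_search` and friends).
def generate_sketch(input_ids, max_new_tokens):
    output = list(input_ids)
    if max_new_tokens == 0:
        # The new early-exit: warn and return the prompt unchanged.
        print("`max_new_tokens`=0, no tokens will be generated.")
        return output
    for _ in range(max_new_tokens):
        output.append(0)  # placeholder for sampling the next token id
    return output

out = generate_sketch([1, 2, 3], max_new_tokens=0)
# out == [1, 2, 3]: no new tokens are appended
```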
@gante @amyeroberts
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28579/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28579",
"html_url": "https://github.com/huggingface/transformers/pull/28579",
"diff_url": "https://github.com/huggingface/transformers/pull/28579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28579.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28578/comments | https://api.github.com/repos/huggingface/transformers/issues/28578/events | https://github.com/huggingface/transformers/pull/28578 | 2,088,073,432 | PR_kwDOCUB6oc5kadRR | 28,578 | [SigLIP] Don't pad by default | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-18T11:19:11 | 2024-01-19T12:30:00 | 2024-01-19T12:30:00 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28569
Note: will require an update of the code snippets of the model cards + my demo notebook | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28578/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28578",
"html_url": "https://github.com/huggingface/transformers/pull/28578",
"diff_url": "https://github.com/huggingface/transformers/pull/28578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28578.patch",
"merged_at": "2024-01-19T12:30:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/28577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28577/comments | https://api.github.com/repos/huggingface/transformers/issues/28577/events | https://github.com/huggingface/transformers/issues/28577 | 2,088,057,678 | I_kwDOCUB6oc58dTtO | 28,577 | Inconsistent behavior between tokenizer and fast tokenizer | {
"login": "xuzhenqi",
"id": 3806642,
"node_id": "MDQ6VXNlcjM4MDY2NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3806642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuzhenqi",
"html_url": "https://github.com/xuzhenqi",
"followers_url": "https://api.github.com/users/xuzhenqi/followers",
"following_url": "https://api.github.com/users/xuzhenqi/following{/other_user}",
"gists_url": "https://api.github.com/users/xuzhenqi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuzhenqi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuzhenqi/subscriptions",
"organizations_url": "https://api.github.com/users/xuzhenqi/orgs",
"repos_url": "https://api.github.com/users/xuzhenqi/repos",
"events_url": "https://api.github.com/users/xuzhenqi/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuzhenqi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-18T11:10:11 | 2024-01-18T11:17:34 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-4.18.0-193.6.3.el8_2.v1.4.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", trust_remote_code=True, use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", trust_remote_code=True, use_fast=True)
prompt = "▁<PRE>//"
inputs = tokenizer(prompt, return_tensors="pt")
print(f"tokenizer ids: {inputs.input_ids}")
inputs = fast_tokenizer(prompt, return_tensors="pt")
print(f"fast tokenizer ids: {inputs.input_ids}")
```
This script will output:
```
tokenizer ids: tensor([[ 1, 32007, 458]])
fast tokenizer ids: tensor([[ 1, 32007, 849]])
```
In the `tokenizer.json` from the model folder, we can see:
```
"//": 458,
"▁//": 849,
```
The fast tokenizer probably ignores the `<PRE>` token; is this the correct behavior?
### Expected behavior
The fast tokenizer should be consistent with the normal (slow) tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28577/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28576/comments | https://api.github.com/repos/huggingface/transformers/issues/28576/events | https://github.com/huggingface/transformers/issues/28576 | 2,088,057,073 | I_kwDOCUB6oc58dTjx | 28,576 | Feature Request: Expose an Args to Set Prefetch Factor in Trainer | {
"login": "uygnef",
"id": 13539441,
"node_id": "MDQ6VXNlcjEzNTM5NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/13539441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uygnef",
"html_url": "https://github.com/uygnef",
"followers_url": "https://api.github.com/users/uygnef/followers",
"following_url": "https://api.github.com/users/uygnef/following{/other_user}",
"gists_url": "https://api.github.com/users/uygnef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uygnef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uygnef/subscriptions",
"organizations_url": "https://api.github.com/users/uygnef/orgs",
"repos_url": "https://api.github.com/users/uygnef/repos",
"events_url": "https://api.github.com/users/uygnef/events{/privacy}",
"received_events_url": "https://api.github.com/users/uygnef/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-18T11:09:51 | 2024-01-18T11:48:21 | null | NONE | null | ### Feature request
Currently, the trainer does not allow users to set the prefetch factor as an argument from the training script.
### Motivation
This can be a limitation when training large models, especially when the data is fetched from a remote server, as data loading can become a bottleneck while the next partition is being downloaded.
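A minimal sketch of what exposing such an argument could look like (the field name `dataloader_prefetch_factor` is an assumption for illustration, not an existing API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the argument a PR could add to TrainingArguments;
# the name `dataloader_prefetch_factor` is an assumption, not the merged API.
@dataclass
class PrefetchArgsSketch:
    dataloader_prefetch_factor: int = field(
        default=2,
        metadata={"help": "Number of batches loaded in advance by each worker."},
    )

args = PrefetchArgsSketch(dataloader_prefetch_factor=4)
# The value would then be forwarded to torch's DataLoader(prefetch_factor=...).
```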
### Your contribution
I can commit a PR | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28576/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28575/comments | https://api.github.com/repos/huggingface/transformers/issues/28575/events | https://github.com/huggingface/transformers/pull/28575 | 2,088,049,724 | PR_kwDOCUB6oc5kaYGD | 28,575 | Use `LoggingLevel` context manager in 3 tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-18T11:05:26 | 2024-01-18T13:41:26 | 2024-01-18T13:41:25 | COLLABORATOR | null | # What does this PR do?
To avoid flaky test failures caused by the `transformers` root logger's level being changed by some (as yet unknown) other tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28575/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28575",
"html_url": "https://github.com/huggingface/transformers/pull/28575",
"diff_url": "https://github.com/huggingface/transformers/pull/28575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28575.patch",
"merged_at": "2024-01-18T13:41:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28574/comments | https://api.github.com/repos/huggingface/transformers/issues/28574/events | https://github.com/huggingface/transformers/pull/28574 | 2,088,021,115 | PR_kwDOCUB6oc5kaR5J | 28,574 | chore: Fix multiple typos | {
"login": "hugo-syn",
"id": 61210734,
"node_id": "MDQ6VXNlcjYxMjEwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/61210734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugo-syn",
"html_url": "https://github.com/hugo-syn",
"followers_url": "https://api.github.com/users/hugo-syn/followers",
"following_url": "https://api.github.com/users/hugo-syn/following{/other_user}",
"gists_url": "https://api.github.com/users/hugo-syn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugo-syn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugo-syn/subscriptions",
"organizations_url": "https://api.github.com/users/hugo-syn/orgs",
"repos_url": "https://api.github.com/users/hugo-syn/repos",
"events_url": "https://api.github.com/users/hugo-syn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugo-syn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-18T10:49:46 | 2024-01-18T13:35:09 | 2024-01-18T13:35:09 | CONTRIBUTOR | null | # What does this PR do?
Fix multiple typos
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28574/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28574",
"html_url": "https://github.com/huggingface/transformers/pull/28574",
"diff_url": "https://github.com/huggingface/transformers/pull/28574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28574.patch",
"merged_at": "2024-01-18T13:35:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28573/comments | https://api.github.com/repos/huggingface/transformers/issues/28573/events | https://github.com/huggingface/transformers/issues/28573 | 2,087,647,786 | I_kwDOCUB6oc58bvoq | 28,573 | data_collator in examples/pytorch/language-modeling/run_clm.py | {
"login": "pierowu",
"id": 61963313,
"node_id": "MDQ6VXNlcjYxOTYzMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/61963313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pierowu",
"html_url": "https://github.com/pierowu",
"followers_url": "https://api.github.com/users/pierowu/followers",
"following_url": "https://api.github.com/users/pierowu/following{/other_user}",
"gists_url": "https://api.github.com/users/pierowu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pierowu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pierowu/subscriptions",
"organizations_url": "https://api.github.com/users/pierowu/orgs",
"repos_url": "https://api.github.com/users/pierowu/repos",
"events_url": "https://api.github.com/users/pierowu/events{/privacy}",
"received_events_url": "https://api.github.com/users/pierowu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-18T06:59:55 | 2024-01-24T07:55:37 | 2024-01-24T07:55:37 | NONE | null | ### System Info
According to the script, the trainer uses `default_data_collator` for causal language modeling.
https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/examples/pytorch/language-modeling/run_clm.py#L604
Shouldn't we use `DataCollatorForLanguageModeling` to shift the inputs and labels by one token instead? It seems that `default_data_collator` can't achieve this.
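For reference, causal-LM heads in the library shift the labels internally when computing the loss, which is why an unshifted collator can suffice; the shift in question can be sketched in pure Python (lists instead of tensors, no loss computation):

```python
# Pure-Python sketch of the label shift that causal-LM model forwards
# perform internally (simplified illustration, not the library code).
input_ids = [5, 8, 2, 9, 1]
labels = list(input_ids)        # the collator passes labels == input_ids
shift_inputs = labels[:-1]      # the token at position t ...
shift_labels = labels[1:]       # ... predicts the token at position t + 1
pairs = list(zip(shift_inputs, shift_labels))
# pairs == [(5, 8), (8, 2), (2, 9), (9, 1)]
```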
@ArthurZucker @younesbelkada @muellerzr and @pacman100
Thank you for answering!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Same as examples/pytorch/language-modeling/run_clm.py
### Expected behavior
Correctly train a causal language model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28573/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28572/comments | https://api.github.com/repos/huggingface/transformers/issues/28572/events | https://github.com/huggingface/transformers/pull/28572 | 2,087,586,420 | PR_kwDOCUB6oc5kYy8L | 28,572 | add YaRN RoPE scaling code for LLaMA | {
"login": "jquesnelle",
"id": 687076,
"node_id": "MDQ6VXNlcjY4NzA3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/687076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jquesnelle",
"html_url": "https://github.com/jquesnelle",
"followers_url": "https://api.github.com/users/jquesnelle/followers",
"following_url": "https://api.github.com/users/jquesnelle/following{/other_user}",
"gists_url": "https://api.github.com/users/jquesnelle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jquesnelle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jquesnelle/subscriptions",
"organizations_url": "https://api.github.com/users/jquesnelle/orgs",
"repos_url": "https://api.github.com/users/jquesnelle/repos",
"events_url": "https://api.github.com/users/jquesnelle/events{/privacy}",
"received_events_url": "https://api.github.com/users/jquesnelle/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-18T06:13:58 | 2024-01-19T12:19:50 | null | NONE | null | # What does this PR do?
This adds the [YaRN RoPE scaling method](https://arxiv.org/abs/2309.00071) to the LLaMA-class of models. It can be activated for finetuned models by setting `rope_scaling.type = 'yarn'` or for non-finetuned models by setting `rope_scaling.type = 'dynamic-yarn'`.
This PR enables the LLaMA family of models (LLaMA, Mistral, SOLAR, etc.) to use YaRN without `trust_remote_code=True`.
While we've [released several models](https://github.com/jquesnelle/yarn) that use `trust_remote_code`, it's nicer to not have to execute arbitrary code 🙂
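As a sketch, the activation described above might look like the following config fragments (any key beyond `type`, such as `factor`, is an assumption following the existing linear/dynamic `rope_scaling` convention; the exact keys may differ in the final PR):

```python
# Hypothetical `rope_scaling` entries for a LlamaConfig-style config dict;
# key names other than `type` are assumptions, not the merged API.
finetuned_cfg = {"rope_scaling": {"type": "yarn", "factor": 4.0}}
non_finetuned_cfg = {"rope_scaling": {"type": "dynamic-yarn", "factor": 4.0}}
```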
cc: @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28572/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28572/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28572",
"html_url": "https://github.com/huggingface/transformers/pull/28572",
"diff_url": "https://github.com/huggingface/transformers/pull/28572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28572.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28570/comments | https://api.github.com/repos/huggingface/transformers/issues/28570/events | https://github.com/huggingface/transformers/pull/28570 | 2,087,361,488 | PR_kwDOCUB6oc5kYBkM | 28,570 | [`Llava`] Fix convert_llava_weights_to_hf.py script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-18T02:35:48 | 2024-01-19T12:31:25 | 2024-01-19T12:31:25 | CONTRIBUTOR | null | Fix call to `tokenizer.add_tokens` in `convert_llava_weights_to_hf.py` and `convert_vipllava_weights_to_hf.py` scripts.
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada if you can review
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28570/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28570",
"html_url": "https://github.com/huggingface/transformers/pull/28570",
"diff_url": "https://github.com/huggingface/transformers/pull/28570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28570.patch",
"merged_at": "2024-01-19T12:31:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28569/comments | https://api.github.com/repos/huggingface/transformers/issues/28569/events | https://github.com/huggingface/transformers/issues/28569 | 2,087,314,475 | I_kwDOCUB6oc58aeQr | 28,569 | Clarify usage / implementation of padding for SigLIP model processor | {
"login": "skysyk",
"id": 3191242,
"node_id": "MDQ6VXNlcjMxOTEyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3191242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skysyk",
"html_url": "https://github.com/skysyk",
"followers_url": "https://api.github.com/users/skysyk/followers",
"following_url": "https://api.github.com/users/skysyk/following{/other_user}",
"gists_url": "https://api.github.com/users/skysyk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skysyk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skysyk/subscriptions",
"organizations_url": "https://api.github.com/users/skysyk/orgs",
"repos_url": "https://api.github.com/users/skysyk/repos",
"events_url": "https://api.github.com/users/skysyk/events{/privacy}",
"received_events_url": "https://api.github.com/users/skysyk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-18T01:43:21 | 2024-01-19T20:31:55 | 2024-01-19T12:30:01 | NONE | null | ### Feature request
Change the implementation of `SiglipProcessor` to use the global default of `False` for `padding`, or update the documentation to indicate that the usage differs and defaults to `'max_length'` when the `padding` argument is not provided.
### Motivation
In the HF documentation for padding (both the docs and the function comments for the processor class), the default behavior is described as `False` or `'do_not_pad'`. For `SiglipProcessor`, `'max_length'` is the default implemented in [code](https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/src/transformers/models/siglip/processing_siglip.py#L53), while the [example in the docs](https://huggingface.co/docs/transformers/main/en/model_doc/siglip#using-the-model-yourself) omits the padding argument. This is at odds with the overall documentation as well as the behavior and usage examples of similar models such as CLIP (whose usage example explicitly passes `padding=True`), and could give the wrong impression at first glance that padding is not used for SigLIP.
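The mismatch can be illustrated with a minimal, hypothetical sketch (plain functions standing in for the real processors — nothing here is the actual `transformers` API): omitting `padding` produces different behavior depending on which default is baked in.

```python
# Hypothetical stand-ins for the two defaults; NOT the real processor classes.
def siglip_style_call(text, padding="max_length"):
    # SiglipProcessor currently defaults to "max_length" in code.
    return padding

def clip_style_call(text, padding=False):
    # The library-wide documented default is False / "do_not_pad".
    return padding

# Omitting the argument silently pads in one case but not the other.
assert siglip_style_call("a photo of a cat") == "max_length"
assert clip_style_call("a photo of a cat") is False
```

Either aligning the in-code default with the documented one, or documenting the `'max_length'` default explicitly, would remove this surprise.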
### Your contribution
Opening this issue to discuss a clarification / improvement. I can help implement the preferred solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28569/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28568/comments | https://api.github.com/repos/huggingface/transformers/issues/28568/events | https://github.com/huggingface/transformers/issues/28568 | 2,087,171,304 | I_kwDOCUB6oc58Z7To | 28,568 | Optimised 4bit inference kernels | {
"login": "nivibilla",
"id": 26687662,
"node_id": "MDQ6VXNlcjI2Njg3NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/26687662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nivibilla",
"html_url": "https://github.com/nivibilla",
"followers_url": "https://api.github.com/users/nivibilla/followers",
"following_url": "https://api.github.com/users/nivibilla/following{/other_user}",
"gists_url": "https://api.github.com/users/nivibilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nivibilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nivibilla/subscriptions",
"organizations_url": "https://api.github.com/users/nivibilla/orgs",
"repos_url": "https://api.github.com/users/nivibilla/repos",
"events_url": "https://api.github.com/users/nivibilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/nivibilla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 5 | 2024-01-17T23:18:15 | 2024-01-19T10:06:00 | null | NONE | null | ### Feature request
Integration of new 4bit kernels
https://github.com/IST-DASLab/marlin
### Motivation
Provide faster inference than AWQ/ExLlama for batch sizes up to 32.
### Your contribution
Just saw this today; I can try to provide a sample notebook. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28568/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28567/comments | https://api.github.com/repos/huggingface/transformers/issues/28567/events | https://github.com/huggingface/transformers/pull/28567 | 2,086,938,368 | PR_kwDOCUB6oc5kWlHG | 28,567 | Fix the documentation checkpoint for xlm-roberta-xl | {
"login": "jeremyfowers",
"id": 80718789,
"node_id": "MDQ6VXNlcjgwNzE4Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/80718789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremyfowers",
"html_url": "https://github.com/jeremyfowers",
"followers_url": "https://api.github.com/users/jeremyfowers/followers",
"following_url": "https://api.github.com/users/jeremyfowers/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremyfowers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremyfowers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremyfowers/subscriptions",
"organizations_url": "https://api.github.com/users/jeremyfowers/orgs",
"repos_url": "https://api.github.com/users/jeremyfowers/repos",
"events_url": "https://api.github.com/users/jeremyfowers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremyfowers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-17T20:37:16 | 2024-01-18T13:47:50 | 2024-01-18T13:47:50 | CONTRIBUTOR | null | # What does this PR do?
This is a small PR that corrects the name of the `xlm-roberta-xl` checkpoint in the `transformers` documentation.
I also noticed that the docstrings were referring to the model as either `XLM-RoBERTa-xlarge` or `XLM-Roberta-xlarge` and I corrected all of those instances to `XLM-RoBERTa-XL`.
<!-- Remove if not applicable -->
Fixes #28562
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@julien-c @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28567/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28567/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28567",
"html_url": "https://github.com/huggingface/transformers/pull/28567",
"diff_url": "https://github.com/huggingface/transformers/pull/28567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28567.patch",
"merged_at": "2024-01-18T13:47:50"
} |
https://api.github.com/repos/huggingface/transformers/issues/28566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28566/comments | https://api.github.com/repos/huggingface/transformers/issues/28566/events | https://github.com/huggingface/transformers/pull/28566 | 2,086,880,853 | PR_kwDOCUB6oc5kWYmi | 28,566 | fix: suppress `GatedRepoError` to use cache file (fix #28558). | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.githu... | null | 3 | 2024-01-17T19:57:11 | 2024-01-26T20:37:15 | 2024-01-26T16:25:09 | CONTRIBUTOR | null | # What does this PR do?
The repo may be missing some optional files, and we pass `_raise_exceptions_for_missing_entries=False` to suppress errors for those. However, for a gated repo we cannot tell whether a file exists without passing the `token` parameter (or env variable), so even if we have already fully downloaded the repo, we still won't be able to use it.
For optional files we suppress exceptions, while required files still raise errors; this PR keeps that behavior the same when a `GatedRepoError` occurs.
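As a rough sketch of the intended behavior (a stand-in exception class rather than the real `huggingface_hub` one, and a hypothetical `cached_file` helper — not the library's actual implementation), a `GatedRepoError` hit while probing an optional file would be suppressed just like a missing file:

```python
class GatedRepoError(Exception):
    """Stand-in for huggingface_hub's GatedRepoError."""

def cached_file(filename, _raise_exceptions_for_missing_entries=True):
    try:
        # Simulate the hub call failing because the repo is gated and no
        # token was supplied.
        raise GatedRepoError(f"{filename} belongs to a gated repo")
    except GatedRepoError:
        if _raise_exceptions_for_missing_entries:
            raise  # required file: surface the error as before
        return None  # optional file: behave as if it simply doesn't exist

# Optional files fall back to None instead of crashing.
assert cached_file("generation_config.json",
                   _raise_exceptions_for_missing_entries=False) is None
```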
Fixes #28558
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @ArthurZucker @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28566/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28566",
"html_url": "https://github.com/huggingface/transformers/pull/28566",
"diff_url": "https://github.com/huggingface/transformers/pull/28566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28566.patch",
"merged_at": "2024-01-26T16:25:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28565/comments | https://api.github.com/repos/huggingface/transformers/issues/28565/events | https://github.com/huggingface/transformers/issues/28565 | 2,086,814,877 | I_kwDOCUB6oc58YkSd | 28,565 | Disabling adapters is not removing the adapter from active adapters | {
"login": "balachandra",
"id": 1454090,
"node_id": "MDQ6VXNlcjE0NTQwOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1454090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balachandra",
"html_url": "https://github.com/balachandra",
"followers_url": "https://api.github.com/users/balachandra/followers",
"following_url": "https://api.github.com/users/balachandra/following{/other_user}",
"gists_url": "https://api.github.com/users/balachandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balachandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balachandra/subscriptions",
"organizations_url": "https://api.github.com/users/balachandra/orgs",
"repos_url": "https://api.github.com/users/balachandra/repos",
"events_url": "https://api.github.com/users/balachandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/balachandra/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-17T19:13:26 | 2024-01-17T20:08:26 | null | NONE | null | ### System Info
Using AWS sagemaker notebook with conda_pytorch_p310 as env.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Add an adapter to the model using `add_adapter`
2. Disable all adapters using `disable_adapters`
3. List active adapters using `active_adapters`
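The expected semantics can be sketched with a toy class (illustrative only — not the real `transformers`/PEFT adapter API): after `disable_adapters`, `active_adapters` should come back empty.

```python
# Toy model mixin showing the semantics this issue expects.
class AdapterMixinSketch:
    def __init__(self):
        self._adapters = {}

    def add_adapter(self, name):
        self._adapters[name] = True  # enabled on add

    def disable_adapters(self):
        for name in self._adapters:
            self._adapters[name] = False

    def active_adapters(self):
        return [n for n, enabled in self._adapters.items() if enabled]

model = AdapterMixinSketch()
model.add_adapter("lora_a")
assert model.active_adapters() == ["lora_a"]
model.disable_adapters()
assert model.active_adapters() == []  # expected, but not the current behavior
```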
### Expected behavior
Ideally, when all adapters are disabled, `active_adapters` should return an empty list. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28565/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28564/comments | https://api.github.com/repos/huggingface/transformers/issues/28564/events | https://github.com/huggingface/transformers/pull/28564 | 2,086,644,413 | PR_kwDOCUB6oc5kVk3x | 28,564 | Fix Switch Transformers When sparse_step = 1 | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-17T17:26:20 | 2024-01-17T21:26:21 | 2024-01-17T21:26:21 | CONTRIBUTOR | null | # What does this PR do?
When `sparse_step = 1`, the current code does not work: anything `% 1` always equals 0, even though there should be a sparse layer at every block.
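A simplified sketch of the modulo issue (hypothetical helper names — the real layer-layout logic lives in the Switch Transformers modeling code): with a `layer_idx % sparse_step == 1`-style check, `sparse_step = 1` never selects a sparse layer.

```python
def is_sparse_old(layer_idx, sparse_step):
    # Sketch of the original check: x % 1 is always 0, never 1.
    return layer_idx % sparse_step == 1

def is_sparse_new(layer_idx, sparse_step):
    if sparse_step == 1:
        return True  # every block gets a sparse (MoE) layer
    return layer_idx % sparse_step == 1

assert not any(is_sparse_old(i, 1) for i in range(12))  # bug: never sparse
assert all(is_sparse_new(i, 1) for i in range(12))      # fixed
assert [i for i in range(6) if is_sparse_new(i, 2)] == [1, 3, 5]
```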
Fixes # (issue)
I didn't open an issue. I just solved the problem with this PR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker, @younesbelkada, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28564/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28564",
"html_url": "https://github.com/huggingface/transformers/pull/28564",
"diff_url": "https://github.com/huggingface/transformers/pull/28564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28564.patch",
"merged_at": "2024-01-17T21:26:21"
} |
https://api.github.com/repos/huggingface/transformers/issues/28563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28563/comments | https://api.github.com/repos/huggingface/transformers/issues/28563/events | https://github.com/huggingface/transformers/pull/28563 | 2,086,638,401 | PR_kwDOCUB6oc5kVjjV | 28,563 | [Whisper] Fix audio classification with weighted layer sum | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-17T17:22:35 | 2024-01-18T16:41:47 | 2024-01-18T16:41:45 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28002: `WhisperForAudioClassification` is corrected for compatibility with `use_weighted_layer_sum=True`. Adds two tests to confirm correctness. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28563/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28563",
"html_url": "https://github.com/huggingface/transformers/pull/28563",
"diff_url": "https://github.com/huggingface/transformers/pull/28563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28563.patch",
"merged_at": "2024-01-18T16:41:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28562/comments | https://api.github.com/repos/huggingface/transformers/issues/28562/events | https://github.com/huggingface/transformers/issues/28562 | 2,086,612,194 | I_kwDOCUB6oc58Xyzi | 28,562 | The examples for xlm-roberta-xl reference a model that doesn't exist | {
"login": "jeremyfowers",
"id": 80718789,
"node_id": "MDQ6VXNlcjgwNzE4Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/80718789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremyfowers",
"html_url": "https://github.com/jeremyfowers",
"followers_url": "https://api.github.com/users/jeremyfowers/followers",
"following_url": "https://api.github.com/users/jeremyfowers/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremyfowers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremyfowers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremyfowers/subscriptions",
"organizations_url": "https://api.github.com/users/jeremyfowers/orgs",
"repos_url": "https://api.github.com/users/jeremyfowers/repos",
"events_url": "https://api.github.com/users/jeremyfowers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremyfowers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-17T17:06:07 | 2024-01-18T13:47:51 | 2024-01-18T13:47:51 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.18
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example code for `XLMRobertaXLForMaskedLM` verbatim in python: https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMaskedLM.forward.example
Example code pasted here for convenience:
```
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
model = XLMRobertaXLForMaskedLM.from_pretrained("xlm-roberta-xlarge")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(
as_tuple=True
)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
```
This results in:
```
OSError: xlm-roberta-xlarge is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
### Expected behavior
This should find and run the model. However, it does not. Replacing the model string `"xlm-roberta-xlarge"` with `"facebook/xlm-roberta-xl"` fixes the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28562/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28561/comments | https://api.github.com/repos/huggingface/transformers/issues/28561/events | https://github.com/huggingface/transformers/pull/28561 | 2,086,502,150 | PR_kwDOCUB6oc5kVF31 | 28,561 | Update image_processing_deformable_detr.py | {
"login": "sounakdey",
"id": 8640971,
"node_id": "MDQ6VXNlcjg2NDA5NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8640971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sounakdey",
"html_url": "https://github.com/sounakdey",
"followers_url": "https://api.github.com/users/sounakdey/followers",
"following_url": "https://api.github.com/users/sounakdey/following{/other_user}",
"gists_url": "https://api.github.com/users/sounakdey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sounakdey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sounakdey/subscriptions",
"organizations_url": "https://api.github.com/users/sounakdey/orgs",
"repos_url": "https://api.github.com/users/sounakdey/repos",
"events_url": "https://api.github.com/users/sounakdey/events{/privacy}",
"received_events_url": "https://api.github.com/users/sounakdey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-17T16:04:24 | 2024-01-22T15:17:40 | 2024-01-22T15:17:39 | CONTRIBUTOR | null | # What does this PR do?
This PR updates `image_processing_deformable_detr.py` to prevent calling `unbind` on `target_sizes` when it is `None`, similar to https://github.com/huggingface/transformers/blob/d6ffe74dfa577b5e7d12e48aa1c686ad8d3ef557/src/transformers/models/detr/image_processing_detr.py#L1606
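A minimal sketch of the guard (a hypothetical function, with plain lists standing in for tensors and `unbind`, so it runs without torch): sizes are only unpacked when `target_sizes` is actually provided.

```python
def post_process(scores, target_sizes=None):
    # Only "unbind" when sizes were provided; otherwise skip gracefully.
    results = {"scores": scores}
    if target_sizes is not None:
        heights, widths = zip(*target_sizes)  # stand-in for tensor.unbind(1)
        results["sizes"] = list(zip(heights, widths))
    return results

assert "sizes" not in post_process([0.9])                     # no crash on None
assert post_process([0.9], [(480, 640)])["sizes"] == [(480, 640)]
```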
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28561/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28561",
"html_url": "https://github.com/huggingface/transformers/pull/28561",
"diff_url": "https://github.com/huggingface/transformers/pull/28561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28561.patch",
"merged_at": "2024-01-22T15:17:39"
} |
https://api.github.com/repos/huggingface/transformers/issues/28560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28560/comments | https://api.github.com/repos/huggingface/transformers/issues/28560/events | https://github.com/huggingface/transformers/issues/28560 | 2,086,424,006 | I_kwDOCUB6oc58XE3G | 28,560 | Cohere embed - Sagemaker deploy - Should have a `model_type` key in its config.json | {
"login": "pthd",
"id": 7238429,
"node_id": "MDQ6VXNlcjcyMzg0Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7238429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pthd",
"html_url": "https://github.com/pthd",
"followers_url": "https://api.github.com/users/pthd/followers",
"following_url": "https://api.github.com/users/pthd/following{/other_user}",
"gists_url": "https://api.github.com/users/pthd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pthd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pthd/subscriptions",
"organizations_url": "https://api.github.com/users/pthd/orgs",
"repos_url": "https://api.github.com/users/pthd/repos",
"events_url": "https://api.github.com/users/pthd/events{/privacy}",
"received_events_url": "https://api.github.com/users/pthd/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-17T15:23:35 | 2024-01-17T19:00:27 | null | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.18.0
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@philschmid
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
deployed according to
https://huggingface.co/Cohere/Cohere-embed-multilingual-v3.0?sagemaker_deploy=true
W-Cohere__Cohere-embed-mult-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Unrecognized model in /.sagemaker/mms/models/Cohere__Cohere-embed-multilingual-v3.0. Should have a `model_type` key in its config.json, or contain one of the following strings in its name:
### Expected behavior
deployment succeeds but invocation raises error
```
W-Cohere__Cohere-embed-mult-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Unrecognized model in /.sagemaker/mms/models/Cohere__Cohere-embed-multilingual-v3.0. Should have a `model_type` key in its config.json, or contain one of the following strings in its name:
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28560/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28559/comments | https://api.github.com/repos/huggingface/transformers/issues/28559/events | https://github.com/huggingface/transformers/pull/28559 | 2,086,294,196 | PR_kwDOCUB6oc5kUZCd | 28,559 | ClearMLCallback enhancements: support multiple runs and handle logging better | {
"login": "eugen-ajechiloae-clearml",
"id": 97950284,
"node_id": "U_kgDOBdaaTA",
"avatar_url": "https://avatars.githubusercontent.com/u/97950284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eugen-ajechiloae-clearml",
"html_url": "https://github.com/eugen-ajechiloae-clearml",
"followers_url": "https://api.github.com/users/eugen-ajechiloae-clearml/followers",
"following_url": "https://api.github.com/users/eugen-ajechiloae-clearml/following{/other_user}",
"gists_url": "https://api.github.com/users/eugen-ajechiloae-clearml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eugen-ajechiloae-clearml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eugen-ajechiloae-clearml/subscriptions",
"organizations_url": "https://api.github.com/users/eugen-ajechiloae-clearml/orgs",
"repos_url": "https://api.github.com/users/eugen-ajechiloae-clearml/repos",
"events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/events{/privacy}",
"received_events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-17T14:18:34 | 2024-01-31T13:19:04 | null | CONTRIBUTOR | null | Currently, training multiple models in the same script might cause some model checkpoints logged to ClearML to be lost, as well as scalars, when using ClearMLCallback. This PR fixes these issues. Scalar visualization in ClearML has also been enhanced.
What the PR does:
1. We count the number of times `ClearMLCallback.setup` is called via class variables. When a second training run is created, we do one of the following: we create a new task that will be used to log all the models, metrics etc. OR we keep the same task and we suffix the metrics/checkpoint etc. with the setup number.
We keep the same task if the task was created externally or if we are running remotely. We create new tasks if `ClearMLCallback` is the one that created the first task (in this case we also close the task so we can create another one).
2. We now delete model checkpoints if `save_total_limit` is set and the limit has been exceeded.
3. We switched the title/series of the logged scalars for better visualization.
4. We now, by default, don't fetch the configurations/hparams from the backend when running remotely, as these can contain temp files or other variables that are tied to the local environment. The user can still override these, though, by setting `_ignore_hparams_ui_overrides_` or `_ignore_model_config_ui_overrides_` to False in the UI or via scripts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28559/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28559",
"html_url": "https://github.com/huggingface/transformers/pull/28559",
"diff_url": "https://github.com/huggingface/transformers/pull/28559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28559.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28557/comments | https://api.github.com/repos/huggingface/transformers/issues/28557/events | https://github.com/huggingface/transformers/pull/28557 | 2,086,242,029 | PR_kwDOCUB6oc5kUNpJ | 28,557 | Fix duplicate & unnecessary flash attention warnings | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2024-01-17T13:52:28 | 2024-01-26T08:37:05 | 2024-01-26T08:37:05 | COLLABORATOR | null | Complete the fixes from https://github.com/huggingface/transformers/pull/28142, that finally closes https://github.com/huggingface/transformers/issues/28052
With this PR:
* no log is shown when loading via `from_config` with a suitable `torch_dtype` set (fp16, bf16), while previously erroneous logs were shown (https://github.com/huggingface/transformers/issues/28052#issuecomment-1856811089)
* no duplicate logs are shown (duplicates previously occurred because `_autoset_attn_implementation` was called both in from_pretrained/from_config and __init__) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28557/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28557",
"html_url": "https://github.com/huggingface/transformers/pull/28557",
"diff_url": "https://github.com/huggingface/transformers/pull/28557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28557.patch",
"merged_at": "2024-01-26T08:37:05"
} |
https://api.github.com/repos/huggingface/transformers/issues/28556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28556/comments | https://api.github.com/repos/huggingface/transformers/issues/28556/events | https://github.com/huggingface/transformers/pull/28556 | 2,086,192,123 | PR_kwDOCUB6oc5kUCyF | 28,556 | Feature Update [added `initial_prompt` support for automatic-speech-recognition whisper pipeline] | {
"login": "Biswajit2902",
"id": 10162006,
"node_id": "MDQ6VXNlcjEwMTYyMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10162006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biswajit2902",
"html_url": "https://github.com/Biswajit2902",
"followers_url": "https://api.github.com/users/Biswajit2902/followers",
"following_url": "https://api.github.com/users/Biswajit2902/following{/other_user}",
"gists_url": "https://api.github.com/users/Biswajit2902/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Biswajit2902/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Biswajit2902/subscriptions",
"organizations_url": "https://api.github.com/users/Biswajit2902/orgs",
"repos_url": "https://api.github.com/users/Biswajit2902/repos",
"events_url": "https://api.github.com/users/Biswajit2902/events{/privacy}",
"received_events_url": "https://api.github.com/users/Biswajit2902/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-17T13:26:24 | 2024-01-29T10:04:39 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (feature)
- `initial_prompt` support for whisper Pipeline (automatic-speech-recognition)
## Before submitting
- [x] Added initial_prompt as an option for whisper model
- [x] To handle initial prompt `processor` considered as optional parameter
- [x] Current implementation supports only Torch version of decoding.
- [x] How to use the initial prompt:
``` python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-small"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
processor=processor
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
audio = dataset[0]["audio"]["array"]
sampling_rate = dataset[0]["audio"]["sampling_rate"]
# including timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper", return_timestamps=True))
# without timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper"))
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. @sanchit-gandhi, @Narsil, can anyone help take this PR forward, please? Let me know if anything is needed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28556/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28556",
"html_url": "https://github.com/huggingface/transformers/pull/28556",
"diff_url": "https://github.com/huggingface/transformers/pull/28556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28556.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28555/comments | https://api.github.com/repos/huggingface/transformers/issues/28555/events | https://github.com/huggingface/transformers/pull/28555 | 2,086,171,345 | PR_kwDOCUB6oc5kT-Ur | 28,555 | Fix max_new_tokens for assistant model in assistant generation | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-17T13:15:03 | 2024-01-24T12:41:15 | 2024-01-24T12:41:14 | NONE | null | # What does this PR do?
During assistant generation, at each iteration, the assistant model generates `num_assistant_tokens` tokens.
If the maximum number of tokens to generate is limited by `max_len`, then whenever `max_len-cur_len` is less than `num_assistant_tokens`, it is more efficient for the assistant model to generate only `max_len-cur_len` tokens instead of `num_assistant_tokens`.
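The capping logic described above can be sketched as follows (a minimal illustration only, not the actual `transformers` implementation; the function name is hypothetical):

```python
def assistant_tokens_to_generate(num_assistant_tokens, max_len, cur_len):
    # Draft at most the remaining token budget, and never a negative count.
    return max(0, min(num_assistant_tokens, max_len - cur_len))

# With 18 of 20 tokens already generated, the assistant drafts only 2 tokens
# instead of the configured 5.
print(assistant_tokens_to_generate(5, 20, 18))  # → 2
```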
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
@echarlaix
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28555/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28555",
"html_url": "https://github.com/huggingface/transformers/pull/28555",
"diff_url": "https://github.com/huggingface/transformers/pull/28555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28555.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28554/comments | https://api.github.com/repos/huggingface/transformers/issues/28554/events | https://github.com/huggingface/transformers/pull/28554 | 2,086,068,129 | PR_kwDOCUB6oc5kToJn | 28,554 | `intial_prompt` support for automatic-speech-recognition (whisper) pipeline | {
"login": "Biswajit2902",
"id": 10162006,
"node_id": "MDQ6VXNlcjEwMTYyMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10162006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biswajit2902",
"html_url": "https://github.com/Biswajit2902",
"followers_url": "https://api.github.com/users/Biswajit2902/followers",
"following_url": "https://api.github.com/users/Biswajit2902/following{/other_user}",
"gists_url": "https://api.github.com/users/Biswajit2902/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Biswajit2902/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Biswajit2902/subscriptions",
"organizations_url": "https://api.github.com/users/Biswajit2902/orgs",
"repos_url": "https://api.github.com/users/Biswajit2902/repos",
"events_url": "https://api.github.com/users/Biswajit2902/events{/privacy}",
"received_events_url": "https://api.github.com/users/Biswajit2902/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-17T12:16:10 | 2024-01-17T13:20:50 | 2024-01-17T13:20:49 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (feature)
- `initial_prompt` support for whisper Pipeline (automatic-speech-recognition)
## Before submitting
- [ ] Added initial_prompt as an option for whisper model
- [ ] To handle initial prompt `processor` considered as optional parameter
- [ ] Current implementation supports only Torch version of decoding.
- [ ] How to use the initial prompt:
``` python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-small"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
processor=processor
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
audio = sample["array"]
# including timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper", return_timestamps=True))
# without timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper"))
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. @sanchit-gandhi, @Narsil, can anyone help take this PR forward, please? Let me know if anything is needed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28554/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28554",
"html_url": "https://github.com/huggingface/transformers/pull/28554",
"diff_url": "https://github.com/huggingface/transformers/pull/28554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28554.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28553/comments | https://api.github.com/repos/huggingface/transformers/issues/28553/events | https://github.com/huggingface/transformers/issues/28553 | 2,086,028,169 | I_kwDOCUB6oc58VkOJ | 28,553 | llama 2 conversion script unknown error | {
"login": "liboliba",
"id": 51449526,
"node_id": "MDQ6VXNlcjUxNDQ5NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/51449526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liboliba",
"html_url": "https://github.com/liboliba",
"followers_url": "https://api.github.com/users/liboliba/followers",
"following_url": "https://api.github.com/users/liboliba/following{/other_user}",
"gists_url": "https://api.github.com/users/liboliba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liboliba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liboliba/subscriptions",
"organizations_url": "https://api.github.com/users/liboliba/orgs",
"repos_url": "https://api.github.com/users/liboliba/repos",
"events_url": "https://api.github.com/users/liboliba/events{/privacy}",
"received_events_url": "https://api.github.com/users/liboliba/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-17T11:52:06 | 2024-01-17T15:52:47 | null | NONE | null | ### System Info
Hi,
I have downloaded the llama 2 weights and installed the transformers package. I plan to use them with the transformers package, so I applied the conversion script.
The conversion script does not work:
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/tomyfilepath
```
This fails with:
```
  File "...path/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 126
    print(f"Fetching all parameters from the checkpoint at {input_base_path}.")
    ^
SyntaxError: invalid syntax
```
On Linux when I do for example:
ls /path/to/downloaded/llama/llama-2-7b-chat
I get:
checklist.chk consolidated.00.pth params.json
I assume I have the correct files. Any advice would be appreciated.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/tomyfilepath
### Expected behavior
It is expected that the tokenizer and model are converted so that they are usable with the transformers package. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28553/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28552/comments | https://api.github.com/repos/huggingface/transformers/issues/28552/events | https://github.com/huggingface/transformers/pull/28552 | 2,086,011,975 | PR_kwDOCUB6oc5kTb95 | 28,552 | Fix SDPA tests | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-17T11:42:12 | 2024-01-17T16:29:20 | 2024-01-17T16:29:19 | COLLABORATOR | null | @ydshieh testing on a T4 (& cpu), all tests pass now. Most of the failing ones were due to the fact that we run the CI on a T4 GPU, that does not support bf16. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28552/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28552",
"html_url": "https://github.com/huggingface/transformers/pull/28552",
"diff_url": "https://github.com/huggingface/transformers/pull/28552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28552.patch",
"merged_at": "2024-01-17T16:29:19"
} |
https://api.github.com/repos/huggingface/transformers/issues/28551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28551/comments | https://api.github.com/repos/huggingface/transformers/issues/28551/events | https://github.com/huggingface/transformers/pull/28551 | 2,085,796,120 | PR_kwDOCUB6oc5kSswv | 28,551 | [Makefile] Exclude research projects from format | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-17T09:43:20 | 2024-01-17T09:59:41 | 2024-01-17T09:59:40 | MEMBER | null | # What does this PR do?
When running `make style` from root, the research folder files are changed even though they are outdated. This PR makes sure we exclude the deprecated research folder all together.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28551/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28551",
"html_url": "https://github.com/huggingface/transformers/pull/28551",
"diff_url": "https://github.com/huggingface/transformers/pull/28551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28551.patch",
"merged_at": "2024-01-17T09:59:40"
} |
https://api.github.com/repos/huggingface/transformers/issues/28550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28550/comments | https://api.github.com/repos/huggingface/transformers/issues/28550/events | https://github.com/huggingface/transformers/issues/28550 | 2,085,781,875 | I_kwDOCUB6oc58UoFz | 28,550 | Tr-OCR Large Checkpoint model diverges | {
"login": "nogifeet",
"id": 72322393,
"node_id": "MDQ6VXNlcjcyMzIyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/72322393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nogifeet",
"html_url": "https://github.com/nogifeet",
"followers_url": "https://api.github.com/users/nogifeet/followers",
"following_url": "https://api.github.com/users/nogifeet/following{/other_user}",
"gists_url": "https://api.github.com/users/nogifeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nogifeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nogifeet/subscriptions",
"organizations_url": "https://api.github.com/users/nogifeet/orgs",
"repos_url": "https://api.github.com/users/nogifeet/repos",
"events_url": "https://api.github.com/users/nogifeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/nogifeet/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-17T09:35:17 | 2024-01-29T20:48:08 | null | NONE | null | ### System Info
Transformers Version -- 4.35.2
Python Version -- 3.10.12 [GCC 11.4.0]
Environment -- Google Colab
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the large checkpoint, the initial maximum token length is 20, which is not ideal because transcriptions of other images may need more tokens. When we raise this limit (e.g. via `max_new_tokens`), we notice that the model starts diverging and producing repeated tokens on some of the sample images.
Please use the below sample image

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")

image = Image.open("/content/e04-083-00.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, use_cache=True, max_new_tokens=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(generated_text)
# The edges of the transoms should be bevelled to be edges to the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the

print(len(generated_text))
# 427
```
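Side note while this is investigated: degenerate repetition like the output above can often be suppressed at decoding time by passing `no_repeat_ngram_size` or `repetition_penalty` to `model.generate`. As a rough, purely illustrative sketch of the condition an n-gram ban checks for (the function name is ours, not a transformers API):

```python
def has_repeated_ngram(token_ids, n=3):
    """Return True if any n-gram occurs more than once in token_ids.

    This is the condition that an n-gram ban targets: during generation,
    a candidate token is blocked if it would complete an n-gram that has
    already appeared earlier in the sequence.
    """
    seen = set()
    for i in range(len(token_ids) - n + 1):
        gram = tuple(token_ids[i:i + n])
        if gram in seen:
            return True
        seen.add(gram)
    return False

# A degenerate "edges of the edges of the ..." loop repeats its 3-grams:
print(has_repeated_ngram([7, 8, 9, 7, 8, 9, 7, 8]))  # True
print(has_repeated_ngram([1, 2, 3, 4, 5]))           # False
```

With `no_repeat_ngram_size=3`, generation bans any token that would complete an already-seen 3-gram, which cuts off loops like the one above.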
### Expected behavior
I would expect the large checkpoint to behave similarly or even better than the base checkpoint...
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# load image from the IAM dataset
image = Image.open("/content/e04-083-00.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, use_cache=True, max_new_tokens=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(generated_text)
# The edges of the transoms should be bevelled to

print(len(generated_text))
# 47
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28550/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28558/comments | https://api.github.com/repos/huggingface/transformers/issues/28558/events | https://github.com/huggingface/transformers/issues/28558 | 2,086,245,243 | I_kwDOCUB6oc58WZN7 | 28,558 | Optional files still require token for `huggingface-cli` downloaded gated repo | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.githu... | null | 12 | 2024-01-17T09:26:11 | 2024-01-26T16:25:10 | 2024-01-26T16:25:10 | CONTRIBUTOR | null | ### Describe the bug
The `from_XXX` functions create empty marker files in the `.no_exist` directory when a repo is missing some optional files; however, the CLI tool `huggingface-cli download` does not do so, which causes inconsistency issues.
### Reproduction
1. `export HF_TOKEN=XXX`
2. `huggingface-cli download --resume-download meta-llama/Llama-2-7b-hf`
3. `python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)"`
Step 3 produces an `OSError` about loading a gated repo: even though we already requested access and downloaded the model via the CLI tool, we cannot use the cached model (e.g. if we want to use it offline) until we run a `from_XXX` method once, so that the missing files are recorded as empty markers in the `.no_exist` directory.
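To make the mechanism concrete: the negative cache boils down to tiny marker files on disk. Something like the sketch below (paths and helper names are illustrative, not the actual `huggingface_hub` code) is what lets `from_XXX` skip the network for files it already knows are absent, and what `huggingface-cli download` never writes:

```python
import os

def is_known_absent(cache_dir, repo_folder, revision, filename):
    """True if the cache records that `filename` does not exist in the repo.

    An empty file under .no_exist/<revision>/ acts as a negative cache
    entry, so no network request (and hence no token) is needed for it.
    """
    marker = os.path.join(cache_dir, repo_folder, ".no_exist", revision, filename)
    return os.path.isfile(marker)

def mark_absent(cache_dir, repo_folder, revision, filename):
    """Record a missing optional file, as the from_XXX loaders do."""
    marker = os.path.join(cache_dir, repo_folder, ".no_exist", revision, filename)
    os.makedirs(os.path.dirname(marker), exist_ok=True)
    open(marker, "w").close()
```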
### Logs
```shell
...
OSError: You are trying to access a gated repo.
Make sure to request access at https://huggingface.co/meta-llama/Llama-2-7b-hf and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.
```
### System info
```shell
- huggingface_hub version: 0.20.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/scruel/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: situqingyun
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: 0.1.4
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.0
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/scruel/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/scruel/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/scruel/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28558/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28549/comments | https://api.github.com/repos/huggingface/transformers/issues/28549/events | https://github.com/huggingface/transformers/issues/28549 | 2,085,743,538 | I_kwDOCUB6oc58Ueuy | 28,549 | Fine tuning whisper and whisper lora with prompts | {
"login": "kenfus",
"id": 47979198,
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfus",
"html_url": "https://github.com/kenfus",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"repos_url": "https://api.github.com/users/kenfus/repos",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-17T09:14:05 | 2024-01-23T12:05:35 | null | NONE | null | ### Feature request
Currently, I am able to successfully fine-tune Whisper and Whisper-LoRA with timestamps; thank you for that!
Now I would like to fine-tune Whisper with prompts, the way OpenAI trained it. There is not much documentation on this, so maybe it is already supported? My dataset currently has the columns `input_features`, `labels` and `prompt_ids`; the labels do not yet contain the `prompt_ids`. So my first questions:
- If I prepend the `prompt_ids` to the `labels`, is that already correct? Will the Hugging Face library automatically cut the labels at the correct point and pass the prompt to the model to start decoding? I did not understand where exactly this happens in the code.
- If not, where would this best be added? I think it should either happen automatically based on the `labels`, or the trainer could use `prompt_ids` automatically when they are available.
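To make the first question concrete, here is roughly what I would expect the collator to produce if prompts are handled the way OpenAI trained Whisper: prompt tokens fed to the decoder but masked out of the loss. This is only a sketch of my understanding, not the current HF behavior; the hard-coded `50361` is meant to be Whisper's `<|startofprev|>` id and should really be looked up from the tokenizer:

```python
def build_prompted_labels(prompt_ids, text_ids, sot_prev_id=50361, ignore_index=-100):
    """Prepend <|startofprev|> + prompt to the target ids, and mask that
    prefix with -100 so the loss is only computed on the transcription."""
    prefix = [sot_prev_id] + list(prompt_ids)
    decoder_input_ids = prefix + list(text_ids)
    labels = [ignore_index] * len(prefix) + list(text_ids)
    return decoder_input_ids, labels

ids, labels = build_prompted_labels(prompt_ids=[11, 12], text_ids=[21, 22, 23])
print(ids)     # [50361, 11, 12, 21, 22, 23]
print(labels)  # [-100, -100, -100, 21, 22, 23]
```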
### Motivation
Overall, it was a bit unclear how to fine-tune Whisper with timestamps and prompts; maybe the support is already there, maybe not. In addition, the relevant code in the HF library was a bit hard to follow.
### Your contribution
Absolutely! If I get some guidance on where to look and what to change, I am happy to do so. I am happy to contribute to HF, which has helped me a lot in my work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28549/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28548/comments | https://api.github.com/repos/huggingface/transformers/issues/28548/events | https://github.com/huggingface/transformers/issues/28548 | 2,085,696,927 | I_kwDOCUB6oc58UTWf | 28,548 | Pipeline batching with ZeroShotImageClassificationPipeline outputs less items per iteration than expected | {
"login": "ryan-caesar-ramos",
"id": 65334734,
"node_id": "MDQ6VXNlcjY1MzM0NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/65334734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-caesar-ramos",
"html_url": "https://github.com/ryan-caesar-ramos",
"followers_url": "https://api.github.com/users/ryan-caesar-ramos/followers",
"following_url": "https://api.github.com/users/ryan-caesar-ramos/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-caesar-ramos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-caesar-ramos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-caesar-ramos/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-caesar-ramos/orgs",
"repos_url": "https://api.github.com/users/ryan-caesar-ramos/repos",
"events_url": "https://api.github.com/users/ryan-caesar-ramos/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-caesar-ramos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-17T08:46:45 | 2024-01-17T09:41:34 | 2024-01-17T09:38:13 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
from datasets import load_dataset
import numpy as np
from transformers.pipelines.pt_utils import KeyDataset
pipe = pipeline('zero-shot-image-classification', model='openai/clip-vit-base-patch16', device=0)
dataset = load_dataset('mnist', split='test')
for out in pipe(KeyDataset(dataset, "image"), candidate_labels=range(10), batched=True, batch_size=1024):
break
print(len(out))
```
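For reference, my understanding is that `batch_size` only controls how many inputs are stacked per forward pass; iterating the pipeline still yields one result per input (here, a list of 10 label/score dicts per image). A plain-Python sketch of that behavior with a stand-in `forward` (not the actual pipeline code):

```python
def iter_with_internal_batching(inputs, batch_size, forward):
    """Run `forward` on chunks of `batch_size`, but yield per-input results,
    mirroring how transformers pipelines consume an iterable dataset."""
    batch = []
    for item in inputs:
        batch.append(item)
        if len(batch) == batch_size:
            yield from forward(batch)
            batch = []
    if batch:  # flush the last, possibly smaller, chunk
        yield from forward(batch)

def fake_forward(batch):
    # stand-in: one 10-class score list per input image
    return [[{"label": c, "score": 0.1} for c in range(10)] for _ in batch]

outs = list(iter_with_internal_batching(range(1024), 1024, fake_forward))
print(len(outs), len(outs[0]))  # 1024 10
```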
### Expected behavior
I would have assumed that `out` would have 10 classes * 1024 images in the batch = 10240 items in it, but it only has 10. Maybe I'm misinterpreting what batching with pipelines does? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28548/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28547/comments | https://api.github.com/repos/huggingface/transformers/issues/28547/events | https://github.com/huggingface/transformers/pull/28547 | 2,085,630,838 | PR_kwDOCUB6oc5kSJcn | 28,547 | [`PEFT`] make the trainer support resume checkpoint from a named adapter #28531 | {
"login": "chenbin11200",
"id": 5245644,
"node_id": "MDQ6VXNlcjUyNDU2NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5245644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbin11200",
"html_url": "https://github.com/chenbin11200",
"followers_url": "https://api.github.com/users/chenbin11200/followers",
"following_url": "https://api.github.com/users/chenbin11200/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbin11200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbin11200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbin11200/subscriptions",
"organizations_url": "https://api.github.com/users/chenbin11200/orgs",
"repos_url": "https://api.github.com/users/chenbin11200/repos",
"events_url": "https://api.github.com/users/chenbin11200/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbin11200/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 9 | 2024-01-17T08:03:28 | 2024-01-25T08:49:49 | null | NONE | null | # What does this PR do?
Fixes # 28531
In peft>=0.5.0, when one initialize the PeftModel with a adapter name, like
```python
peft_model = get_peft_model(
    model=base_model,
    peft_config=peft_config,
    adapter_name='my_lora_model_name',
)
```
In this case, the `adapter_config.json` and `adapter_model.bin` files are saved in `/my_output_dir/checkpoint-300/my_lora_model_name` instead of directly in `/my_output_dir/checkpoint-300`, which raises a `ValueError` when trying to resume training from the checkpoint.
This PR fixes the issue by joining the adapter name into the checkpoint path and loading the adapter from the right subfolder when necessary (if no `adapter_name` is given, the weight and config files are not saved into a subfolder; that case is handled as well).
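Conceptually, the fix resolves the adapter directory like the sketch below (the helper name is illustrative; the actual change lives in the trainer's checkpoint-loading code):

```python
import os

def resolve_adapter_dir(checkpoint_dir, adapter_name=None):
    """Prefer <checkpoint>/<adapter_name> when the PeftModel was created
    with a named adapter; fall back to the checkpoint root otherwise."""
    if adapter_name:
        candidate = os.path.join(checkpoint_dir, adapter_name)
        if os.path.isfile(os.path.join(candidate, "adapter_config.json")):
            return candidate
    return checkpoint_dir
```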
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
The bug is reported there but not yet discussed. `https://github.com/huggingface/transformers/issues/28531`
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
No unit test, only tested locally for this small change.
## Who can review?
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28547/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28547",
"html_url": "https://github.com/huggingface/transformers/pull/28547",
"diff_url": "https://github.com/huggingface/transformers/pull/28547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28547.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28546/comments | https://api.github.com/repos/huggingface/transformers/issues/28546/events | https://github.com/huggingface/transformers/issues/28546 | 2,085,555,311 | I_kwDOCUB6oc58Twxv | 28,546 | How to use fp32 and qLora to fine-tune models | {
"login": "guoyunqingyue",
"id": 77528622,
"node_id": "MDQ6VXNlcjc3NTI4NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/77528622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoyunqingyue",
"html_url": "https://github.com/guoyunqingyue",
"followers_url": "https://api.github.com/users/guoyunqingyue/followers",
"following_url": "https://api.github.com/users/guoyunqingyue/following{/other_user}",
"gists_url": "https://api.github.com/users/guoyunqingyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoyunqingyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoyunqingyue/subscriptions",
"organizations_url": "https://api.github.com/users/guoyunqingyue/orgs",
"repos_url": "https://api.github.com/users/guoyunqingyue/repos",
"events_url": "https://api.github.com/users/guoyunqingyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoyunqingyue/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-17T07:16:11 | 2024-01-23T16:56:08 | null | NONE | null | ### System Info
I'm using transformers version 4.32.0 and want to fine-tune the Qwen/Qwen-VL-Chat-Int4 model, but my 1080 Ti GPU doesn't support fp16. When I try to set `training_args.fp16 = False` to modify the parameters, the error `dataclasses.FrozenInstanceError: cannot assign to field fp16` is raised, so I guess this parameter cannot be changed manually after parsing. Short of changing the GPU, what should I do so that I can fine-tune in fp32?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the fine-tuning code given by Qwen:
```python
parser = transformers.HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments, LoraArguments)
)
(
model_args,
data_args,
training_args,
lora_args,
) = parser.parse_args_into_dataclasses()
if getattr(training_args, 'deepspeed', None) and getattr(lora_args, 'q_lora', False):
training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED
training_args.fp16 = False
compute_dtype = (
torch.float16
if training_args.fp16
else (torch.bfloat16 if training_args.bf16 else torch.float32)
)
local_rank = training_args.local_rank
device_map = None
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1
if lora_args.q_lora:
device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
if len(training_args.fsdp) > 0 or deepspeed.is_deepspeed_zero3_enabled():
logging.warning(
"FSDP or ZeRO3 are not incompatible with QLoRA."
)
# Set RoPE scaling factor
config = transformers.AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
trust_remote_code=True,
)
config.use_cache = False
# Load model and tokenizer
model = transformers.AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=training_args.cache_dir,
device_map=device_map,
trust_remote_code=True,
quantization_config=GPTQConfig(
bits=4, disable_exllama=True
)
if training_args.use_lora and lora_args.q_lora
else None,
)
```
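For context, a generic way around a frozen dataclass (independent of transformers, and assuming the instance really is frozen) is `dataclasses.replace`, which builds a new instance with the changed field instead of mutating in place. A minimal, self-contained sketch with a hypothetical `Args` class:

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class Args:
    fp16: bool = True
    bf16: bool = False

args = Args()
try:
    args.fp16 = False  # direct mutation fails on a frozen dataclass
except dataclasses.FrozenInstanceError:
    # build a new instance with fp16 disabled instead
    args = dataclasses.replace(args, fp16=False)

print(args.fp16)  # False
```

Whether `TrainingArguments` tolerates `dataclasses.replace` (it re-runs `__post_init__`) would need checking; passing `--fp16 False` on the command line before parsing is the safer route.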
### Expected behavior
I would like a way to fine-tune this model with QLoRA in fp32 (i.e. without fp16) on my current GPU. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28546/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28545/comments | https://api.github.com/repos/huggingface/transformers/issues/28545/events | https://github.com/huggingface/transformers/issues/28545 | 2,085,330,571 | I_kwDOCUB6oc58S56L | 28,545 | Download reconfiguration | {
"login": "LOVE-YOURSELF-1",
"id": 71559440,
"node_id": "MDQ6VXNlcjcxNTU5NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/71559440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LOVE-YOURSELF-1",
"html_url": "https://github.com/LOVE-YOURSELF-1",
"followers_url": "https://api.github.com/users/LOVE-YOURSELF-1/followers",
"following_url": "https://api.github.com/users/LOVE-YOURSELF-1/following{/other_user}",
"gists_url": "https://api.github.com/users/LOVE-YOURSELF-1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LOVE-YOURSELF-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LOVE-YOURSELF-1/subscriptions",
"organizations_url": "https://api.github.com/users/LOVE-YOURSELF-1/orgs",
"repos_url": "https://api.github.com/users/LOVE-YOURSELF-1/repos",
"events_url": "https://api.github.com/users/LOVE-YOURSELF-1/events{/privacy}",
"received_events_url": "https://api.github.com/users/LOVE-YOURSELF-1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-17T03:43:08 | 2024-01-22T15:11:50 | null | NONE | null | ### Feature request
Download reconfiguration
### Motivation
Separate out the download step of the `from_pretrained` functions for the model, configuration, and tokenizer.
### Your contribution
Do you think it is necessary? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28545/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28544/comments | https://api.github.com/repos/huggingface/transformers/issues/28544/events | https://github.com/huggingface/transformers/issues/28544 | 2,085,205,752 | I_kwDOCUB6oc58Sbb4 | 28,544 | Early stopping patience does not work when resuming from checkpoint | {
"login": "Ubadub",
"id": 1286898,
"node_id": "MDQ6VXNlcjEyODY4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ubadub",
"html_url": "https://github.com/Ubadub",
"followers_url": "https://api.github.com/users/Ubadub/followers",
"following_url": "https://api.github.com/users/Ubadub/following{/other_user}",
"gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions",
"organizations_url": "https://api.github.com/users/Ubadub/orgs",
"repos_url": "https://api.github.com/users/Ubadub/repos",
"events_url": "https://api.github.com/users/Ubadub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ubadub/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-17T01:00:05 | 2024-01-17T10:06:48 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0.post101 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Fundamentally the issue is that the `early_stopping_patience_counter` is not persisted when checkpointing. Consequently, it is [always (re)set to 0](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/trainer_callback.py#L564) when initializing `Trainer`, including when resuming from checkpoint. This means that if, for example, you never train your model for `early_stopping_patience`-many evaluation steps at once before stopping and resuming from checkpoint, early stopping will never happen.
An auxiliary issue is that even if you train your model for longer than `early_stopping_patience`-many evaluation steps, and training correctly stops, if you happen to then re-initiate training from a checkpoint, training will resume [even though the run ended with `self.control.should_training_stop == True`](https://github.com/huggingface/transformers/blob/f4f57f9dfa68948a383c352a900d588f63f6290a/src/transformers/trainer_callback.py#L154). This is because this variable is also not persisted to the `trainer_state.json` file when checkpointing. This issue was reported in #10290, but was never resolved before the issue was closed as stale.
To reproduce the main issue, simply initiate a training run and set `early_stopping_patience` to a value of your choice, then interrupt training before the run gets there. Reinitiate training with `resume_from_checkpoint=True`. Rinse and repeat until `best_metric` increases for `early_stopping_patience`-many evaluation calls.
To reproduce the auxiliary issue, don't interrupt your run until it stops due to early stopping. When it is complete, reinitiate training with `resume_from_checkpoint=True`.
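For illustration, the counter reset described above can be sketched with a simplified stand-in (not the actual `transformers` implementation; assumes a higher-is-better metric):

```python
# Minimal sketch of why patience is lost across restarts: the counter lives
# only on the callback object, so re-creating the Trainer (and therefore the
# callback) on resume starts it back at 0.
class EarlyStoppingSketch:
    def __init__(self, patience):
        self.patience = patience
        self.counter = 0  # reset on every (re)initialization
        self.best_metric = None

    def on_evaluate(self, metric):
        """Return True when training should stop."""
        if self.best_metric is None or metric > self.best_metric:
            self.best_metric = metric
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience


# First "run": two non-improving evaluations, one short of patience=3.
cb = EarlyStoppingSketch(patience=3)
for m in (0.9, 0.8, 0.8):
    stopped = cb.on_evaluate(m)
assert cb.counter == 2 and not stopped

# "Resume from checkpoint": a fresh callback forgets the counter entirely.
cb = EarlyStoppingSketch(patience=3)
assert cb.counter == 0
```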
### Expected behavior
Early stopping patience should work exactly the same when stopping and resuming runs from a checkpoint as when training continuously without interruption. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28544/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28543/comments | https://api.github.com/repos/huggingface/transformers/issues/28543/events | https://github.com/huggingface/transformers/issues/28543 | 2,085,134,751 | I_kwDOCUB6oc58SKGf | 28,543 | IsADirectoryError: [Errno 21] Is a directory: 'my-company/my-llm' | {
"login": "gventuri",
"id": 15671184,
"node_id": "MDQ6VXNlcjE1NjcxMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/15671184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gventuri",
"html_url": "https://github.com/gventuri",
"followers_url": "https://api.github.com/users/gventuri/followers",
"following_url": "https://api.github.com/users/gventuri/following{/other_user}",
"gists_url": "https://api.github.com/users/gventuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gventuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gventuri/subscriptions",
"organizations_url": "https://api.github.com/users/gventuri/orgs",
"repos_url": "https://api.github.com/users/gventuri/repos",
"events_url": "https://api.github.com/users/gventuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gventuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2024-01-16T23:30:56 | 2024-01-18T08:46:08 | 2024-01-17T20:48:04 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.27.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- Run the following code:
```python
from transformers import AutoModelForCausalLM
import torch
m = AutoModelForCausalLM.from_pretrained(
"pretrained-model",
return_dict=True,
torch_dtype=torch.bfloat16,
device_map="auto"
)
m.push_to_hub("my-company/my-llm")
```
- I get the following error
```
---------------------------------------------------------------------------
IsADirectoryError Traceback (most recent call last)
Cell In[5], line 1
----> 1 m.push_to_hub("my-company/my-llm")
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:2530, in PreTrainedModel.push_to_hub(self, *args, **kwargs)
2528 if tags:
2529 kwargs["tags"] = tags
-> 2530 return super().push_to_hub(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py:865, in PushToHubMixin.push_to_hub(self, repo_id, use_temp_dir, commit_message, private, token, max_shard_size, create_pr, safe_serialization, revision, commit_description, tags, **deprecated_kwargs)
860 repo_id = self._create_repo(
861 repo_id, private=private, token=token, repo_url=repo_url, organization=organization
862 )
864 # Create a new empty model card and eventually tag it
--> 865 model_card = create_and_tag_model_card(
866 repo_id, tags, token=token, ignore_metadata_errors=ignore_metadata_errors
867 )
869 if use_temp_dir is None:
870 use_temp_dir = not os.path.isdir(working_dir)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py:1120, in create_and_tag_model_card(repo_id, tags, token, ignore_metadata_errors)
1104 """
1105 Creates or loads an existing model card and tags it.
1106
(...)
1116 the process. Use it at your own risk.
1117 """
1118 try:
1119 # Check if the model card is present on the remote repo
-> 1120 model_card = ModelCard.load(repo_id, token=token, ignore_metadata_errors=ignore_metadata_errors)
1121 except EntryNotFoundError:
1122 # Otherwise create a simple model card from template
1123 model_description = "This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated."
File /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:185, in RepoCard.load(cls, repo_id_or_path, repo_type, token, ignore_metadata_errors)
182 raise ValueError(f"Cannot load RepoCard: path not found on disk ({repo_id_or_path}).")
184 # Preserve newlines in the existing file.
--> 185 with card_path.open(mode="r", newline="", encoding="utf-8") as f:
186 return cls(f.read(), ignore_metadata_errors=ignore_metadata_errors)
File /opt/conda/lib/python3.10/pathlib.py:1119, in Path.open(self, mode, buffering, encoding, errors, newline)
1117 if "b" not in mode:
1118 encoding = io.text_encoding(encoding)
-> 1119 return self._accessor.open(self, mode, buffering, encoding, errors,
1120 newline)
IsADirectoryError: [Errno 21] Is a directory: 'my-company/my-llm'
```
This happens every time I run `push_to_hub`, even with other configurations.
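For reference, the underlying failure mode — opening a path that is actually a directory — can be reproduced in isolation (paths here are hypothetical):

```python
# Minimal reproduction of the error in the traceback above: opening a
# directory path as if it were a file raises IsADirectoryError (errno 21).
import errno
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    repo_like = os.path.join(tmp, "my-company", "my-llm")
    os.makedirs(repo_like)  # a local directory that shadows the hub repo id
    try:
        open(repo_like, "r")
        raised = None
    except IsADirectoryError as exc:  # [Errno 21] Is a directory
        raised = exc

assert raised is not None and raised.errno == errno.EISDIR
```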
### Expected behavior
I would expect this code to push the pretrained model to the Hugging Face Hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28543/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28542/comments | https://api.github.com/repos/huggingface/transformers/issues/28542/events | https://github.com/huggingface/transformers/pull/28542 | 2,084,992,834 | PR_kwDOCUB6oc5kQBBB | 28,542 | [docs] DeepSpeed | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T21:35:13 | 2024-01-24T16:31:32 | 2024-01-24T16:31:28 | MEMBER | null | Refactors the [DeepSpeed API page](https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#deepspeed-integration) to make it easier to find and view `HfDeepSpeedConfig`, the only actual API reference on this doc. The rest of the DeepSpeed content will go to the Efficient Training Techniques section as a standalone guide. I've also moved some of the troubleshooting content with building DeepSpeed to the [Debugging guide](https://huggingface.co/docs/transformers/main/en/debugging) with a link.
todo:
- [x] discuss choosing which ZeRO stage to use
- [x] get model weights out
- [x] ZeRO-3 and inference
- [x] memory requirements
- [x] troubleshooting/filing issues
- [x] non-Trainer DeepSpeed integration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28542/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28542",
"html_url": "https://github.com/huggingface/transformers/pull/28542",
"diff_url": "https://github.com/huggingface/transformers/pull/28542.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28542.patch",
"merged_at": "2024-01-24T16:31:28"
} |
https://api.github.com/repos/huggingface/transformers/issues/28541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28541/comments | https://api.github.com/repos/huggingface/transformers/issues/28541/events | https://github.com/huggingface/transformers/issues/28541 | 2,084,552,019 | I_kwDOCUB6oc58P71T | 28,541 | LLM fine-tuning with deepspeed | {
"login": "vallabh001",
"id": 88985147,
"node_id": "MDQ6VXNlcjg4OTg1MTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/88985147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vallabh001",
"html_url": "https://github.com/vallabh001",
"followers_url": "https://api.github.com/users/vallabh001/followers",
"following_url": "https://api.github.com/users/vallabh001/following{/other_user}",
"gists_url": "https://api.github.com/users/vallabh001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vallabh001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vallabh001/subscriptions",
"organizations_url": "https://api.github.com/users/vallabh001/orgs",
"repos_url": "https://api.github.com/users/vallabh001/repos",
"events_url": "https://api.github.com/users/vallabh001/events{/privacy}",
"received_events_url": "https://api.github.com/users/vallabh001/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-16T18:13:40 | 2024-01-16T21:00:17 | null | NONE | null | I was trying to fine-tune Llama 2 by referring to this [blog](https://www.philschmid.de/instruction-tune-llama-2l), but the training time is very high (in days). So, I was thinking of using DeepSpeed optimization for the training process. However, there is no proper documentation for fine-tuning LLMs using DeepSpeed.
I executed the command below to start training, but encountered an error. I have a single A100 40GB GPU.
```shell
torchrun --num_gpus=1 --nnodes 1 --nproc_per_node 1 llm_training.py --deepspeed "ds_zero2_no_offload.json" --ddp_find_unused_parameters False
```
Error
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 447 with name base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28541/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28540/comments | https://api.github.com/repos/huggingface/transformers/issues/28540/events | https://github.com/huggingface/transformers/pull/28540 | 2,084,533,824 | PR_kwDOCUB6oc5kOY9P | 28,540 | Remove CaptureLogger | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-16T18:00:49 | 2024-01-22T15:40:07 | 2024-01-22T15:40:07 | COLLABORATOR | null | # What does this PR do?
Replace our custom CaptureLogger class with the Python standard library's `unittest.TestCase.assertLogs`.
Makes testing for no logs being raised more robust - asserts no logs rather than string matching.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28540/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28540",
"html_url": "https://github.com/huggingface/transformers/pull/28540",
"diff_url": "https://github.com/huggingface/transformers/pull/28540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28540.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28539/comments | https://api.github.com/repos/huggingface/transformers/issues/28539/events | https://github.com/huggingface/transformers/issues/28539 | 2,084,510,356 | I_kwDOCUB6oc58PxqU | 28,539 | `load_best_model_at_end` is inconsistent with evaluation (and save) logic at end of training | {
"login": "antoine-lizee",
"id": 2957716,
"node_id": "MDQ6VXNlcjI5NTc3MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2957716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoine-lizee",
"html_url": "https://github.com/antoine-lizee",
"followers_url": "https://api.github.com/users/antoine-lizee/followers",
"following_url": "https://api.github.com/users/antoine-lizee/following{/other_user}",
"gists_url": "https://api.github.com/users/antoine-lizee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoine-lizee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoine-lizee/subscriptions",
"organizations_url": "https://api.github.com/users/antoine-lizee/orgs",
"repos_url": "https://api.github.com/users/antoine-lizee/repos",
"events_url": "https://api.github.com/users/antoine-lizee/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoine-lizee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-16T17:43:53 | 2024-01-19T12:25:54 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.3
- Accelerate version: 0.26.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Shortened script below:
```python
model_checkpoint = "xlm-roberta-large"
model_name = model_checkpoint.split("/")[-1]
model = XLMRobertaForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))
batch_size = 32
learning_rate = 2e-5
eval_steps = 0.1
# The data + batch size leads to having 11277 steps
training_args = TrainingArguments(
output_dir_name,
logging_dir=run_dir,
logging_strategy="steps",
logging_steps=eval_steps / 5,
evaluation_strategy="steps",
eval_steps=eval_steps,
save_strategy="steps",
save_steps=eval_steps,
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=epochs,
weight_decay=0.01,
push_to_hub=False,
save_total_limit=4,
load_best_model_at_end=True
)
data_collator = DataCollatorForTokenClassification(tokenizer)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
# Train the model
trainer.train()
```
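For reference, a quick arithmetic sketch of the schedule the script above produces (the interval value is an assumption — `eval_steps=0.1` taken as roughly 10% of the 11277 total steps):

```python
# Evaluation/saving only fires when global_step is an exact multiple of the
# interval, so unless max_steps happens to be such a multiple, the final
# stretch of training is never evaluated or checkpointed.
max_steps = 11277
eval_interval = 1128  # hypothetical resolution of the 0.1 ratio

evaluated = [step for step in range(1, max_steps + 1) if step % eval_interval == 0]

assert evaluated[-1] == 10152             # last evaluation/checkpoint
assert evaluated[-1] != max_steps         # the final step never triggers the check
assert max_steps - evaluated[-1] == 1125  # ~10% of training after the last save
```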
### Expected behavior
I would expect that my model is evaluated (and saved!) at the last step.
It is not, and in most example scripts we see `trainer.evaluate()` after the `trainer.train()`.
As a result, when we set `load_best_model_at_end=True` we concretely **discard any training that happened after the last checkpoint**, which seems wrong. In my case, the last 10% of training is discarded.
My understanding of what's happening:
- In the trainer callback, we check ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L447)) if the `global_step` is a multiple of the `eval_steps`. If the total number of steps is not a multiple of it, this condition is not met at the last step.
- If we `load_best_model_at_end`, the last accessible evaluation does not include the performance of the latest stages of training.
- As a side note, running `trainer.evaluate()` by hand after the training only re-evaluates the past checkpoint that was selected as the best. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28539/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28538/comments | https://api.github.com/repos/huggingface/transformers/issues/28538/events | https://github.com/huggingface/transformers/pull/28538 | 2,084,408,051 | PR_kwDOCUB6oc5kN9vs | 28,538 | [`gradient_checkpointing`] default to use it for torch 2.3 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-16T16:39:44 | 2024-01-23T14:27:48 | null | COLLABORATOR | null | # What does this PR do?
Fixes #28536 in preparation for next torch release | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28538/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28538",
"html_url": "https://github.com/huggingface/transformers/pull/28538",
"diff_url": "https://github.com/huggingface/transformers/pull/28538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28538.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28537/comments | https://api.github.com/repos/huggingface/transformers/issues/28537/events | https://github.com/huggingface/transformers/pull/28537 | 2,084,372,239 | PR_kwDOCUB6oc5kN15A | 28,537 | Fixes default value of `softmax_scale` in `PhiFlashAttention2`. | {
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-16T16:19:24 | 2024-01-17T13:45:44 | 2024-01-17T13:22:45 | CONTRIBUTOR | null | # What does this PR do?
- Phi has never used `softmax_scale=1.0` with Flash-Attention, so the default is being moved to `None`. This tentatively fixes any issue regarding fine-tuning Phi-based checkpoints when Flash-Attention 2 is turned on.
- Documentation is also updated to reflect the official Phi checkpoints.
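For context, a minimal sketch of the scaling semantics involved (an illustrative helper, not the actual `flash-attn` API):

```python
import math

# softmax_scale=None conventionally falls back to the standard
# 1/sqrt(head_dim) attention scaling, while a hard-coded 1.0 means the
# attention scores are left unscaled.
def resolve_softmax_scale(head_dim, softmax_scale=None):
    return softmax_scale if softmax_scale is not None else 1.0 / math.sqrt(head_dim)

assert resolve_softmax_scale(64) == 0.125     # None -> 1/sqrt(64)
assert resolve_softmax_scale(64, 1.0) == 1.0  # the previous hard-coded value
```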
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28488 (tentative)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @susnato
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28537/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28537/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28537",
"html_url": "https://github.com/huggingface/transformers/pull/28537",
"diff_url": "https://github.com/huggingface/transformers/pull/28537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28537.patch",
"merged_at": "2024-01-17T13:22:45"
} |
https://api.github.com/repos/huggingface/transformers/issues/28536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28536/comments | https://api.github.com/repos/huggingface/transformers/issues/28536/events | https://github.com/huggingface/transformers/issues/28536 | 2,084,363,783 | I_kwDOCUB6oc58PN4H | 28,536 | Gradient checkpointing throws use_reentrant warning on PyTorch 2.1 | {
"login": "rosario-purple",
"id": 123594463,
"node_id": "U_kgDOB13m3w",
"avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosario-purple",
"html_url": "https://github.com/rosario-purple",
"followers_url": "https://api.github.com/users/rosario-purple/followers",
"following_url": "https://api.github.com/users/rosario-purple/following{/other_user}",
"gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions",
"organizations_url": "https://api.github.com/users/rosario-purple/orgs",
"repos_url": "https://api.github.com/users/rosario-purple/repos",
"events_url": "https://api.github.com/users/rosario-purple/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosario-purple/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-16T16:14:29 | 2024-01-16T16:37:26 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': False, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Training any text model with gradient checkpointing enabled on PyTorch 2.1 and higher produces this warning:
```
/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: Warning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
```
This can be resolved by manually monkey-patching the model code with `use_reentrant=True`, e.g. like so:
```
hidden_states, self_attns, decoder_cache = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer),
hidden_states,
attention_mask,
position_ids,
None,
is_padded_inputs,
use_reentrant=True,
)
```
This is caused by an upstream change in PyTorch:
https://medium.com/pytorch/how-activation-checkpointing-enables-scaling-up-training-deep-learning-models-7a93ae01ff2d
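For readers hitting the same warning, here is a minimal, dependency-free sketch of the monkey-patching idea: wrap the checkpoint function once so `use_reentrant` is always passed explicitly, instead of editing every call site. `fake_checkpoint` is a stand-in for `torch.utils.checkpoint.checkpoint` so the pattern is runnable here; in a real script you would wrap the PyTorch function itself.

```python
import functools

def with_explicit_reentrant(checkpoint_fn, use_reentrant=True):
    # Return a wrapper that always forwards the flag explicitly.
    return functools.partial(checkpoint_fn, use_reentrant=use_reentrant)

def fake_checkpoint(fn, *args, **kwargs):
    # Stand-in for torch.utils.checkpoint.checkpoint: just run fn and echo the kwargs.
    return fn(*args), kwargs

patched = with_explicit_reentrant(fake_checkpoint)
out, kwargs = patched(lambda x: x + 1, 41)
print(out, kwargs)  # 42 {'use_reentrant': True}
```

Recent `transformers` releases also expose `model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})`; whether your installed version supports that argument should be verified before relying on it.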
### Expected behavior
No warning should be written | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28536/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28535/comments | https://api.github.com/repos/huggingface/transformers/issues/28535/events | https://github.com/huggingface/transformers/pull/28535 | 2,084,210,365 | PR_kwDOCUB6oc5kNR55 | 28,535 | Allow add_tokens for ESM | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T15:06:24 | 2024-01-19T12:32:07 | 2024-01-19T12:32:06 | MEMBER | null | The tokenizer code for ESM forces all added tokens to be special tokens, presumably because the authors felt that the list of amino acids in proteins was constant and therefore that there wouldn't be a need to actually expand the core vocabulary. However, there are definitely use-cases for expanding the vocabulary - see #28387.
This PR makes `add_tokens()` for ESM tokenizers behave like it does for other tokenizers, and doesn't force the added tokens to be special tokens.
Fixes #28387 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28535/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28535",
"html_url": "https://github.com/huggingface/transformers/pull/28535",
"diff_url": "https://github.com/huggingface/transformers/pull/28535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28535.patch",
"merged_at": "2024-01-19T12:32:06"
} |
https://api.github.com/repos/huggingface/transformers/issues/28534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28534/comments | https://api.github.com/repos/huggingface/transformers/issues/28534/events | https://github.com/huggingface/transformers/issues/28534 | 2,084,136,856 | I_kwDOCUB6oc58OWeY | 28,534 | run_glue_no_trainer.py script crashes on Mistral model due to tokenizer issue | {
"login": "rosario-purple",
"id": 123594463,
"node_id": "U_kgDOB13m3w",
"avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosario-purple",
"html_url": "https://github.com/rosario-purple",
"followers_url": "https://api.github.com/users/rosario-purple/followers",
"following_url": "https://api.github.com/users/rosario-purple/following{/other_user}",
"gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions",
"organizations_url": "https://api.github.com/users/rosario-purple/orgs",
"repos_url": "https://api.github.com/users/rosario-purple/repos",
"events_url": "https://api.github.com/users/rosario-purple/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosario-purple/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-16T14:39:44 | 2024-01-16T19:28:38 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': False, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker @younesbelkada @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Check out the transformers repo, and run this command (on a large server with appropriately configured `accelerate`, so it won't OOM):
`python run_glue_no_trainer.py --model_name_or_path mistralai/Mistral-7B-v0.1 --task_name sst2 --per_device_train_batch_size 4 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/sst2`
It will crash with this error and stack trace:
```
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Traceback (most recent call last):
File "/scratch/brr/run_glue.py", line 662, in <module>
main()
File "/scratch/brr/run_glue.py", line 545, in main
for step, batch in enumerate(active_dataloader):
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/accelerate/data_loader.py", line 448, in __iter__
current_batch = next(dataloader_iter)
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 674, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/data/data_collator.py", line 249, in __call__
batch = self.tokenizer.pad(
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3259, in pad
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2707, in _get_padding_truncation_strategies
raise ValueError(
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
/scratch/miniconda3/envs/brr/lib/python3.10/tempfile.py:860: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmprbynkmzk'>
_warnings.warn(warn_message, ResourceWarning)
```
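A minimal sketch of the workaround the error message itself suggests: reuse EOS as the pad token before the padding collator is built. `DummyTokenizer` is a stand-in for the real `LlamaTokenizerFast` (which ships without a pad token), so the snippet is runnable on its own.

```python
class DummyTokenizer:
    # Stand-in for LlamaTokenizerFast: no pad token out of the box.
    pad_token = None
    eos_token = "</s>"

tokenizer = DummyTokenizer()
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as padding, as the error suggests

print(tokenizer.pad_token)  # </s>
```

In the actual script this guard would go right after `AutoTokenizer.from_pretrained(...)`.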
### Expected behavior
It should train without crashing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28534/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28533/comments | https://api.github.com/repos/huggingface/transformers/issues/28533/events | https://github.com/huggingface/transformers/pull/28533 | 2,083,951,133 | PR_kwDOCUB6oc5kMYb8 | 28,533 | Fix attention mask creation for GPTNeo | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-16T13:19:22 | 2024-01-30T00:49:10 | null | MEMBER | null | # What does this PR do?
It seems that #26486 broke the way the attention mask was created. It creates a causal attention mask by default, but there is already a causal attention mask in the `GPTNeoSelfAttention` modules, resulting in `NaN`s.
I am not sure the solution is perfect, so I'm open to suggestions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28533/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28533",
"html_url": "https://github.com/huggingface/transformers/pull/28533",
"diff_url": "https://github.com/huggingface/transformers/pull/28533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28533.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28532/comments | https://api.github.com/repos/huggingface/transformers/issues/28532/events | https://github.com/huggingface/transformers/issues/28532 | 2,083,781,693 | I_kwDOCUB6oc58M_w9 | 28,532 | Inconsistent check for is_accelerate_available() in transformers.training_args | {
"login": "faph",
"id": 8397805,
"node_id": "MDQ6VXNlcjgzOTc4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8397805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faph",
"html_url": "https://github.com/faph",
"followers_url": "https://api.github.com/users/faph/followers",
"following_url": "https://api.github.com/users/faph/following{/other_user}",
"gists_url": "https://api.github.com/users/faph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faph/subscriptions",
"organizations_url": "https://api.github.com/users/faph/orgs",
"repos_url": "https://api.github.com/users/faph/repos",
"events_url": "https://api.github.com/users/faph/events{/privacy}",
"received_events_url": "https://api.github.com/users/faph/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-16T12:01:09 | 2024-01-16T16:30:29 | 2024-01-16T16:30:29 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.2
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: unknown
- Using distributed or parallel set-up in script?: unknown
Output of `pip show accelerate`:
```
Name: accelerate
Version: 0.20.3
(...)
```
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Pip install `transformers 4.36.2` and `accelerate 0.20.3`
2. Instantiate `transformers.TrainingArguments`
This raises something like this:
```
@cached_property
def _setup_devices(self) -> "torch.device":
requires_backends(self, ["torch"])
logger.info("PyTorch: setting up devices")
if not is_sagemaker_mp_enabled():
if not is_accelerate_available(min_version="0.20.1"):
raise ImportError(
"Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
)
> AcceleratorState._reset_state(reset_partial_state=True)
E NameError: name 'AcceleratorState' is not defined
```
This is because the import of `AcceleratorState` is conditional on `accelerate` having minimum version `0.21.0`. See https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/utils/import_utils.py#L684
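A stdlib-only sketch of what a single, shared minimum-version comparison could look like, so the import gate and the runtime check cannot drift apart. The helper name and version strings below are illustrative, not the library's actual API:

```python
def meets_min_version(installed: str, minimum: str) -> bool:
    # Compare the first three numeric components of two version strings.
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) >= parse(minimum)

print(meets_min_version("0.20.3", "0.21.0"))  # False: old accelerate, import is skipped
print(meets_min_version("0.25.0", "0.20.1"))  # True
```

With one such function used by both the import guard and `_setup_devices`, accelerate 0.20.3 would be rejected consistently instead of passing the 0.20.1 check and then failing on the missing `AcceleratorState` import.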
### Expected behavior
Consistent min version check for `accelerate` and successful `TrainingArguments` instantiation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28532/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28531/comments | https://api.github.com/repos/huggingface/transformers/issues/28531/events | https://github.com/huggingface/transformers/issues/28531 | 2,083,735,951 | I_kwDOCUB6oc58M0mP | 28,531 | A named Peft Model doesn't work with resume_from_checkpoint=True | {
"login": "chenbin11200",
"id": 5245644,
"node_id": "MDQ6VXNlcjUyNDU2NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5245644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbin11200",
"html_url": "https://github.com/chenbin11200",
"followers_url": "https://api.github.com/users/chenbin11200/followers",
"following_url": "https://api.github.com/users/chenbin11200/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbin11200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbin11200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbin11200/subscriptions",
"organizations_url": "https://api.github.com/users/chenbin11200/orgs",
"repos_url": "https://api.github.com/users/chenbin11200/repos",
"events_url": "https://api.github.com/users/chenbin11200/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbin11200/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-16T11:40:00 | 2024-01-17T11:51:03 | null | NONE | null | ### System Info
transformers==4.36.2
peft==0.5.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi @muellerzr and @pacman100,
It seem like if I try to resume a lora training by using
```
trainer.train(resume_from_checkpoint=True)
```
it fails with the following error:
```
ValueError: Can't find a valid checkpoint at /my_output_dir/checkpoint-300
```
By checking the code, I figured out that the resuming process is stopped by the following check in `Trainer._load_from_checkpoint`:
```
if not (
any(
os.path.isfile(f)
for f in [
weights_file,
safe_weights_file,
weights_index_file,
safe_weights_index_file,
adapter_weights_file,
adapter_safe_weights_file,
]
)
or is_fsdp_ckpt
):
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
```
Since I initialize the PeftModel with an adapter name, which is used to manage my adapters.
```
peft_model = get_peft_model(model=base_model,
peft_config=peft_config,
adapter_name='my_lora_model_name',
```
In this case, the `adapter_config.json` and `adapter_model.bin` files will be saved in `/my_output_dir/checkpoint-300/my_lora_model_name` instead of `/my_output_dir/checkpoint-300` directly. That's why the ValueError is raised.
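One possible direction for a fix, sketched with stdlib only: extend the validity check to also look inside subfolders, since a named adapter is saved under `<checkpoint>/<adapter_name>/`. The helper name and file names are illustrative, not the exact list the Trainer uses:

```python
import os

def has_valid_checkpoint(folder, weight_names=("adapter_model.bin", "adapter_model.safetensors")):
    # Walk the checkpoint folder so adapters saved in named subdirectories are found too.
    for _root, _dirs, files in os.walk(folder):
        if any(name in files for name in weight_names):
            return True
    return False
```

With something like this, `/my_output_dir/checkpoint-300/my_lora_model_name/adapter_model.bin` would count as a valid checkpoint.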
I am not sure whether this is a known issue, or what the proper way to fix it is. Or do I have to write my own PeftTrainer to handle this?
Thank you in advance for your support.
Best regards.
### Expected behavior
Resuming the training with a named PEFT model is supported. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28531/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28530/comments | https://api.github.com/repos/huggingface/transformers/issues/28530/events | https://github.com/huggingface/transformers/issues/28530 | 2,083,730,860 | I_kwDOCUB6oc58MzWs | 28,530 | Early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled | {
"login": "ManishChandra12",
"id": 17062142,
"node_id": "MDQ6VXNlcjE3MDYyMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/17062142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManishChandra12",
"html_url": "https://github.com/ManishChandra12",
"followers_url": "https://api.github.com/users/ManishChandra12/followers",
"following_url": "https://api.github.com/users/ManishChandra12/following{/other_user}",
"gists_url": "https://api.github.com/users/ManishChandra12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManishChandra12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManishChandra12/subscriptions",
"organizations_url": "https://api.github.com/users/ManishChandra12/orgs",
"repos_url": "https://api.github.com/users/ManishChandra12/repos",
"events_url": "https://api.github.com/users/ManishChandra12/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManishChandra12/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2024-01-16T11:37:37 | 2024-01-24T14:18:16 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer
from transformers import EarlyStoppingCallback, IntervalStrategy
import numpy as np
import evaluate
import pandas as pd
os.environ["CUDA_VISIBLE_DEVICES"]=str(gpu_id)
from datasets import Dataset, DatasetDict
train_k = pd.read_csv('train.csv', usecols=["text", "k"])
train_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
val_k = pd.read_csv('val.csv', usecols=["text", "k"])
val_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
test_k = pd.read_csv('test.csv', usecols=["text", "k"])
test_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
train_k = Dataset.from_pandas(train_k)
val_k = Dataset.from_pandas(val_k)
test_k = Dataset.from_pandas(test_k)
ds = DatasetDict()
ds['train'] = train_k
ds['val'] = val_k
ds['test'] = test_k
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(str(examples['text']), padding="max_length", truncation=True)
tokenized_datasets = ds.map(tokenize_function)
tokenized_train_k = tokenized_datasets["train"]
tokenized_val_k = tokenized_datasets["val"]
model_k = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)
training_args = TrainingArguments(output_dir="trained_k_predictors", evaluation_strategy="steps", eval_steps=100, metric_for_best_model = 'f1', learning_rate=1e-3, num_train_epochs=5, weight_decay=0.01, load_best_model_at_end=True, per_device_train_batch_size = 16, per_device_eval_batch_size = 32, save_total_limit = 3, optim="adafactor", label_names=['label'], remove_unused_columns=False,)
metric = evaluate.load("f1")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return {'f1': metric.compute(predictions=predictions, references=labels)}
trainer = Trainer(model=model_k, args=training_args, train_dataset=tokenized_train_k, eval_dataset=tokenized_val_k, compute_metrics=compute_metrics, callbacks = [EarlyStoppingCallback(early_stopping_patience=3)])
trainer.train()
```
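Independently of the root cause of the `KeyError`, one pitfall worth checking in the script above: `evaluate`'s `metric.compute()` already returns a dict like `{'f1': value}`, so wrapping it in another dict makes the logged metric a nested dict rather than a scalar, and multiclass F1 generally needs an explicit `average`. A runnable sketch, with `fake_compute` standing in for `evaluate.load("f1").compute`:

```python
def fake_compute(predictions, references, average="macro"):
    # Stand-in for evaluate's compute(): it returns a dict keyed by the metric name.
    return {"f1": 0.5}

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Plain-Python argmax over each row of logits.
    predictions = [max(range(len(row)), key=row.__getitem__) for row in logits]
    result = fake_compute(predictions=predictions, references=labels, average="macro")
    return {"f1": result["f1"]}  # unwrap so the Trainer logs a scalar eval_f1

print(compute_metrics(([[0.1, 0.9], [0.8, 0.2]], [1, 0])))  # {'f1': 0.5}
```

The same unwrapping applies to the real `metric.compute(...)` call in the script.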
## Error message:
{'eval_runtime': 21.6631, 'eval_samples_per_second': 208.926, 'eval_steps_per_second': 3.277, 'epoch': 0.47}
9%|███████████████▉ | 500/5305 [06:15<41:00, 1.95it/s]
100%|██████████████████████████████████████████████████| 71/71 [00:21<00:00, 3.42it/s]
early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled
Traceback (most recent call last):
File "/scratch/manish/apl/apl_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/scratch/manish/apl/apl_env/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/scratch/manish/apl/src/apl.py", line 386, in <module>
main()
File "/scratch/manish/apl/src/apl.py", line 139, in main
trainer.train()
File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 1922, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 2282, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 2407, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_f1'
9%|███████████████▉ | 500/5305 [06:18<1:00:34, 1.32it/s]
### Expected behavior
Train the model with early stopping enabled. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28530/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28529/comments | https://api.github.com/repos/huggingface/transformers/issues/28529/events | https://github.com/huggingface/transformers/issues/28529 | 2,083,589,686 | I_kwDOCUB6oc58MQ42 | 28,529 | Error while fetching adapter layer from huggingface library | {
"login": "Muskanb",
"id": 35324348,
"node_id": "MDQ6VXNlcjM1MzI0MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/35324348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muskanb",
"html_url": "https://github.com/Muskanb",
"followers_url": "https://api.github.com/users/Muskanb/followers",
"following_url": "https://api.github.com/users/Muskanb/following{/other_user}",
"gists_url": "https://api.github.com/users/Muskanb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muskanb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muskanb/subscriptions",
"organizations_url": "https://api.github.com/users/Muskanb/orgs",
"repos_url": "https://api.github.com/users/Muskanb/repos",
"events_url": "https://api.github.com/users/Muskanb/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muskanb/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-16T10:34:11 | 2024-01-16T12:55:47 | null | NONE | null | ### System Info
```
pa_extractor = LlamaForCausalLM.from_pretrained(LLAMA_MODEL_NAME,
token=HF_ACCESS_TOKEN,
max_length=LLAMA2_MAX_LENGTH,
pad_token_id=cls.tokenizer.eos_token_id,
device_map="auto",
quantization_config=bnb_config)
pa_extractor.load_adapter(PEFT_MODEL_NAME, token=HF_ACCESS_TOKEN, device_map="auto")
```
# Getting the below error while executing:
401 client error, Repository Not Found for url: https://huggingface.co/muskan/llama2/resolve/main/adapter_model.safetensors. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password. Fetching the model itself works fine, but it fails at the `load_adapter` step.
### Who can help?
@Narsil @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The model is in my private repo; this should be reproducible if you try to use `load_adapter` to fetch any adapter layer from HF directly.
### Expected behavior
Should be able to download the PEFT adapter layer successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28529/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28528/comments | https://api.github.com/repos/huggingface/transformers/issues/28528/events | https://github.com/huggingface/transformers/issues/28528 | 2,083,582,439 | I_kwDOCUB6oc58MPHn | 28,528 | The generation speed on NPU is too slow | {
"login": "hhllxx1121",
"id": 96508996,
"node_id": "U_kgDOBcCcRA",
"avatar_url": "https://avatars.githubusercontent.com/u/96508996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhllxx1121",
"html_url": "https://github.com/hhllxx1121",
"followers_url": "https://api.github.com/users/hhllxx1121/followers",
"following_url": "https://api.github.com/users/hhllxx1121/following{/other_user}",
"gists_url": "https://api.github.com/users/hhllxx1121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhllxx1121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhllxx1121/subscriptions",
"organizations_url": "https://api.github.com/users/hhllxx1121/orgs",
"repos_url": "https://api.github.com/users/hhllxx1121/repos",
"events_url": "https://api.github.com/users/hhllxx1121/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhllxx1121/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-16T10:31:10 | 2024-01-16T15:01:08 | null | NONE | null | The generation speed on the NPU device is too slow. The first conversation takes about 5 minutes, and subsequent ones may be faster. Is there an error somewhere? Below is my code demo:
```python
import torch
import torch_npu
from transformers import LlamaForCausalLM, LlamaTokenizer, TextStreamer
tokenizer = LlamaTokenizer.from_pretrained(
"",
device_map="npu:2"
)
llama_model = LlamaForCausalLM.from_pretrained(
"",
device_map="npu:2"
)
streamer = TextStreamer(tokenizer)
while True:
    ins = input("user: ")
    res = tokenizer.encode(ins, return_tensors="pt").to("npu:2")
    outputs = llama_model.generate(
        inputs=res,
        streamer=streamer,
        max_new_tokens=100,
    )
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28528/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28527/comments | https://api.github.com/repos/huggingface/transformers/issues/28527/events | https://github.com/huggingface/transformers/pull/28527 | 2,083,559,986 | PR_kwDOCUB6oc5kLAk3 | 28,527 | [`TokenizationRoformerFast`] Fix the save and loading | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-16T10:21:40 | 2024-01-16T15:37:17 | 2024-01-16T15:37:16 | COLLABORATOR | null | # What does this PR do?
Fixes #28164, the pre tokenizer state was not correctly set after saving a fast tokenizer only. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28527/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28527",
"html_url": "https://github.com/huggingface/transformers/pull/28527",
"diff_url": "https://github.com/huggingface/transformers/pull/28527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28527.patch",
"merged_at": "2024-01-16T15:37:16"
} |
https://api.github.com/repos/huggingface/transformers/issues/28526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28526/comments | https://api.github.com/repos/huggingface/transformers/issues/28526/events | https://github.com/huggingface/transformers/pull/28526 | 2,083,550,366 | PR_kwDOCUB6oc5kK-cX | 28,526 | Fix labels encoding in RobertaForSequenceClassification when problem_type="multi_label_classification" | {
"login": "DamienAllonsius",
"id": 11852475,
"node_id": "MDQ6VXNlcjExODUyNDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/11852475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DamienAllonsius",
"html_url": "https://github.com/DamienAllonsius",
"followers_url": "https://api.github.com/users/DamienAllonsius/followers",
"following_url": "https://api.github.com/users/DamienAllonsius/following{/other_user}",
"gists_url": "https://api.github.com/users/DamienAllonsius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DamienAllonsius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DamienAllonsius/subscriptions",
"organizations_url": "https://api.github.com/users/DamienAllonsius/orgs",
"repos_url": "https://api.github.com/users/DamienAllonsius/repos",
"events_url": "https://api.github.com/users/DamienAllonsius/events{/privacy}",
"received_events_url": "https://api.github.com/users/DamienAllonsius/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T10:17:34 | 2024-01-16T18:01:42 | 2024-01-16T18:01:42 | NONE | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Here is a simple script that illustrates the problem
```python
import this
from transformers import (AutoConfig,
                          RobertaForSequenceClassification,
                          RobertaTokenizerFast, Trainer, TrainingArguments)
from itertools import cycle
from datasets import Dataset


def main():
    # dataset
    print("dataset")
    text = this.s.split("\n")
    num_labels = 4
    labels = [int(cycle("1234").__next__()) for _ in range(len(text))]
    ds = Dataset.from_dict({"text": text, "label": labels})
    ds = ds.train_test_split(test_size=0.3)
    output_folder_path = "/tmp/roberta"

    # model and parameters
    print("model and parameters")
    model_id = "distilroberta-base"
    config = AutoConfig.from_pretrained(model_id)
    config.problem_type = "multi_label_classification"
    config.num_labels = num_labels
    model = RobertaForSequenceClassification.from_pretrained(
        model_id, config=config
    )
    args = {
        "batch_size": 100,
        "tokenizer_max_length": 512,
        "training_args": {
            "num_train_epochs": 2,
            "learning_rate": 1e-5,
            "warmup_steps": 500,
            "report_to": "none",
        },
    }

    # tokenizer
    print("tokenizer")
    tokenizer = RobertaTokenizerFast.from_pretrained(model_id)

    def tokenize(batch):
        return tokenizer(batch["text"], padding=True, truncation=True, max_length=args["tokenizer_max_length"])

    ds = ds.map(tokenize, batched=True, batch_size=args["batch_size"])
    ds.set_format("torch", columns=["input_ids", "attention_mask", "label"])

    # Training
    print("training")
    training_args = TrainingArguments(
        output_dir=output_folder_path,
        **args["training_args"]
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=ds["train"],
        eval_dataset=ds["test"],
    )
    trainer.train(resume_from_checkpoint=False)


if __name__ == "__main__":
    main()
```
Output error is
```
ValueError: Target size (torch.Size([2])) must be the same as input size (torch.Size([2, 4]))
```
This happens because `transformers/models/roberta/modeling_roberta.py` (L1236) expects the labels to be one-hot encoded.
The code in this PR solves this issue.
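For illustration, here is a minimal plain-Python sketch of the idea (this is not the actual diff in this PR, and the helper name `one_hot_labels` is made up): under `problem_type="multi_label_classification"`, `BCEWithLogitsLoss` wants float targets of shape `[batch_size, num_labels]`, which one-hot encoding the integer class ids produces.

```python
# Hypothetical helper (sketched in plain Python for clarity): turn integer
# class ids into float one-hot rows of shape [batch_size, num_labels],
# the target layout BCEWithLogitsLoss expects for multi-label classification.
def one_hot_labels(label_ids, num_labels):
    return [
        [1.0 if j == label else 0.0 for j in range(num_labels)]
        for label in label_ids
    ]

print(one_hot_labels([1, 3, 0], 4))
# [[0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0]]
```

In practice the rows would be turned into a float tensor before being passed to the loss.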
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28526/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28526",
"html_url": "https://github.com/huggingface/transformers/pull/28526",
"diff_url": "https://github.com/huggingface/transformers/pull/28526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28526.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28525/comments | https://api.github.com/repos/huggingface/transformers/issues/28525/events | https://github.com/huggingface/transformers/issues/28525 | 2,083,493,702 | I_kwDOCUB6oc58L5dG | 28,525 | [Whisper] TFWhisperFromPretrained : Can we run transcription by using the call method instead of generate from the transformers class | {
"login": "monowaranjum",
"id": 19803082,
"node_id": "MDQ6VXNlcjE5ODAzMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/19803082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monowaranjum",
"html_url": "https://github.com/monowaranjum",
"followers_url": "https://api.github.com/users/monowaranjum/followers",
"following_url": "https://api.github.com/users/monowaranjum/following{/other_user}",
"gists_url": "https://api.github.com/users/monowaranjum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monowaranjum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monowaranjum/subscriptions",
"organizations_url": "https://api.github.com/users/monowaranjum/orgs",
"repos_url": "https://api.github.com/users/monowaranjum/repos",
"events_url": "https://api.github.com/users/monowaranjum/events{/privacy}",
"received_events_url": "https://api.github.com/users/monowaranjum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-16T09:53:34 | 2024-01-19T04:08:56 | 2024-01-19T04:08:56 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @gante @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a short script to reproduce the issue.
```
import tensorflow as tf
import numpy as np
from transformers import AutoProcessor, TFWhisperForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
import librosa
processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
base_asr_model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Read some inputs
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="tf")
input_features = inputs.input_features
# Generate some predictions
generated_ids = base_asr_model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
# Save the model
base_asr_model.save('./whisper-saved-unsigned')
# Load the model from saved state
loaded_base_asr_model = tf.keras.models.load_model('./whisper-saved-unsigned')
# Try running inference on the loaded model
new_generated_ids = loaded_base_asr_model.generate(input_features = input_features) # <-- This won't work
transcription = processor.batch_decode(new_generated_ids, skip_special_tokens=True)[0]
print(transcription)
```
The script fails for the second call of ```generate()``` function with the following error:
```
Traceback (most recent call last):
File "/home/rashik/Documents/reproduction/reproduction.py", line 31, in <module>
new_generated_ids = loaded_base_asr_model.generate(input_features = input_features) # <-- This won't work
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'TFWhisperForConditionalGeneration' object has no attribute 'generate'
```
### Expected behavior
I expected the loaded model to behave exactly like the original model. I listed all the attributes of the loaded model using ```dir(loaded_base_asr_model)```. Here is a screenshot of the output:

On the other hand, I did the same for the original model. Here is the screenshot of that output:

Clearly, I am missing something about how the model is saved and how it is loaded later.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28525/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28524/comments | https://api.github.com/repos/huggingface/transformers/issues/28524/events | https://github.com/huggingface/transformers/issues/28524 | 2,083,465,489 | I_kwDOCUB6oc58LykR | 28,524 | Exception in inference when using the pipeline with output_scores=True to get logits | {
"login": "andersonm-ibm",
"id": 63074550,
"node_id": "MDQ6VXNlcjYzMDc0NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/63074550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonm-ibm",
"html_url": "https://github.com/andersonm-ibm",
"followers_url": "https://api.github.com/users/andersonm-ibm/followers",
"following_url": "https://api.github.com/users/andersonm-ibm/following{/other_user}",
"gists_url": "https://api.github.com/users/andersonm-ibm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersonm-ibm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersonm-ibm/subscriptions",
"organizations_url": "https://api.github.com/users/andersonm-ibm/orgs",
"repos_url": "https://api.github.com/users/andersonm-ibm/repos",
"events_url": "https://api.github.com/users/andersonm-ibm/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersonm-ibm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T09:41:58 | 2024-01-16T11:42:13 | 2024-01-16T11:42:13 | NONE | null | ### System Info
- `transformers` version: 4.29.1
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Easily reproduced by [using the pipeline with output_scores=True](https://gist.github.com/andersonm-ibm/d8baeea66afca89cefebc108f1ce08f3)
Results in:
```
test_transformers_bug.py:27: in <module>
for out in pipe(KeyDataset(dataset, "text")):
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:124: in __next__
item = next(self.iterator)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:125: in __next__
processed = self.infer(item, **self.params)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/base.py:1025: in forward
model_outputs = self._forward(model_inputs, **forward_params)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:264: in _forward
out_b = generated_sequence.shape[0]
E AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'shape'
```
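The failing line treats whatever `generate` returned as a bare tensor. A hedged sketch of the kind of defensive unwrapping that would avoid the crash (names here are hypothetical, not the pipeline's actual code): when generation returns a `ModelOutput`-style object, the token ids live under its `sequences` attribute.

```python
# Hypothetical sketch: if generate() returned an output object (as it does
# with output_scores=True / return_dict_in_generate=True), take its
# `.sequences`; otherwise assume it is already the generated id tensor.
def unwrap_generated(generated):
    return getattr(generated, "sequences", generated)


# Stand-in for GreedySearchEncoderDecoderOutput, just for demonstration.
class FakeGreedySearchOutput:
    def __init__(self, sequences):
        self.sequences = sequences


print(unwrap_generated(FakeGreedySearchOutput([[0, 1, 2]])))  # [[0, 1, 2]]
print(unwrap_generated([[0, 1, 2]]))                          # [[0, 1, 2]]
```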
### Expected behavior
Pipeline should handle the case where the model output is a `GreedySearchEncoderDecoderOutput` and not a simple tensor, without raising exceptions, like in [the example](https://gist.github.com/andersonm-ibm/766d4892c92310a7889b2b3dfdc8ff44#file-model_generate_with_logits_output-py). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28524/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28523/comments | https://api.github.com/repos/huggingface/transformers/issues/28523/events | https://github.com/huggingface/transformers/issues/28523 | 2,083,462,217 | I_kwDOCUB6oc58LxxJ | 28,523 | Huggingface Agents Error 422: {'error': 'Input validation error: `max_new_tokens` must be <= 192 | {
"login": "dashapetr",
"id": 54349415,
"node_id": "MDQ6VXNlcjU0MzQ5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/54349415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dashapetr",
"html_url": "https://github.com/dashapetr",
"followers_url": "https://api.github.com/users/dashapetr/followers",
"following_url": "https://api.github.com/users/dashapetr/following{/other_user}",
"gists_url": "https://api.github.com/users/dashapetr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dashapetr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dashapetr/subscriptions",
"organizations_url": "https://api.github.com/users/dashapetr/orgs",
"repos_url": "https://api.github.com/users/dashapetr/repos",
"events_url": "https://api.github.com/users/dashapetr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dashapetr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-16T09:40:36 | 2024-01-22T16:43:44 | null | NONE | null | ### System Info
Transformers v4.29.0, v4.36.2
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run in Google Colab:
```
#@title Setup
transformers_version = "v4.36.2" # you can use "v4.29.0", the issue and output are the same
print(f"Setting up everything with transformers version {transformers_version}")
!pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/transformers@$transformers_version -q diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai
from huggingface_hub import notebook_login
notebook_login()
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", token='hf_my_token') # token is passed directly here to avoid the issue https://github.com/huggingface/transformers/issues/28217
agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!")
```
### Expected behavior
It should generate results, but instead, I am getting an error:
`ValueError: Error 422: {'error': 'Input validation error: `max_new_tokens` must be <= 192. Given: 200', 'error_type': 'validation'}`

To my mind, it seems like it could be related to a strict limitation on max_new_tokens [here](https://github.com/huggingface/transformers/blob/a7cab3c283312b8d4de5df3bbe719971e24f4281/src/transformers/tools/agents.py#L640C40-L640C40) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28523/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28522/comments | https://api.github.com/repos/huggingface/transformers/issues/28522/events | https://github.com/huggingface/transformers/pull/28522 | 2,083,404,188 | PR_kwDOCUB6oc5kKdzb | 28,522 | [`SpeechT5Tokenization`] Add copied from and fix the `convert_tokens_to_string` to match the fast decoding scheme | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T09:16:28 | 2024-01-16T15:50:03 | 2024-01-16T15:50:02 | COLLABORATOR | null | # What does this PR do?
Fixes #26547 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28522/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28522",
"html_url": "https://github.com/huggingface/transformers/pull/28522",
"diff_url": "https://github.com/huggingface/transformers/pull/28522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28522.patch",
"merged_at": "2024-01-16T15:50:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28521/comments | https://api.github.com/repos/huggingface/transformers/issues/28521/events | https://github.com/huggingface/transformers/pull/28521 | 2,083,219,859 | PR_kwDOCUB6oc5kJ0te | 28,521 | Add is_model_supported for fx | {
"login": "inisis",
"id": 46103969,
"node_id": "MDQ6VXNlcjQ2MTAzOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/46103969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inisis",
"html_url": "https://github.com/inisis",
"followers_url": "https://api.github.com/users/inisis/followers",
"following_url": "https://api.github.com/users/inisis/following{/other_user}",
"gists_url": "https://api.github.com/users/inisis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inisis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inisis/subscriptions",
"organizations_url": "https://api.github.com/users/inisis/orgs",
"repos_url": "https://api.github.com/users/inisis/repos",
"events_url": "https://api.github.com/users/inisis/events{/privacy}",
"received_events_url": "https://api.github.com/users/inisis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-16T07:52:06 | 2024-01-16T17:52:44 | 2024-01-16T17:52:44 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
symbolic_trace within transformers is only applicable to PreTrainedModel. By calling check_if_model_is_supported we can check whether the model to be traced is supported; however, this function raises if it is not. I think we can return True/False instead, so that others can call it from outside transformers.
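A minimal sketch of the proposed behaviour (the checker is passed in here only to keep the example self-contained and runnable; in `transformers` the raising check is `check_if_model_is_supported`):

```python
# Hypothetical sketch: wrap a raising support check so callers can probe
# support with a boolean instead of a try/except.
def is_model_supported(model, checker):
    try:
        checker(model)
    except Exception:
        return False
    return True


# Stand-in for check_if_model_is_supported, just for demonstration.
def fake_checker(model):
    if model != "supported-model":
        raise NotImplementedError(f"Model {model} is not traceable")


print(is_model_supported("supported-model", fake_checker))    # True
print(is_model_supported("unsupported-model", fake_checker))  # False
```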
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28521/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28521",
"html_url": "https://github.com/huggingface/transformers/pull/28521",
"diff_url": "https://github.com/huggingface/transformers/pull/28521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28521.patch",
"merged_at": "2024-01-16T17:52:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28520/comments | https://api.github.com/repos/huggingface/transformers/issues/28520/events | https://github.com/huggingface/transformers/pull/28520 | 2,083,197,751 | PR_kwDOCUB6oc5kJvxu | 28,520 | [ `TokenizationUtils`] Fix `add_special_tokens` when the token is already there | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-16T07:41:15 | 2024-01-16T15:36:30 | 2024-01-16T15:36:29 | COLLABORATOR | null | # What does this PR do?
Fixes #27888: the method was missing a check, which caused the special token list to be overwritten. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28520/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28520",
"html_url": "https://github.com/huggingface/transformers/pull/28520",
"diff_url": "https://github.com/huggingface/transformers/pull/28520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28520.patch",
"merged_at": "2024-01-16T15:36:29"
} |
https://api.github.com/repos/huggingface/transformers/issues/28519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28519/comments | https://api.github.com/repos/huggingface/transformers/issues/28519/events | https://github.com/huggingface/transformers/issues/28519 | 2,083,084,349 | I_kwDOCUB6oc58KVg9 | 28,519 | AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer' | {
"login": "pydaxing",
"id": 129026999,
"node_id": "U_kgDOB7DLtw",
"avatar_url": "https://avatars.githubusercontent.com/u/129026999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pydaxing",
"html_url": "https://github.com/pydaxing",
"followers_url": "https://api.github.com/users/pydaxing/followers",
"following_url": "https://api.github.com/users/pydaxing/following{/other_user}",
"gists_url": "https://api.github.com/users/pydaxing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pydaxing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pydaxing/subscriptions",
"organizations_url": "https://api.github.com/users/pydaxing/orgs",
"repos_url": "https://api.github.com/users/pydaxing/repos",
"events_url": "https://api.github.com/users/pydaxing/events{/privacy}",
"received_events_url": "https://api.github.com/users/pydaxing/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-16T06:09:04 | 2024-01-17T14:32:52 | null | NONE | null | ### System Info
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
### Who can help?
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
Transformers==4.34.0
### Expected behavior
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28519/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28518/comments | https://api.github.com/repos/huggingface/transformers/issues/28518/events | https://github.com/huggingface/transformers/issues/28518 | 2,082,976,347 | I_kwDOCUB6oc58J7Jb | 28,518 | KOSMOS-2 Entities giving null | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-16T03:58:25 | 2024-01-17T11:37:04 | null | NONE | null | ### System Info
google colab, T4
```
!pip install -q git+https://github.com/huggingface/transformers.git accelerate bitsandbytes
```
### Who can help?
@amy
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoProcessor, AutoModelForVision2Seq
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224", load_in_4bit=True, device_map={"":0})
import requests
from PIL import Image
prompt = "An image of"
url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png"
image = Image.open(requests.get(url, stream=True).raw)
image
inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda:0")
# autoregressively generate completion
generated_ids = model.generate(**inputs, max_new_tokens=128)
# convert generated token IDs back to strings
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
# By default, the generated text is cleaned up and the entities are extracted.
processed_text, entities = processor.post_process_generation(generated_text)
print(processed_text)
print(entities)
```
gives
```
An image of a snowman warming up by a fire.
[]
```
### Expected behavior
needs to give entities https://github.com/NielsRogge/Transformers-Tutorials/blob/master/KOSMOS-2/Inference_with_KOSMOS_2_for_multimodal_grounding.ipynb | {
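For context, KOSMOS-2 only emits location markup when the prompt starts with the `<grounding>` token (e.g. `prompt = "<grounding>An image of"`), and `post_process_generation` extracts entities from `<phrase>`/`<object>` tags in the raw output; with the plain prompt above no tags are generated, so the entity list is empty. A rough, simplified sketch of what that extraction does (the tag format is assumed from the model card; this is not the library's actual implementation):

```python
import re

def extract_entities(generated_text: str):
    """Pull (phrase, patch-index list) pairs out of KOSMOS-2-style markup.

    Assumed format: <phrase> text</phrase><object><patch_index_0044><patch_index_0863></object>
    """
    pattern = re.compile(
        r"<phrase>(.*?)</phrase><object>((?:<patch_index_\d+>)+)</object>"
    )
    entities = []
    for phrase, patches in pattern.findall(generated_text):
        indices = [int(i) for i in re.findall(r"<patch_index_(\d+)>", patches)]
        entities.append((phrase.strip(), indices))
    return entities

raw = (
    "<grounding> An image of<phrase> a snowman</phrase>"
    "<object><patch_index_0044><patch_index_0863></object> warming up by"
    "<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>"
)
print(extract_entities(raw))
# → [('a snowman', [44, 863]), ('a fire', [5, 911])]

# A completion generated without <grounding> contains no tags, hence []:
print(extract_entities("An image of a snowman warming up by a fire."))  # → []
```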
"url": "https://api.github.com/repos/huggingface/transformers/issues/28518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28518/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28517/comments | https://api.github.com/repos/huggingface/transformers/issues/28517/events | https://github.com/huggingface/transformers/pull/28517 | 2,082,922,067 | PR_kwDOCUB6oc5kI1nz | 28,517 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2024-01-16T02:39:12 | 2024-01-24T09:12:14 | 2024-01-24T09:12:14 | CONTRIBUTOR | null | # What does this PR do?
This PR implements excluding the load balancing loss of padding tokens in Mixtral-8x7B
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28505
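The core idea — computing the Switch-Transformers-style auxiliary loss only over real (non-padding) tokens — can be sketched in plain Python. This is a simplified top-1 scalar version for illustration, not the actual `load_balancing_loss_func` in `modeling_mixtral.py`; the names and exact reduction are assumptions:

```python
def load_balancing_loss(router_probs, expert_choices, attention_mask, num_experts):
    """Aux loss = num_experts * sum_e (fraction routed to e) * (mean prob of e),
    evaluated over real tokens only.

    router_probs:   per-token softmax over experts, shape [tokens][experts]
    expert_choices: index of the top-1 expert chosen per token
    attention_mask: 1 for real tokens, 0 for padding
    """
    real = [i for i, m in enumerate(attention_mask) if m == 1]
    n = len(real)
    loss = 0.0
    for e in range(num_experts):
        # fraction of real tokens dispatched to expert e
        frac_tokens = sum(1 for i in real if expert_choices[i] == e) / n
        # mean router probability assigned to expert e over real tokens
        mean_prob = sum(router_probs[i][e] for i in real) / n
        loss += frac_tokens * mean_prob
    return num_experts * loss

probs = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.99, 0.01]]
choices = [0, 1, 0, 0]
mask = [1, 1, 1, 0]  # last token is padding and must not contribute
print(load_balancing_loss(probs, choices, mask, num_experts=2))
# ≈ 1.022 (a perfectly balanced router gives exactly 1.0)
```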
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28517/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28517",
"html_url": "https://github.com/huggingface/transformers/pull/28517",
"diff_url": "https://github.com/huggingface/transformers/pull/28517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28517.patch",
"merged_at": "2024-01-24T09:12:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28516/comments | https://api.github.com/repos/huggingface/transformers/issues/28516/events | https://github.com/huggingface/transformers/issues/28516 | 2,082,719,913 | I_kwDOCUB6oc58I8ip | 28,516 | EarlyStoppingCallback Not Working with Accelerate | {
"login": "superleesa",
"id": 88019950,
"node_id": "MDQ6VXNlcjg4MDE5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/88019950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superleesa",
"html_url": "https://github.com/superleesa",
"followers_url": "https://api.github.com/users/superleesa/followers",
"following_url": "https://api.github.com/users/superleesa/following{/other_user}",
"gists_url": "https://api.github.com/users/superleesa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/superleesa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/superleesa/subscriptions",
"organizations_url": "https://api.github.com/users/superleesa/orgs",
"repos_url": "https://api.github.com/users/superleesa/repos",
"events_url": "https://api.github.com/users/superleesa/events{/privacy}",
"received_events_url": "https://api.github.com/users/superleesa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-15T21:51:46 | 2024-01-19T15:36:07 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-1042-gcp-x86_64-with-glibc2.38
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Train Llama 2 (I used a Llama 2 fine-tuned for Japanese) with Accelerate (DDP across four GPUs), using a script that uses the Trainer API and `EarlyStoppingCallback`:
1. Please download the scripts I used: [finetune.py and data_loader.py](https://github.com/superleesa/dump)
2. On a terminal, run `accelerate launch finetune.py`

Note: I ran this without any Accelerate configuration.
### Expected behavior
I'm fine-tuning Llama 2 with Accelerate, using DDP across four GPUs and a script that uses the Trainer API and `EarlyStoppingCallback`. Whenever I run the code, after a few iterations of training I get the following error:
```
[E ProcessGroupNCCL.cpp:475] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800228 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800550 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800903 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
972ee88fbdea:1599:1757 [0] NCCL INFO [Service thread] Connection closed by localRank 0
972ee88fbdea:1599:1724 [0] NCCL INFO comm 0x1c635a70 rank 0 nranks 4 cudaDev 0 busId 40 - Abort COMPLETE
[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:916] [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1600 closing signal SIGTERM
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1601 closing signal SIGTERM
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1602 closing signal SIGTERM
[2024-01-15 15:57:12,976] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0
```
Some insights:
- I first searched online for the error; some people suggested disabling NCCL P2P and increasing the timeout threshold, but neither worked.
- Then, **when I removed the `EarlyStoppingCallback` the code worked fine, so the problem must be related to `EarlyStoppingCallback`**. Although I use lots of other components in the code, including LoRA and 4-bit quantization, the error must come from the callback.
- Because the error only happens when the training loop should stop via early stopping (i.e. the validation loss does not improve for the specified patience number of evaluations), I suspect the training is unable to exit the loop.
- I read the source code and found that when early stopping triggers, `control.should_training_stop` is set to `True`; however, within the training loop in the `Trainer` class, I believe this is not handled properly for training on multiple GPUs.
- In particular, I suspect the loop is not broken consistently across processes, which clashes with how Accelerate's collective operations work.
- I assume this is basically the same problem as [discussed here](https://discuss.huggingface.co/t/early-stopping-for-eval-loss-causes-timeout/51349); the only difference is that he is using his own training script, while I'm using the `Trainer`.
- To fix this, I think we need to use `accelerator.set_trigger` and `accelerator.check_trigger` within the [_inner_training_loop function in the Trainer class](https://github.com/huggingface/transformers/blob/7e0ddf89f483f53107870cddabb2e1cc93069705/src/transformers/trainer.py#L1933C1-L1934C26), as they were used to fix the problem in the discussion linked above.
- **So I believe the real problem here is not `EarlyStoppingCallback` itself but the condition used to exit the training loop.**
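The fix discussed above relies on a cross-process trigger (if I'm reading the Accelerate docs right, the API is `Accelerator.set_trigger()` / `Accelerator.check_trigger()`): each rank can raise a flag, the flags are combined with an all-reduce, and every rank then breaks out of the loop at the same step, so no rank is left waiting in a collective. A GPU-free toy simulation of that pattern (the class below is a stand-in, not Accelerate code):

```python
class ToyAllReduceTrigger:
    """Simulates a distributed trigger: a flag any rank can set, combined across ranks."""

    def __init__(self, world_size):
        self.flags = [0] * world_size

    def set_trigger(self, rank):
        self.flags[rank] = 1

    def check_trigger(self):
        # all-reduce with max: True as soon as ANY rank has set its flag
        return max(self.flags) == 1


def train(world_size=4, stop_rank=0, stop_step=3, max_steps=10):
    trigger = ToyAllReduceTrigger(world_size)
    last_step = {}
    for step in range(max_steps):
        for rank in range(world_size):
            last_step[rank] = step
            # only one rank decides on early stopping, mirroring a Trainer callback
            if rank == stop_rank and step == stop_step:
                trigger.set_trigger(rank)
        # every rank checks the SAME combined flag, so they all break together
        if trigger.check_trigger():
            break
    return last_step


print(train())  # every rank stops at step 3
```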
"url": "https://api.github.com/repos/huggingface/transformers/issues/28516/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28516/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28515/comments | https://api.github.com/repos/huggingface/transformers/issues/28515/events | https://github.com/huggingface/transformers/issues/28515 | 2,082,611,854 | I_kwDOCUB6oc58IiKO | 28,515 | AttributeError: 'LlamaForCausalLM' object has no attribute 'merge_and_unload' | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-15T20:01:04 | 2024-01-19T10:17:01 | null | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the TinyLlama model with LoRA training, as in unsloth's [colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) shown in the readme, and trying to call `merge_and_unload` after training, I get the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[43], line 1
----> 1 model = model.merge_and_unload()
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1695, in Module.__getattr__(self, name)
1693 if name in modules:
1694 return modules[name]
-> 1695 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LlamaForCausalLM' object has no attribute 'merge_and_unload'
```
### Expected behavior
To be able to merge the adapters. | {
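For anyone hitting this: `merge_and_unload` is defined on the `PeftModel` wrapper, not on the underlying `LlamaForCausalLM`, so it has to be called on the object returned by `get_peft_model` / `PeftModel.from_pretrained` before any unwrapping (e.g. before saving and reloading the base model). A minimal stand-in with plain classes (no PEFT involved) showing why the attribute disappears:

```python
class BaseCausalLM:
    """Stand-in for LlamaForCausalLM: has no adapter-merging method."""


class PeftWrapper:
    """Stand-in for peft.PeftModel, which is what owns merge_and_unload."""

    def __init__(self, base):
        self.base_model = base

    def merge_and_unload(self):
        # in real PEFT this folds the LoRA weights into the base weights
        return self.base_model


wrapped = PeftWrapper(BaseCausalLM())
merged = wrapped.merge_and_unload()          # fine: called on the wrapper
print(hasattr(merged, "merge_and_unload"))   # False: the base model never had it
```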
"url": "https://api.github.com/repos/huggingface/transformers/issues/28515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28515/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28514/comments | https://api.github.com/repos/huggingface/transformers/issues/28514/events | https://github.com/huggingface/transformers/pull/28514 | 2,082,547,057 | PR_kwDOCUB6oc5kHlXS | 28,514 | Config: warning when saving generation kwargs in the model config | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-15T18:58:42 | 2024-01-17T13:28:35 | 2024-01-16T18:31:02 | MEMBER | null | # What does this PR do?
## Context
`generate` is ideally controlled by a `GenerationConfig`. However, to remain retrocompatible, a `PretrainedConfig` may control `generate` in the following conditions:
1. `generate` does not receive a `generation_config` argument; AND
2. `model.generation_config._from_model_config is True`, which means the user never manually created a `GenerationConfig`; AND
3. the user has not modified `model.generation_config` since it was created, which (together with 2.) means that `model.generation_config` holds a copy of the generation parameterization in `model.config` at init time; AND
4. [added in this PR] the model config holds non-default generation kwargs, which means there is some intent to control generation through the model config
Having the legacy behavior active essentially means there are two places to control generation, which has been causing some GH issues. We can't get rid of it (we would have to submit PRs to thousands of models), but we can be more persuasive in slowly shifting new models entirely towards the `GenerationConfig`. This should help with documentation, ease of use across tasks such as fine-tuning modern models, as well as reducing the number of occurrences of the legacy behavior warning (see [1:03 in this video](https://twitter.com/reach_vb/status/1736471172970086792) -- many @TheBloke models suffer from it, as `max_length` is set in the model config and not in the generation config).
## This PR
This PR adds:
1. Clause 4. in the context list above, to avoid showing the warning when there was no intent of controlling `generate` through `model.config`
2. Two future deprecation warnings in the following legacy-triggering situations:
a. When saving a `PretrainedConfig` with non-default generation attributes, which demonstrates an intent to control `generate` through it. Users are nudged towards using `GenerationConfig` instead;
b. When saving a model where `model.generation_config` is built from `model.config`, but `model.config`'s generation attributes have been modified since the creation of `model.generation_config` (i.e. the two hold different `generate` parameterization). Users are nudged towards creating a brand new `GenerationConfig` instead. | {
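The four clauses in the context list combine as a single AND, which can be written down directly. This is a paraphrase of the control flow for readability, not the actual `generate` source (the function and parameter names are illustrative):

```python
def legacy_model_config_controls_generate(
    generation_config_arg,               # clause 1: generation_config passed to generate()?
    from_model_config: bool,             # clause 2: model.generation_config._from_model_config
    generation_config_modified: bool,    # clause 3: user touched model.generation_config?
    config_has_generation_kwargs: bool,  # clause 4 (this PR): non-default kwargs in model.config
) -> bool:
    return (
        generation_config_arg is None
        and from_model_config
        and not generation_config_modified
        and config_has_generation_kwargs
    )


# A model config with no generation intent no longer triggers the legacy path:
print(legacy_model_config_controls_generate(None, True, False, False))  # False
# ...while a config carrying e.g. a non-default max_length still does:
print(legacy_model_config_controls_generate(None, True, False, True))   # True
```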
"url": "https://api.github.com/repos/huggingface/transformers/issues/28514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28514/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28514",
"html_url": "https://github.com/huggingface/transformers/pull/28514",
"diff_url": "https://github.com/huggingface/transformers/pull/28514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28514.patch",
"merged_at": "2024-01-16T18:31:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28513/comments | https://api.github.com/repos/huggingface/transformers/issues/28513/events | https://github.com/huggingface/transformers/pull/28513 | 2,082,412,751 | PR_kwDOCUB6oc5kHIwg | 28,513 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-15T17:07:06 | 2024-01-16T02:35:57 | 2024-01-16T02:35:57 | CONTRIBUTOR | null |
# What does this PR do?
This PR implements excluding the load balancing loss of padding tokens in Mixtral-8x7B
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28505 (https://github.com/huggingface/transformers/issues/28505)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28513/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28513",
"html_url": "https://github.com/huggingface/transformers/pull/28513",
"diff_url": "https://github.com/huggingface/transformers/pull/28513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28513.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28512/comments | https://api.github.com/repos/huggingface/transformers/issues/28512/events | https://github.com/huggingface/transformers/issues/28512 | 2,082,400,458 | I_kwDOCUB6oc58HujK | 28,512 | AMP autocast not invoked with CUDA 11.8 build of Pytorch | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-15T16:58:22 | 2024-01-16T16:35:36 | null | NONE | null | ### System Info
pytorch 2.1 + CUDA 11.8
transformers 4.36.2
accelerate 0.26.0
### Who can help?
@pacman100 , @muellerz
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
in `Trainer.autocast_smart_context_manager()`, only CPU AMP is supported, and CUDA autocast wrapper is managed by `accelerate` when training starts. This design works with pytorch 2.1 built with CUDA 12.1, but not with the CUDA 11.8 version.
### Expected behavior
CUDA AMP works with torch 2.1+CUDA 11.8
My simple fix is as follows:
- add a `force_cuda_amp` flag to `TrainingArguments` to tell the code to enable CUDA AMP autocast
- override `Trainer.autocast_smart_context_manager()` to return a CUDA AMP autocast context if `force_cuda_amp` is set.
A more systematic solution (more like a hack) is to detect the CUDA version when the Trainer is initialized; if CUDA is < 12, enable this flag automatically.
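A minimal, pure-Python sketch of the proposed automatic detection (the helper name and the version-parsing logic are illustrative assumptions, not the actual Trainer code):

```python
from typing import Optional

def should_force_cuda_amp(cuda_version: Optional[str]) -> bool:
    """Decide whether the Trainer should force a CUDA autocast context.

    `cuda_version` is the string `torch.version.cuda` reports for the
    build, e.g. "11.8" or "12.1"; None means a CPU-only build.
    """
    if cuda_version is None:
        return False
    major = int(cuda_version.split(".")[0])
    # Proposed heuristic: builds against CUDA < 12 do not get autocast
    # from accelerate, so force it inside the Trainer instead.
    return major < 12

print(should_force_cuda_amp("11.8"))  # True
print(should_force_cuda_amp("12.1"))  # False
```

In the Trainer this boolean would simply gate which context manager `autocast_smart_context_manager()` returns.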
~~Edit: my fix resulted in NaN so no fix yet~~
Edit 2: my fix actually worked. The NaN problem came from the hidden `_fast_init` flag of `from_pretrained`, which caused some new modules not to be properly initialized. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28512/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28511/comments | https://api.github.com/repos/huggingface/transformers/issues/28511/events | https://github.com/huggingface/transformers/pull/28511 | 2,082,340,540 | PR_kwDOCUB6oc5kG5aK | 28,511 | Add a use_safetensors arg to TFPreTrainedModel.from_pretrained() | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-15T16:18:35 | 2024-01-15T17:00:56 | 2024-01-15T17:00:55 | MEMBER | null | PyTorch's `from_pretrained()` method has a `use_safetensors` argument. Our TF code doesn't, and just always tries safetensors if available. This PR adds the argument to match the PyTorch API. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28511/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28511",
"html_url": "https://github.com/huggingface/transformers/pull/28511",
"diff_url": "https://github.com/huggingface/transformers/pull/28511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28511.patch",
"merged_at": "2024-01-15T17:00:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/28510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28510/comments | https://api.github.com/repos/huggingface/transformers/issues/28510/events | https://github.com/huggingface/transformers/issues/28510 | 2,082,098,659 | I_kwDOCUB6oc58Gk3j | 28,510 | With deepspeed zero3 enabled, loading from_pretrained() and resize_token_embeddings() do not work correctly | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-15T14:02:26 | 2024-01-17T17:00:12 | null | NONE | null | ### System Info
torch 2.1.1 - CUDA 12.1
transformers 4.36.2
accelerate 0.26.0
deepspeed 0.12.3
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This problem exists in the `PreTrainedModel` class in `modeling_utils.py` and would affect any code.
With deepspeed enabled, the model is wrapped by the deepspeed engine, and the normal model parameters `weight` and `bias` are changed: they are empty, with shape = torch.Size([0]), and the actual weights are stored in the `ds_tensor` attributes of `weight` and `bias`, respectively. This leads to a few problems in `modeling_utils.py`:
- Calling `model.state_dict().keys()` to get the expected model parameters uses pytorch Module's original `state_dict()`, which, with deepspeed enabled, fails to return all parameter keys.
- Checking mismatched keys: `state_dict[checkpoint_key].shape != model_state_dict[model_key].shape`. Here `model_state_dict[model_key].shape` is 0, so the check misfires and matched keys become mismatched. This causes matched keys to be removed from the checkpoint's state_dict, so those parameters' weights are never loaded.
- `Tied_params`: should call accelerate's `find_tied_parameters()` to search for tied parameters when deepspeed is enabled, instead of relying on `model.state_dict().items()`.
- `resize_token_embedding()`:
- when creating new_embedding, this call is not wrapped in a deepspeed context, so the new_embedding is not managed by deepspeed.
- With the above fixed, before tying weights, the `embedding.shape` check must be wrapped in a deepspeed `GatheredParameters()` context.
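The shape check could be guarded against ZeRO-3's empty placeholder shapes. Below is a dependency-free sketch of that idea (shapes are modeled as plain tuples and the helper name `find_mismatched_keys` is illustrative; the actual fix in the linked commit gathers the partitioned weights with deepspeed's `GatheredParameters` context instead of skipping the comparison):

```python
def find_mismatched_keys(checkpoint_shapes, model_shapes, zero3_enabled):
    """Return checkpoint keys whose shape disagrees with the model's.

    Under ZeRO-3 a partitioned parameter reports shape (0,), so a naive
    comparison flags every matched key as mismatched and its weights are
    dropped from loading; skip the placeholder shapes instead.
    """
    mismatched = []
    for key, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes.get(key)
        if model_shape is None:
            continue  # key absent from the model, handled elsewhere
        if zero3_enabled and model_shape == (0,):
            continue  # placeholder; the real weight lives in ds_tensor
        if ckpt_shape != model_shape:
            mismatched.append(key)
    return mismatched

ckpt = {"embed.weight": (32000, 4096), "lm_head.weight": (32000, 4096)}
zero3_model = {"embed.weight": (0,), "lm_head.weight": (0,)}  # ZeRO-3 placeholders
print(find_mismatched_keys(ckpt, zero3_model, zero3_enabled=False))  # naive check: both flagged
print(find_mismatched_keys(ckpt, zero3_model, zero3_enabled=True))   # guarded: none flagged
```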
### Expected behavior
I made a fork of `transformers` and modified `modeling_utils.py` as in the following commit:
https://github.com/haixpham/transformers/commit/e300792ccb6fc53666b4971bab87ea7179a4e3bb
I would love to hear any feedback about my changes. I checked and compared the result values with/without deepspeed and they appeared similar. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28510/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28510/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28509/comments | https://api.github.com/repos/huggingface/transformers/issues/28509/events | https://github.com/huggingface/transformers/pull/28509 | 2,081,998,273 | PR_kwDOCUB6oc5kFvWH | 28,509 | SiLU activation wrapper for safe importing | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-15T13:04:59 | 2024-01-15T19:37:06 | 2024-01-15T19:37:00 | COLLABORATOR | null | # What does this PR do?
The custom implementation of `SiLUActivation` was removed in #27136. This causes two issues:
1. Users unable to unpickle objects - c.f. #28177
2. Users unable to import the class - c.f. #28496
For 1. - the unpickling of modified transformers models through torch.load isn't something we officially support. However, this will (temporarily) provide an equivalent class.
For 2 - provides a class with a deprecation warning that can be imported
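A dependency-free sketch of the wrapper pattern (the real class in the PR subclasses `nn.SiLU`; here the activation is computed with `math.exp` so the example stays self-contained, and the warning text is illustrative):

```python
import math
import warnings

def silu(x: float) -> float:
    """SiLU / swish activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

class SiLUActivation:
    """Deprecated alias kept so that old imports and pickles keep working."""

    def __init__(self):
        warnings.warn(
            "SiLUActivation is deprecated; use nn.SiLU instead.",
            FutureWarning,
        )

    def __call__(self, x: float) -> float:
        return silu(x)

act = SiLUActivation()  # emits a FutureWarning on construction
print(round(act(1.0), 6))  # 0.731059
```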
Fixes #28177 #28496
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28509/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28509",
"html_url": "https://github.com/huggingface/transformers/pull/28509",
"diff_url": "https://github.com/huggingface/transformers/pull/28509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28509.patch",
"merged_at": "2024-01-15T19:37:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/28508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28508/comments | https://api.github.com/repos/huggingface/transformers/issues/28508/events | https://github.com/huggingface/transformers/pull/28508 | 2,081,991,253 | PR_kwDOCUB6oc5kFtya | 28,508 | Fix `_speculative_sampling` implementation | {
"login": "ofirzaf",
"id": 18296312,
"node_id": "MDQ6VXNlcjE4Mjk2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofirzaf",
"html_url": "https://github.com/ofirzaf",
"followers_url": "https://api.github.com/users/ofirzaf/followers",
"following_url": "https://api.github.com/users/ofirzaf/following{/other_user}",
"gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions",
"organizations_url": "https://api.github.com/users/ofirzaf/orgs",
"repos_url": "https://api.github.com/users/ofirzaf/repos",
"events_url": "https://api.github.com/users/ofirzaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/ofirzaf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2024-01-15T13:00:54 | 2024-01-19T21:59:18 | 2024-01-19T14:07:32 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Current implementation of `_speculative_sampling` accepts the draft model's tokens all the time due to a faulty test of the number of matches (`n_matches`). After fixing this issue, I found and fixed several more issues in the implementation, so that it reproduces the exact algorithm presented in the paper.
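For reference, the acceptance rule from the speculative sampling paper can be sketched deterministically in pure Python (toy probabilities; `uniform_draws` stands in for `torch.rand` so the run is reproducible, and this is an illustration of the algorithm rather than the transformers implementation):

```python
def count_matches(p_target, q_draft, uniform_draws):
    """Number of draft tokens accepted before the first rejection.

    Draft token i is accepted iff u_i < min(1, p_i / q_i), where p_i and
    q_i are the target/draft probabilities of that token; the first
    rejection stops the loop, and later draft tokens are discarded.
    """
    n_matches = 0
    for p_i, q_i, u_i in zip(p_target, q_draft, uniform_draws):
        if u_i < min(1.0, p_i / q_i):
            n_matches += 1
        else:
            break
    return n_matches

# The target agrees strongly with the first two draft tokens, then disagrees:
p = [0.9, 0.8, 0.1]
q = [0.9, 0.9, 0.8]
u = [0.5, 0.5, 0.5]
print(count_matches(p, q, u))  # 2
```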
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
@echarlaix
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28508/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28508",
"html_url": "https://github.com/huggingface/transformers/pull/28508",
"diff_url": "https://github.com/huggingface/transformers/pull/28508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28508.patch",
"merged_at": "2024-01-19T14:07:32"
} |
https://api.github.com/repos/huggingface/transformers/issues/28507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28507/comments | https://api.github.com/repos/huggingface/transformers/issues/28507/events | https://github.com/huggingface/transformers/pull/28507 | 2,081,721,163 | PR_kwDOCUB6oc5kEzd5 | 28,507 | Correct model_type in PretrainedConfig's to_dict | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-15T10:25:40 | 2024-01-16T15:04:00 | 2024-01-16T15:03:59 | COLLABORATOR | null | As per title, now
```python
from transformers import AutoConfig, PretrainedConfig
cfg = AutoConfig.from_pretrained("bert-base-uncased")
config = PretrainedConfig.from_dict(cfg.to_dict())
config.model_type = "my-model"
print(config.to_dict()["model_type"])
```
rightfully yields `my-model`, while it used to give `""` (the class attribute value of PretrainedConfig).
I think instance attributes (if any) should take precedence over class attributes.
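The precedence rule is ordinary Python attribute lookup; a minimal sketch (an illustrative toy class, not the actual `PretrainedConfig` code) of a `to_dict` that honors it:

```python
class Config:
    model_type = ""  # class-level default, as on PretrainedConfig

    def to_dict(self):
        # `self.model_type` resolves the instance attribute first and only
        # falls back to the class attribute, which is the behavior restored here.
        return {"model_type": self.model_type, **self.__dict__}

cfg = Config()
print(repr(cfg.to_dict()["model_type"]))  # '' (class default)
cfg.model_type = "my-model"
print(repr(cfg.to_dict()["model_type"]))  # 'my-model' (instance attribute wins)
```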
Related to https://github.com/huggingface/optimum/pull/1645 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28507/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28507",
"html_url": "https://github.com/huggingface/transformers/pull/28507",
"diff_url": "https://github.com/huggingface/transformers/pull/28507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28507.patch",
"merged_at": null
} |