url stringlengths 66 66 | repository_url stringclasses 1
value | labels_url stringlengths 80 80 | comments_url stringlengths 75 75 | events_url stringlengths 73 73 | html_url stringlengths 54 56 | id int64 2.03B 2.11B | node_id stringlengths 18 19 | number int64 27.9k 28.8k | title stringlengths 3 306 | user dict | labels list | state stringclasses 2
values | locked bool 1
class | assignee dict | assignees list | milestone null | comments int64 0 39 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4
values | active_lock_reason null | body stringlengths 19 42.4k ⌀ | reactions dict | timeline_url stringlengths 75 75 | performed_via_github_app null | state_reason stringclasses 3
values | draft bool 2
classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28808/comments | https://api.github.com/repos/huggingface/transformers/issues/28808/events | https://github.com/huggingface/transformers/issues/28808 | 2,111,643,277 | I_kwDOCUB6oc593R6N | 28,808 | It's an AlignModel or Deepspeed Zero3 bug. | {
"login": "necrophagists",
"id": 120618287,
"node_id": "U_kgDOBzB9Lw",
"avatar_url": "https://avatars.githubusercontent.com/u/120618287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/necrophagists",
"html_url": "https://github.com/necrophagists",
"followers_url": "https://api.github.com/users/necrophagists/followers",
"following_url": "https://api.github.com/users/necrophagists/following{/other_user}",
"gists_url": "https://api.github.com/users/necrophagists/gists{/gist_id}",
"starred_url": "https://api.github.com/users/necrophagists/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/necrophagists/subscriptions",
"organizations_url": "https://api.github.com/users/necrophagists/orgs",
"repos_url": "https://api.github.com/users/necrophagists/repos",
"events_url": "https://api.github.com/users/necrophagists/events{/privacy}",
"received_events_url": "https://api.github.com/users/necrophagists/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-02-01T06:18:02 | 2024-02-01T06:21:11 | null | NONE | null | ### System Info
When I try to load the AlignModel weights locally and train them using zero3, I get the following error:
```
File "/opt/licy/MyVLM/model/builder.py", line 152, in load_model
model =AlignModel.from_pretrained(self.args.vm_path)
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3307, in from_pretrained
) = cls._load_pretrained_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3559, in _load_pretrained_model
model.apply(model._initialize_weights)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 885, in apply
fn(self)
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1388, in _initialize_weights
self._init_weights(module)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/align/modeling_align.py", line 1189, in _init_weights
nn.init.xavier_uniform_(module.text_projection.weight)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/init.py", line 323, in xavier_uniform_
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/init.py", line 287, in _calculate_fan_in_and_fan_out
raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions")
```
Switching to ZeRO-2 doesn't produce an error; likewise, `ConvNextModel` and `CLIPVisionModel` don't report an error when trained under ZeRO-3, so I suspect there may be a bug in AlignModel.
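The error itself comes from torch's fan-in/fan-out computation, which requires at least a 2-D tensor; under ZeRO-3, `deepspeed.zero.Init` partitions parameters across ranks, so at `_init_weights` time `module.text_projection.weight` presumably shows up as a low-dimensional placeholder. A pure-Python sketch of the failing check (mirroring `torch.nn.init._calculate_fan_in_and_fan_out`; the `(640, 768)` shape is just an example):

```python
def calculate_fan_in_and_fan_out(shape):
    # mirrors torch.nn.init._calculate_fan_in_and_fan_out, but on a plain shape tuple
    if len(shape) < 2:
        raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions")
    num_output_fmaps, num_input_fmaps = shape[0], shape[1]
    receptive_field_size = 1
    for s in shape[2:]:
        receptive_field_size *= s
    return num_input_fmaps * receptive_field_size, num_output_fmaps * receptive_field_size

print(calculate_fan_in_and_fan_out((640, 768)))  # -> (768, 640): fine for a full 2-D weight

try:
    # a ZeRO-3 partitioned placeholder can have an empty/flat shape at init time
    calculate_fan_in_and_fan_out(())
except ValueError as err:
    print(err)
```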
@amyeroberts @pacman100 @muellerzr
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `model = AlignModel.from_pretrained(path)`
2. Use ZeRO-3 to train the model.
3. Get the error about `xavier_uniform_` initialization.
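For reference, step 2 can be made concrete with a minimal ZeRO-3 config of the kind passed to `TrainingArguments(deepspeed=...)` (a sketch; the exact config used isn't shown in the report):

```python
# minimal DeepSpeed ZeRO-3 config dict, as accepted by TrainingArguments(deepspeed=...)
ds_zero3_config = {
    "zero_optimization": {
        "stage": 3,
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}
```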
### Expected behavior
The expected behavior is to be able to load models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28808/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28807/comments | https://api.github.com/repos/huggingface/transformers/issues/28807/events | https://github.com/huggingface/transformers/issues/28807 | 2,111,597,249 | I_kwDOCUB6oc593GrB | 28,807 | GPT2 minicons surprisal: IndexError: index out of range in self | {
"login": "joyce9936",
"id": 119527282,
"node_id": "U_kgDOBx_Xcg",
"avatar_url": "https://avatars.githubusercontent.com/u/119527282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joyce9936",
"html_url": "https://github.com/joyce9936",
"followers_url": "https://api.github.com/users/joyce9936/followers",
"following_url": "https://api.github.com/users/joyce9936/following{/other_user}",
"gists_url": "https://api.github.com/users/joyce9936/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joyce9936/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joyce9936/subscriptions",
"organizations_url": "https://api.github.com/users/joyce9936/orgs",
"repos_url": "https://api.github.com/users/joyce9936/repos",
"events_url": "https://api.github.com/users/joyce9936/events{/privacy}",
"received_events_url": "https://api.github.com/users/joyce9936/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-02-01T05:38:02 | 2024-02-01T05:38:26 | null | NONE | null | ### System Info
I am trying to calculate surprisal values by feeding in a txt file with about 5000 sentences, but I encounter the following error: **IndexError: index out of range in self**. Can anyone help with this issue?
Thank you!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code:
<img width="1337" alt="Screenshot 2024-01-31 at 9 35 06 PM" src="https://github.com/huggingface/transformers/assets/119527282/486ae5fc-2a9f-4f04-a518-99fba53a7775">
Here is the error message:
<img width="1337" alt="Screenshot 2024-01-31 at 9 34 13 PM" src="https://github.com/huggingface/transformers/assets/119527282/8e50c012-276c-4a4d-810f-39a212280e55">
### Expected behavior
I would like to have the surprisal value for each word for the whole text file. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28807/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28806/comments | https://api.github.com/repos/huggingface/transformers/issues/28806/events | https://github.com/huggingface/transformers/pull/28806 | 2,111,564,467 | PR_kwDOCUB6oc5lpkdt | 28,806 | [docs] fix some bugs about parameter description | {
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-02-01T05:05:32 | 2024-02-01T05:05:32 | null | CONTRIBUTOR | null | # What does this PR do?
Fixes:
1. Fix missing spaces in parameter descriptions.
2. Add missing parameter descriptions.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28806/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28806",
"html_url": "https://github.com/huggingface/transformers/pull/28806",
"diff_url": "https://github.com/huggingface/transformers/pull/28806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28806.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28805/comments | https://api.github.com/repos/huggingface/transformers/issues/28805/events | https://github.com/huggingface/transformers/issues/28805 | 2,111,545,003 | I_kwDOCUB6oc59256r | 28,805 | sequence_bias feature is not working for Whisper ASR model. | {
"login": "vchagari",
"id": 10948110,
"node_id": "MDQ6VXNlcjEwOTQ4MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/10948110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vchagari",
"html_url": "https://github.com/vchagari",
"followers_url": "https://api.github.com/users/vchagari/followers",
"following_url": "https://api.github.com/users/vchagari/following{/other_user}",
"gists_url": "https://api.github.com/users/vchagari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vchagari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vchagari/subscriptions",
"organizations_url": "https://api.github.com/users/vchagari/orgs",
"repos_url": "https://api.github.com/users/vchagari/repos",
"events_url": "https://api.github.com/users/vchagari/events{/privacy}",
"received_events_url": "https://api.github.com/users/vchagari/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-02-01T04:47:45 | 2024-02-01T04:47:45 | null | NONE | null | ### System Info
Hi @sanchit-gandhi and @gante
The sequence_bias feature is not working. I tried an example similar to the one shown in the `SequenceBiasLogitsProcessor` class (https://github.com/huggingface/transformers/blob/7b2bd1fbbd50e57cf28013e2d0737912ecc0f2eb/src/transformers/generation/logits_process.py#L942) with a fine-tuned Whisper ASR HF model.
I recorded an audio clip and fed it to the Whisper ASR model with biasing terms as shown below; unfortunately, I didn't see any effect on the output.
**More details:**
Transformers Commit: 1c7e5e236823cd38faac8115f96205a82c17fff9
Test case (steps to reproduce the issue):
Audio contents: "The full name of Donald is Donald J. Trump Jr"
```py
from transformers import (
    WhisperFeatureExtractor,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0}

model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda")
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir)
processor = WhisperProcessor.from_pretrained(model_dir)

input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"), sequence_bias=sequence_bias, num_beams=4)
text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids]
transcript = text[0]
```
The output still came out as "The full name of Donald is Donald J Trump Jr".
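For context, `get_tokens_as_tuple` is not defined in the snippet above; the `SequenceBiasLogitsProcessor` docstring defines it roughly as below (shown here with the tokenizer passed explicitly rather than captured from the enclosing scope):

```python
def get_tokens_as_tuple(word, tokenizer):
    # token ids of `word` (no special tokens), as a hashable tuple usable as a dict key
    return tuple(tokenizer([word], add_special_tokens=False).input_ids[0])
```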
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Test case (steps to reproduce the issue):
Audio contents: "The full name of Donald is Donald J. Trump Jr"
```py
from transformers import (
    WhisperFeatureExtractor,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0}

model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda")
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir)
processor = WhisperProcessor.from_pretrained(model_dir)

input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"), sequence_bias=sequence_bias, num_beams=4)
text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids]
transcript = text[0]
```
### Expected behavior
The full name of Donald is Donald Duck. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28805/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28804/comments | https://api.github.com/repos/huggingface/transformers/issues/28804/events | https://github.com/huggingface/transformers/pull/28804 | 2,111,349,806 | PR_kwDOCUB6oc5lo0Uj | 28,804 | Add missing None check for hf_quantizer | {
"login": "jganitkevitch",
"id": 190837,
"node_id": "MDQ6VXNlcjE5MDgzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/190837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jganitkevitch",
"html_url": "https://github.com/jganitkevitch",
"followers_url": "https://api.github.com/users/jganitkevitch/followers",
"following_url": "https://api.github.com/users/jganitkevitch/following{/other_user}",
"gists_url": "https://api.github.com/users/jganitkevitch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jganitkevitch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jganitkevitch/subscriptions",
"organizations_url": "https://api.github.com/users/jganitkevitch/orgs",
"repos_url": "https://api.github.com/users/jganitkevitch/repos",
"events_url": "https://api.github.com/users/jganitkevitch/events{/privacy}",
"received_events_url": "https://api.github.com/users/jganitkevitch/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-02-01T02:20:00 | 2024-02-01T02:40:05 | null | NONE | null | Adds a None check for hf_quantizer that otherwise can blow up when `from_pretrained` is called with `quantization_config=None`.
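The PR diff itself isn't included in this body; the kind of guard it describes is presumably of this shape (a hypothetical sketch, not the actual patch; the function and call names here are illustrative only):

```python
def finalize_model(model, hf_quantizer=None):
    # hypothetical guard: only invoke the quantizer when one was actually created,
    # i.e. when quantization_config was not None
    if hf_quantizer is not None:
        model = hf_quantizer.postprocess_model(model)  # illustrative call name
    return model
```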
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28804/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28804",
"html_url": "https://github.com/huggingface/transformers/pull/28804",
"diff_url": "https://github.com/huggingface/transformers/pull/28804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28804.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28803/comments | https://api.github.com/repos/huggingface/transformers/issues/28803/events | https://github.com/huggingface/transformers/issues/28803 | 2,111,240,846 | I_kwDOCUB6oc591vqO | 28,803 | DeepSpeed ZeRO3 errors on config initialization | {
"login": "matthewdeng",
"id": 3967392,
"node_id": "MDQ6VXNlcjM5NjczOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3967392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewdeng",
"html_url": "https://github.com/matthewdeng",
"followers_url": "https://api.github.com/users/matthewdeng/followers",
"following_url": "https://api.github.com/users/matthewdeng/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewdeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewdeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewdeng/subscriptions",
"organizations_url": "https://api.github.com/users/matthewdeng/orgs",
"repos_url": "https://api.github.com/users/matthewdeng/repos",
"events_url": "https://api.github.com/users/matthewdeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewdeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-02-01T00:41:26 | 2024-02-01T00:42:17 | null | NONE | null | ### System Info
`transformers-cli env`:
- `transformers` version: 4.37.2
- Platform: Linux-6.2.0-1017-aws-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
**Relevant Dependencies:**
```
accelerate==0.26.1
deepspeed==0.12.3
ray==2.9.1
transformers==4.37.2
```
### Who can help?
@pacman100 @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running the following script on a `g4dn.12xlarge` instance.
```python
import torch.distributed
from transformers import AutoModel, TrainingArguments

from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_func():
    assert torch.distributed.is_initialized(), "Torch Distributed must be initialized."

    deepspeed_config = {
        "zero_optimization": {
            "stage": 3,
        },
        "train_batch_size": "auto",
        "train_micro_batch_size_per_gpu": "auto",
    }

    train_args = TrainingArguments(
        output_dir="./",
        deepspeed=deepspeed_config,
    )

    model = AutoModel.from_pretrained("bert-base-uncased")


trainer = TorchTrainer(
    train_loop_per_worker=train_func,
    scaling_config=ScalingConfig(
        num_workers=2,
        use_gpu=True,
    )
)
trainer.fit()
```
This errors with:
```
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/train/_internal/utils.py", line 118, in discard_return_wrapper
train_func(*args, **kwargs)
File "/home/ray/default/simple.py", line 22, in train_func
model = AutoModel.from_pretrained("bert-base-uncased")
File "/home/ray/anaconda3/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "/home/ray/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3583, in from_pretrained
init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 859, in __init__
_ds_config = deepspeed.runtime.config.DeepSpeedConfig(config_dict_or_path,
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 781, in __init__
self._configure_train_batch_size()
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 959, in _configure_train_batch_size
self._batch_assertion()
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 907, in _batch_assertion
assert train_batch == micro_batch * grad_acc * self.world_size, (
AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 16 != 8 * 1 * 1
```
I did some debugging and it seems like `world_size` is being set to 1 because `dist` is not initialized yet [here](https://github.com/microsoft/DeepSpeed/blob/24f20ef0a105d32f6085fe0d3b1c2f9324a6262c/deepspeed/runtime/config.py#L712-L720).
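The failing check can be reproduced in isolation: with the "auto" batch sizes resolved to a train batch of 16 across 2 workers, but `world_size` falling back to 1 because `dist` isn't initialized, the product no longer matches. A standalone sketch mirroring DeepSpeed's `_batch_assertion` (hypothetical helper, not library code):

```python
def batch_assertion(train_batch, micro_batch, grad_acc, world_size):
    # mirrors DeepSpeedConfig._batch_assertion as a free function
    assert train_batch == micro_batch * grad_acc * world_size, (
        f"train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step "
        f"* world_size {train_batch} != {micro_batch} * {grad_acc} * {world_size}"
    )

batch_assertion(16, 8, 1, 2)  # passes when dist is initialized and world_size == 2

try:
    batch_assertion(16, 8, 1, 1)  # world_size silently fell back to 1
except AssertionError as err:
    print(err)
```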
I also did some bisection and saw that the error started occurring in `transformers==4.30.0`
**Related Issues:**
- https://github.com/microsoft/DeepSpeed/issues/3341 - this seems to be the exact same issue, but I haven't looked deep enough to understand if the issue lies in DeepSpeed or Transformers or Accelerate.
### Expected behavior
The script should run without error, and the DeepSpeed distributed environment should be inherited from the existing Torch process group.
The issue does not occur if I use ZeRO2.
```diff
"zero_optimization": {
- "stage": 3,
+ "stage": 2,
},
```
The issue can also be mitigated by manually initializing the DeepSpeed distributed environment with `deepspeed.init_distributed()`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28803/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28803/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28802/comments | https://api.github.com/repos/huggingface/transformers/issues/28802/events | https://github.com/huggingface/transformers/pull/28802 | 2,111,123,817 | PR_kwDOCUB6oc5loDqD | 28,802 | [`BERT`] Add support for sdpa | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-31T22:55:28 | 2024-01-31T23:04:09 | null | CONTRIBUTOR | null | # What does this PR do?
Adding support for SDPA (scaled dot product attention) for Bert. More context in #28005
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
(cc @fxmarty) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28802/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28802",
"html_url": "https://github.com/huggingface/transformers/pull/28802",
"diff_url": "https://github.com/huggingface/transformers/pull/28802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28802.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28801/comments | https://api.github.com/repos/huggingface/transformers/issues/28801/events | https://github.com/huggingface/transformers/issues/28801 | 2,111,003,139 | I_kwDOCUB6oc5901oD | 28,801 | Conversational Pipeline returns <|im_end|> in the assistant's output. | {
"login": "OfficialDelta",
"id": 51007646,
"node_id": "MDQ6VXNlcjUxMDA3NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/51007646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OfficialDelta",
"html_url": "https://github.com/OfficialDelta",
"followers_url": "https://api.github.com/users/OfficialDelta/followers",
"following_url": "https://api.github.com/users/OfficialDelta/following{/other_user}",
"gists_url": "https://api.github.com/users/OfficialDelta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OfficialDelta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OfficialDelta/subscriptions",
"organizations_url": "https://api.github.com/users/OfficialDelta/orgs",
"repos_url": "https://api.github.com/users/OfficialDelta/repos",
"events_url": "https://api.github.com/users/OfficialDelta/events{/privacy}",
"received_events_url": "https://api.github.com/users/OfficialDelta/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-31T21:31:34 | 2024-02-01T00:09:55 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: True
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_config_file': '/workspace/zero3.json', 'zero3_init_flag': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.2.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to run inference on a custom fine-tuned `Mixtral-8x7B-Instruct-v0.1` model. The fine-tuning dataset I generated used the ChatML format for tokenizing the data, and when I run inference, the conversational pipeline returns the `<|im_end|>` text at the end.
Here is a minimal working example:
```py
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)
from peft import PeftModelForCausalLM

# load mixtral quantized because inferencing on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2",
    trust_remote_code=True, quantization_config=bnb_config,
)

# load the custom LoRA adapter for the fine-tuned chatml model
lora_model = PeftModelForCausalLM.from_pretrained(model, '/workspace/chatml-lora-checkpoint')

# load the tokenizer with the custom chatml format
tokenizer = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x7B-Instruct-v0.1')
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
tokenizer.pad_token = tokenizer.eos_token

# finally, load the pipeline and try inferencing
generator = pipeline("conversational", model=lora_model, tokenizer=tokenizer)
output = generator([
    {
        'role': 'user',
        'content': 'Hello, how are you today?'
    }
])
print(output)
```
Output:
```
Conversation id: 7dc0e9fd-9d79-49c8-b4e1-a01b6ed63c98
user: Hello, how are you today?
assistant: I'm an artificial intelligence. How can I assist you today?<|im_end|>
```
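For reference, the ChatML template assigned to `tokenizer.chat_template` above renders this single-turn conversation to the following prompt (a hand-rendered sketch of the Jinja template, with `add_generation_prompt=True`):

```python
messages = [{"role": "user", "content": "Hello, how are you today?"}]

# hand-render the ChatML template from the snippet above
prompt = ""
for m in messages:
    prompt += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>" + "\n"
prompt += "<|im_start|>assistant\n"  # add_generation_prompt=True

print(prompt)
```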
After troubleshooting, I noticed this in the `postprocess` function of the conversational pipeline:
```py
def postprocess(self, model_outputs, clean_up_tokenization_spaces=True):
    output_ids = model_outputs["output_ids"]
    answer = self.tokenizer.decode(
        output_ids[0],
        skip_special_tokens=True,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    )
    conversation = model_outputs["conversation"]
    conversation.add_message({"role": "assistant", "content": answer})
    return conversation
```
The decoded `answer` is produced with `skip_special_tokens=True`. So, to solve this issue, I considered adding `<|im_end|>` as a special token. However, the model itself wasn't trained on this token, and `<|im_end|>` was originally encoded as multiple tokens.
Before coming across this issue, I wanted to have the model treat `<|im_end|>` as a custom stopping token. In the process of implementing this, I realized that my model sometimes outputs `<|im_end|>` as `\n<|im_end|>` or `\n\n<|im_end|>` (with a variable number of `\n`'s), each of which is tokenized differently than `<|im_end|>` by itself.
```py
print({
    'no new line': tokenizer('<|im_end|>', add_special_tokens=False)['input_ids'],
    'one new line': tokenizer('\n<|im_end|>', add_special_tokens=False)['input_ids'],
    'two new lines': tokenizer('\n\n<|im_end|>', add_special_tokens=False)['input_ids']
})
```
```
{
    'no new line': [523, 28766, 321, 28730, 416, 28766, 28767],
    'one new line': [28705, 13, 28789, 28766, 321, 28730, 416, 28766, 28767],
    'two new lines': [28705, 13, 13, 28789, 28766, 321, 28730, 416, 28766, 28767]
}
```
Notice how with newlines the 523 token becomes 28789, preceded by 28705 and a number of 13s. This means that treating it as a special token is nearly impossible to reconcile with the intended behavior of having the end token ignored during post-processing regardless of newlines. The main way to make it work, at least to me, would be to add custom logic for processing the token that is capable of handling the newline tokens.
In order to combat this for my early stopping, I decided to take the easy way out and decode the tokenized `input_ids` to see if the end contained my custom stop token:
```py
import torch
from transformers import StoppingCriteria, StoppingCriteriaList


class StoppingCriteriaSub(StoppingCriteria):
    def __init__(self, stops=[], encounters=1, tokenizer=None):
        super().__init__()
        self.stops = stops
        self.ENCOUNTERS = encounters
        self.tokenizer = tokenizer
        assert tokenizer is not None, "Tokenizer is required"

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        stop_count = 0
        for input_ids_list in input_ids:
            for stop in self.stops:
                length = len(stop) + 5  # buffer for special tokens preceding stop
                if len(input_ids_list) < length:
                    continue
                last_elements = input_ids_list[-length:]
                decoded_elements = self.tokenizer.decode(last_elements)
                if stop in decoded_elements:
                    stop_count += 1
        if stop_count >= self.ENCOUNTERS:
            return True
        return False


stop_words = ["<|im_end|>"]
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words, tokenizer=tokenizer)])
stop_words = ["<|im_end|>"]
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words, tokenizer=tokenizer)])
```
The code above *works* but it doesn't feel like the best method of solving this.
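A lighter-weight variant of the suffix check above, sketched here with a hypothetical helper (the name `suffix_contains_stop` is my own, not part of transformers): decode a short tail of `input_ids` as before, then normalize trailing whitespace before comparing, so that `<|im_end|>`, `\n<|im_end|>`, and `\n\n<|im_end|>` all match without any token-id bookkeeping.

```python
def suffix_contains_stop(decoded_suffix: str, stop: str) -> bool:
    """Return True if the decoded tail of a generation ends with `stop`.

    Trailing whitespace after the stop string is stripped first; leading
    newlines before it don't matter because only the suffix is checked.
    """
    return decoded_suffix.rstrip().endswith(stop)


examples = ["<|im_end|>", "\n<|im_end|>", "\n\n<|im_end|>", "plain text"]
print([suffix_contains_stop(s, "<|im_end|>") for s in examples])
# → [True, True, True, False]
```

The decode step inside the `StoppingCriteria` above can stay exactly as written; only the final string comparison changes.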
### Expected behavior
I would like the option to custom-remove the `<|im_end|>` text at the end, despite the tokenization differences with newlines. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28801/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28800/comments | https://api.github.com/repos/huggingface/transformers/issues/28800/events | https://github.com/huggingface/transformers/pull/28800 | 2,110,827,898 | PR_kwDOCUB6oc5lnDE6 | 28,800 | `erfinv_` and `clamp_` ops do not exist in float16+cpu | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-31T19:37:37 | 2024-01-31T20:02:18 | null | MEMBER | null | On cpu, some ops are not defined. This messes up a lot of init weights operations for siglip.
I know fp16 + CPU is weird and will probably never happen in practice. As such, feel free to ignore this PR.
Reproduction case:
```python
from transformers import AutoModel, AutoConfig
import torch
config = AutoConfig.from_pretrained("google/siglip-so400m-patch14-384")
model = AutoModel.from_config(config, torch_dtype=torch.float16)
print(sum([m.sum().item() for m in model.parameters()])) # sanity check
```
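The same failures can be provoked at the op level, without loading siglip at all — a minimal probe (which ops raise depends on the installed torch version; the helper name `probe_fp16_cpu_ops` is my own, not from the PR):

```python
import torch

def probe_fp16_cpu_ops():
    """Try the in-place ops that siglip's weight init relies on, in fp16 on CPU.

    Returns a dict mapping op name -> "supported" or the raised exception's
    class name. Which ops fail depends on the installed torch version.
    """
    x = torch.zeros(4, dtype=torch.float16)  # fp16 tensor on CPU
    ops = {
        "erfinv_": lambda t: t.erfinv_(),
        "clamp_": lambda t: t.clamp_(-1.0, 1.0),
    }
    results = {}
    for name, call in ops.items():
        try:
            call(x.clone())
            results[name] = "supported"
        except (RuntimeError, NotImplementedError) as exc:
            results[name] = type(exc).__name__
    return results

print(probe_fp16_cpu_ops())
```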
I am using torch==2.0.1 (+ cu118). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28800/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28800",
"html_url": "https://github.com/huggingface/transformers/pull/28800",
"diff_url": "https://github.com/huggingface/transformers/pull/28800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28800.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28799/comments | https://api.github.com/repos/huggingface/transformers/issues/28799/events | https://github.com/huggingface/transformers/issues/28799 | 2,110,630,743 | I_kwDOCUB6oc59zatX | 28,799 | `token` parameter not respected for `AutoModel` | {
"login": "squidarth",
"id": 850115,
"node_id": "MDQ6VXNlcjg1MDExNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/850115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/squidarth",
"html_url": "https://github.com/squidarth",
"followers_url": "https://api.github.com/users/squidarth/followers",
"following_url": "https://api.github.com/users/squidarth/following{/other_user}",
"gists_url": "https://api.github.com/users/squidarth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/squidarth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/squidarth/subscriptions",
"organizations_url": "https://api.github.com/users/squidarth/orgs",
"repos_url": "https://api.github.com/users/squidarth/repos",
"events_url": "https://api.github.com/users/squidarth/events{/privacy}",
"received_events_url": "https://api.github.com/users/squidarth/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-31T17:38:54 | 2024-01-31T17:38:54 | null | NONE | null | ### System Info
transformers version: 4.37.2
(`transformers-cli env` errored out for me)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi there,
I am trying to use https://huggingface.co/jinaai/jina-embeddings-v2-base-en, and noticed the following problem:
When using the `token` parameter on `AutoModel`, I get the "You are trying to access a gated repo" error (I have accepted the terms for the model).
I'm working around this by using the `HF_TOKEN` environment variable.
Code:
```
import os
from transformers import AutoModel

# doesn't work
model = AutoModel.from_pretrained(
    'jinaai/jina-embeddings-v2-base-en',
    revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
    trust_remote_code=True,
    token="MY_HF_TOKEN"
)

# works
os.environ["HF_TOKEN"] = "MY_HF_TOKEN"
model = AutoModel.from_pretrained(
    'jinaai/jina-embeddings-v2-base-en',
    revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8",
    trust_remote_code=True,
    token="MY_HF_TOKEN"
)
```
thanks for any help here
### Expected behavior
Using the `token` parameter should lead to the same behavior as using the `HF_TOKEN` environment variable. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28799/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28798/comments | https://api.github.com/repos/huggingface/transformers/issues/28798/events | https://github.com/huggingface/transformers/pull/28798 | 2,110,437,958 | PR_kwDOCUB6oc5lltWB | 28,798 | fix some docs and tensor device bug | {
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-31T16:02:11 | 2024-02-01T02:11:03 | 2024-02-01T02:11:03 | CONTRIBUTOR | null | # What does this PR do?
Fixes
1. fix missing spaces in parameter descriptions
2. fix tensor device
3. add parameter description
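As a generic illustration of the tensor-device class of bug in fix 2 (hypothetical code, not the actual diff): tensors created inside a model's `forward` default to CPU and must instead be created on the input's device.

```python
import torch

def position_ids_for(hidden_states: torch.Tensor) -> torch.Tensor:
    # Buggy version: torch.arange defaults to CPU even when
    # hidden_states lives on a GPU, causing a device-mismatch error later.
    #   positions = torch.arange(hidden_states.size(1))
    # Fixed version: create the tensor directly on the input's device.
    return torch.arange(hidden_states.size(1), device=hidden_states.device)

print(position_ids_for(torch.zeros(1, 4)).tolist())  # → [0, 1, 2, 3]
```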
## Who can review?
@ArthurZucker @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28798/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28798",
"html_url": "https://github.com/huggingface/transformers/pull/28798",
"diff_url": "https://github.com/huggingface/transformers/pull/28798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28798.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28797/comments | https://api.github.com/repos/huggingface/transformers/issues/28797/events | https://github.com/huggingface/transformers/issues/28797 | 2,110,224,239 | I_kwDOCUB6oc59x3dv | 28,797 | Segmentation fault when importing ESMFold and Tokenizers from transformers along with Pyrosetta | {
"login": "SIAndersson",
"id": 117816326,
"node_id": "U_kgDOBwW8Bg",
"avatar_url": "https://avatars.githubusercontent.com/u/117816326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SIAndersson",
"html_url": "https://github.com/SIAndersson",
"followers_url": "https://api.github.com/users/SIAndersson/followers",
"following_url": "https://api.github.com/users/SIAndersson/following{/other_user}",
"gists_url": "https://api.github.com/users/SIAndersson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SIAndersson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SIAndersson/subscriptions",
"organizations_url": "https://api.github.com/users/SIAndersson/orgs",
"repos_url": "https://api.github.com/users/SIAndersson/repos",
"events_url": "https://api.github.com/users/SIAndersson/events{/privacy}",
"received_events_url": "https://api.github.com/users/SIAndersson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-31T14:29:05 | 2024-01-31T16:00:48 | null | NONE | null | ### System Info
- `transformers` version: 4.37.1
- Platform: Linux-4.18.0-513.11.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.3
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import pyrosetta
from transformers import AutoTokenizer, EsmForProteinFolding
```
### Expected behavior
Expected behaviour: the module imports without issue.
I can import pyrosetta on its own without issue. I can import the transformers modules without issue and run inference on PDB modules, as described in the protein structure prediction Jupyter notebook. I can do this without issue in a separate script. It is only when I import both that the segmentation fault occurs. The import order does not matter. Given that both work separately, I would expect them to work together as I cannot find any package conflicts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28797/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28796/comments | https://api.github.com/repos/huggingface/transformers/issues/28796/events | https://github.com/huggingface/transformers/pull/28796 | 2,110,160,851 | PR_kwDOCUB6oc5lkv1u | 28,796 | Make `is_torch_bf16_available_on_device` more strict | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-31T14:00:53 | 2024-01-31T14:52:18 | null | COLLABORATOR | null | # What does this PR do?
The layernorm op is now also required to be supported in order for this function to return `True`.
### detail
Previously, the function `is_torch_bf16_available_on_device` checked
```python
x = torch.zeros(2, 2, dtype=torch.float16).to(device)
_ = x @ x
```
With torch < 2.2 on CPU, this will give
> RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
and `is_torch_bf16_available_on_device` returns `False`.
With torch 2.2, this doesn't fail and the function returns `True`. However, many models use `LayerNorm`, and half-precision `LayerNorm` is still not supported by torch 2.2 on CPU. We then get many failures for fp16 tests on CircleCI (CPU-only).
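That stricter check can be sketched as follows (a hypothetical helper, not the exact code in the PR): probe both a matmul and a layer norm on the target device, and report the dtype as usable only if both ops succeed.

```python
import torch

def is_dtype_usable_on_device(device: str, dtype: torch.dtype) -> bool:
    """Probe the ops transformers models commonly need: matmul + LayerNorm."""
    try:
        x = torch.zeros(2, 2, dtype=dtype, device=device)
        _ = x @ x                                    # addmm/matmul path
        _ = torch.nn.functional.layer_norm(x, (2,))  # layer norm path
    except Exception:
        return False
    return True

print(is_dtype_usable_on_device("cpu", torch.float32))  # → True
print(is_dtype_usable_on_device("cpu", torch.float16))  # depends on torch version
```

On torch < 2.2 the matmul probe alone already fails for fp16 on CPU; on torch 2.2 the layer-norm probe is what catches it.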
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28796/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28796",
"html_url": "https://github.com/huggingface/transformers/pull/28796",
"diff_url": "https://github.com/huggingface/transformers/pull/28796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28796.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28795/comments | https://api.github.com/repos/huggingface/transformers/issues/28795/events | https://github.com/huggingface/transformers/pull/28795 | 2,110,014,175 | PR_kwDOCUB6oc5lkPbU | 28,795 | canonical repos moves | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2024-01-31T12:42:35 | 2024-01-31T17:24:56 | 2024-01-31T13:18:31 | MEMBER | null | we are phasing out canonical models & datasets (as a reminder, "canonical" repos are those that were not under an org or user namespace) and moving them under ad hoc organization namespaces
Note that this move should be backward compatible, i.e. old versions of transformers that do `AutoModel.from_pretrained("gpt2")` should still work. Download stats should also be backward-compatible.
Thanks for reading! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28795/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28795",
"html_url": "https://github.com/huggingface/transformers/pull/28795",
"diff_url": "https://github.com/huggingface/transformers/pull/28795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28795.patch",
"merged_at": "2024-01-31T13:18:31"
} |
https://api.github.com/repos/huggingface/transformers/issues/28794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28794/comments | https://api.github.com/repos/huggingface/transformers/issues/28794/events | https://github.com/huggingface/transformers/issues/28794 | 2,109,842,049 | I_kwDOCUB6oc59waKB | 28,794 | BART-base flash_attention_2 causes CUDA error | {
"login": "Kripner",
"id": 9218121,
"node_id": "MDQ6VXNlcjkyMTgxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9218121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kripner",
"html_url": "https://github.com/Kripner",
"followers_url": "https://api.github.com/users/Kripner/followers",
"following_url": "https://api.github.com/users/Kripner/following{/other_user}",
"gists_url": "https://api.github.com/users/Kripner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kripner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kripner/subscriptions",
"organizations_url": "https://api.github.com/users/Kripner/orgs",
"repos_url": "https://api.github.com/users/Kripner/repos",
"events_url": "https://api.github.com/users/Kripner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kripner/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-31T11:02:53 | 2024-02-01T00:50:58 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.37.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
import datasets
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/bart-base",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)
model.to("cuda")

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

def preprocess_function(examples):
    inputs = examples["document"]
    outputs = examples["summary"]
    tokenized_inputs = tokenizer(inputs, max_length=1024, padding="max_length", truncation=True)
    tokenized_outputs = tokenizer(outputs, max_length=64, padding="max_length", truncation=True)
    return {
        "input_ids": tokenized_inputs["input_ids"],
        "attention_mask": tokenized_inputs["attention_mask"],
        "labels": tokenized_outputs["input_ids"],
    }

train_data = datasets.load_dataset("xsum", split="train[:100]", trust_remote_code=True)
train_data = train_data.map(preprocess_function, batched=True, remove_columns=["document", "summary"])

training_args = Seq2SeqTrainingArguments(
    output_dir="output",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=train_data,
)

trainer.train()
```
### Expected behavior
Expected behavior as per https://huggingface.co/docs/transformers/en/perf_train_gpu_one#flash-attention-2:
> You can speedup the training throughput by using Flash Attention 2 integration in transformers. Check out the appropriate section in the [single GPU section](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#Flash-Attention-2) to learn more about how to load a model with Flash Attention 2 modules.
Bart is listed as supported at the quoted link.
However, the script triggers CUDA error. The full output is (with `CUDA_LAUNCH_BLOCKING=1`):
```
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
WARNING:dvclive:Can't save experiment without a Git Repo.
Create a Git repo (`git init`) and commit (`git commit`).
3%|█████ | 1/39 [00:00<00:13, 2.80it/s]Traceback (most recent call last):
File "/app/pt/experiments/playground.py", line 100, in <module>
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2768, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2791, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1731, in forward
outputs = self.model(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1617, in forward
decoder_outputs = self.decoder(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1470, in forward
layer_outputs = decoder_layer(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 779, in forward
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 403, in forward
attn_output = self._flash_attention_forward(
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 454, in _flash_attention_forward
attn_output_unpad = flash_attn_varlen_func(
File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 1059, in flash_attn_varlen_func
return FlashAttnVarlenFunc.apply(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 576, in forward
out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward(
File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 85, in _flash_attn_varlen_forward
out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
RuntimeError: CUDA error: invalid configuration argument
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
3%|▎ | 1/39 [00:00<00:20, 1.82it/s]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28794/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28793/comments | https://api.github.com/repos/huggingface/transformers/issues/28793/events | https://github.com/huggingface/transformers/issues/28793 | 2,109,683,159 | I_kwDOCUB6oc59vzXX | 28,793 | BART-base save_pretrained triggers a warning about GenerationConfig | {
"login": "Kripner",
"id": 9218121,
"node_id": "MDQ6VXNlcjkyMTgxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9218121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kripner",
"html_url": "https://github.com/Kripner",
"followers_url": "https://api.github.com/users/Kripner/followers",
"following_url": "https://api.github.com/users/Kripner/following{/other_user}",
"gists_url": "https://api.github.com/users/Kripner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kripner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kripner/subscriptions",
"organizations_url": "https://api.github.com/users/Kripner/orgs",
"repos_url": "https://api.github.com/users/Kripner/repos",
"events_url": "https://api.github.com/users/Kripner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kripner/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-31T09:42:20 | 2024-01-31T13:44:31 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.37.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
model.save_pretrained("model")
```
### Expected behavior
Expected behavior: The model is saved without warnings.
Actual behavior: The following warning is triggered before saving the model:
> Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
> Non-default generation parameters: {'early_stopping': True, 'num_beams': 4, 'no_repeat_ngram_size': 3, 'forced_bos_token_id': 0, 'forced_eos_token_id': 2} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28793/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28792/comments | https://api.github.com/repos/huggingface/transformers/issues/28792/events | https://github.com/huggingface/transformers/issues/28792 | 2,109,525,178 | I_kwDOCUB6oc59vMy6 | 28,792 | Add InternLM1 & InternLM2 model | {
"login": "PommesPeter",
"id": 54879512,
"node_id": "MDQ6VXNlcjU0ODc5NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/54879512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PommesPeter",
"html_url": "https://github.com/PommesPeter",
"followers_url": "https://api.github.com/users/PommesPeter/followers",
"following_url": "https://api.github.com/users/PommesPeter/following{/other_user}",
"gists_url": "https://api.github.com/users/PommesPeter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PommesPeter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PommesPeter/subscriptions",
"organizations_url": "https://api.github.com/users/PommesPeter/orgs",
"repos_url": "https://api.github.com/users/PommesPeter/repos",
"events_url": "https://api.github.com/users/PommesPeter/events{/privacy}",
"received_events_url": "https://api.github.com/users/PommesPeter/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 2 | 2024-01-31T08:16:01 | 2024-01-31T13:11:43 | 2024-01-31T13:11:43 | NONE | null | ### Model description
Hey,
the recently released [InternLM](https://github.com/InternLM/InternLM) seems like it would be a nice addition to transformers.
Basically, the model has achieved performance that currently exceeds LLaMA2, Mistral, and other models on many benchmarks. Adding the model to transformers would make it easier to use. It has two parameter sizes, 7B and 20B, for both the v1 and v2 versions. In addition, it includes three types of models: base, sft, and chat.
Maybe there are already plans to integrate it, @NielsRogge?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Project Page: https://internlm.intern-ai.org.cn/
GitHub Repo: https://github.com/InternLM/InternLM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28792/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28791/comments | https://api.github.com/repos/huggingface/transformers/issues/28791/events | https://github.com/huggingface/transformers/issues/28791 | 2,109,431,587 | I_kwDOCUB6oc59u18j | 28,791 | Inappropriate reduce operation of "num_input_tokens_seen" is prone to get training stuck. | {
"login": "YouliangHUANG",
"id": 56789071,
"node_id": "MDQ6VXNlcjU2Nzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/56789071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YouliangHUANG",
"html_url": "https://github.com/YouliangHUANG",
"followers_url": "https://api.github.com/users/YouliangHUANG/followers",
"following_url": "https://api.github.com/users/YouliangHUANG/following{/other_user}",
"gists_url": "https://api.github.com/users/YouliangHUANG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YouliangHUANG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YouliangHUANG/subscriptions",
"organizations_url": "https://api.github.com/users/YouliangHUANG/orgs",
"repos_url": "https://api.github.com/users/YouliangHUANG/repos",
"events_url": "https://api.github.com/users/YouliangHUANG/events{/privacy}",
"received_events_url": "https://api.github.com/users/YouliangHUANG/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-31T07:10:16 | 2024-01-31T07:23:15 | null | NONE | null | ### System Info
Trivial
### Who can help?
@pacman100
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
See [src/transformers/trainer.py line 1870](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1870)
`self.state.num_input_tokens_seen += self.accelerator.gather(inputs[main_input_name]).numel()`
The length of "inputs[main_input_name]" is not guaranteed to be the same when using ddp, which may make the training process hang. Besides, in a distributed setup, it costs a lot to gather the WHOLE input tensors on different workers. It is better to call .numel() first and then .gather().
Ref: [Stuck when using model.generate() and acclerator.gather() in the distributed setting](https://github.com/huggingface/accelerate/issues/1326#issuecomment-1513145864)
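The hazard and the cheaper alternative can be sketched outside of a real distributed run (a plain list stands in for the cross-rank all-gather here; the function name is illustrative, not a transformers API):

```python
import torch

def gathered_token_count(per_rank_inputs):
    # Gathering the full input tensors requires every rank to contribute a
    # tensor of the SAME shape (or padding); with uneven batch lengths the
    # collective can hang. Gathering one int64 scalar per rank (the local
    # numel) always works and moves a few bytes instead of whole batches.
    counts = [torch.tensor(t.numel(), dtype=torch.int64) for t in per_rank_inputs]
    return torch.stack(counts).sum().item()

# Uneven batch shapes across two simulated ranks: (2, 7) and (3, 5)
batches = [torch.zeros(2, 7), torch.zeros(3, 5)]
print(gathered_token_count(batches))  # 29
```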
### Expected behavior
Fix:
```python
input_device = inputs[main_input_name].device
self.state.num_input_tokens_seen += torch.sum(self.accelerator.gather(torch.tensor(inputs[main_input_name].numel(), device=input_device, dtype=torch.int64))).item()
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28791/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28790/comments | https://api.github.com/repos/huggingface/transformers/issues/28790/events | https://github.com/huggingface/transformers/pull/28790 | 2,109,295,655 | PR_kwDOCUB6oc5lhzmV | 28,790 | 🌐 [i18n-ZH] Translate chat_templating.md into Chinese | {
"login": "shibing624",
"id": 10249622,
"node_id": "MDQ6VXNlcjEwMjQ5NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shibing624",
"html_url": "https://github.com/shibing624",
"followers_url": "https://api.github.com/users/shibing624/followers",
"following_url": "https://api.github.com/users/shibing624/following{/other_user}",
"gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shibing624/subscriptions",
"organizations_url": "https://api.github.com/users/shibing624/orgs",
"repos_url": "https://api.github.com/users/shibing624/repos",
"events_url": "https://api.github.com/users/shibing624/events{/privacy}",
"received_events_url": "https://api.github.com/users/shibing624/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-31T05:09:31 | 2024-02-01T06:34:45 | null | NONE | null | # What does this PR do?
Translate chat_templating.md into Chinese
part of https://github.com/huggingface/transformers/issues/20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28790/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28790",
"html_url": "https://github.com/huggingface/transformers/pull/28790",
"diff_url": "https://github.com/huggingface/transformers/pull/28790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28790.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28789/comments | https://api.github.com/repos/huggingface/transformers/issues/28789/events | https://github.com/huggingface/transformers/pull/28789 | 2,109,140,665 | PR_kwDOCUB6oc5lhS3S | 28,789 | [`HFQuantizer`] Remove `check_packages_compatibility` logic | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-31T02:05:21 | 2024-01-31T02:28:37 | 2024-01-31T02:21:28 | CONTRIBUTOR | null | # What does this PR do?
Fixes the currently failing tests for AWQ: https://github.com/huggingface/transformers/actions/runs/7705429360/job/21003940543
I propose to remove the `check_packages_compatibility` logic in the `HfQuantizer` as:
1- it is a duplicate of `validate_environment`
2- For some packages such as awq, `_is_package_available()` returns False because `importlib.util.find_spec(pkg_name) is not None` correctly returns `True` but `importlib.metadata.version(pkg_name)` fails, since autoawq is registered as the `awq` module while the pypi package name is `autoawq`.
As I expect to face similar behaviour in future quantization packages I propose to simply remove that logic and handle everything in `validate_environment`
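The mismatch described in point 2 is reproducible with the standard library alone; a minimal sketch of such an availability check (the function name is illustrative, not the actual `HfQuantizer` API):

```python
import importlib.metadata
import importlib.util


def is_package_available(module_name: str) -> bool:
    # Step 1: the module may well be importable...
    if importlib.util.find_spec(module_name) is None:
        return False
    # Step 2: ...while looking up a version under the same name fails,
    # because the pypi distribution is registered under a different name
    # (e.g. the "awq" module ships in the "autoawq" distribution).
    try:
        importlib.metadata.version(module_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False


# Stdlib modules show the same failure mode: importable, yet carrying no
# distribution metadata under that name.
print(importlib.util.find_spec("json") is not None)  # True
print(is_package_available("json"))                  # False
```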
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28789/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28789",
"html_url": "https://github.com/huggingface/transformers/pull/28789",
"diff_url": "https://github.com/huggingface/transformers/pull/28789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28789.patch",
"merged_at": "2024-01-31T02:21:28"
} |
https://api.github.com/repos/huggingface/transformers/issues/28788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28788/comments | https://api.github.com/repos/huggingface/transformers/issues/28788/events | https://github.com/huggingface/transformers/pull/28788 | 2,109,011,519 | PR_kwDOCUB6oc5lg3IY | 28,788 | [`bnb`] Fix bnb slow tests | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-30T23:52:34 | 2024-01-31T00:31:24 | 2024-01-31T00:31:20 | CONTRIBUTOR | null | # What does this PR do?
Fixes the currently failing BNB slow tests on main: https://github.com/huggingface/transformers/actions/runs/7705429360/job/21003940543
https://github.com/huggingface/transformers/pull/28266, which was merged right before the quantizer refactoring PR, broke the tests.
Since the attributes `load_in_4bit` and `load_in_8bit` have been removed in favor of property methods, the fix is simply to pass them explicitly in the `to_dict` method.
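The underlying pattern, an attribute turned into a property and therefore no longer appearing in the instance `__dict__`, can be illustrated with a toy config class (names here are illustrative, not the real `BitsAndBytesConfig`):

```python
class ToyQuantConfig:
    def __init__(self, quant_method: str):
        self.quant_method = quant_method

    @property
    def load_in_4bit(self) -> bool:
        # computed on the fly, so it never lives in __dict__
        return self.quant_method == "4bit"

    def to_dict(self) -> dict:
        output = dict(self.__dict__)        # properties are NOT picked up here
        output["load_in_4bit"] = self.load_in_4bit  # so pass them explicitly
        return output


cfg = ToyQuantConfig("4bit")
print("load_in_4bit" in cfg.__dict__)  # False: it is a property now
print(cfg.to_dict())  # {'quant_method': '4bit', 'load_in_4bit': True}
```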
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28788/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28788",
"html_url": "https://github.com/huggingface/transformers/pull/28788",
"diff_url": "https://github.com/huggingface/transformers/pull/28788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28788.patch",
"merged_at": "2024-01-31T00:31:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/28787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28787/comments | https://api.github.com/repos/huggingface/transformers/issues/28787/events | https://github.com/huggingface/transformers/issues/28787 | 2,108,486,087 | I_kwDOCUB6oc59rPHH | 28,787 | Converting TF2 SavedModel models to Huggingface | {
"login": "jhyuklee",
"id": 7017152,
"node_id": "MDQ6VXNlcjcwMTcxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7017152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhyuklee",
"html_url": "https://github.com/jhyuklee",
"followers_url": "https://api.github.com/users/jhyuklee/followers",
"following_url": "https://api.github.com/users/jhyuklee/following{/other_user}",
"gists_url": "https://api.github.com/users/jhyuklee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhyuklee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhyuklee/subscriptions",
"organizations_url": "https://api.github.com/users/jhyuklee/orgs",
"repos_url": "https://api.github.com/users/jhyuklee/repos",
"events_url": "https://api.github.com/users/jhyuklee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhyuklee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-30T18:20:36 | 2024-01-31T13:25:28 | null | NONE | null | Hi, I'd like to know how I can convert a TF2 SavedModel (e.g. [gtr-base-1](https://www.kaggle.com/models/google/gtr/frameworks/tensorFlow2/variations/gtr-base/versions/1?tfhub-redirect=true)) to a Huggingface PyTorch model as described in the [README](https://huggingface.co/sentence-transformers/gtr-t5-base). We have a similar model in the same format and would like to use it in Huggingface. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28787/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28786/comments | https://api.github.com/repos/huggingface/transformers/issues/28786/events | https://github.com/huggingface/transformers/pull/28786 | 2,108,463,918 | PR_kwDOCUB6oc5le-7o | 28,786 | [docs] Correct the statement in the docstring of compute_transition_scores in generation/utils.py | {
"login": "Ki-Seki",
"id": 60967965,
"node_id": "MDQ6VXNlcjYwOTY3OTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/60967965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ki-Seki",
"html_url": "https://github.com/Ki-Seki",
"followers_url": "https://api.github.com/users/Ki-Seki/followers",
"following_url": "https://api.github.com/users/Ki-Seki/following{/other_user}",
"gists_url": "https://api.github.com/users/Ki-Seki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ki-Seki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ki-Seki/subscriptions",
"organizations_url": "https://api.github.com/users/Ki-Seki/orgs",
"repos_url": "https://api.github.com/users/Ki-Seki/repos",
"events_url": "https://api.github.com/users/Ki-Seki/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ki-Seki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-30T18:08:55 | 2024-02-01T00:16:24 | 2024-01-31T17:07:30 | CONTRIBUTOR | null | # What does this PR do?
In the `compute_transition_scores` function's docstring, the table in Example 1 should refer to `log probability` instead of `logits`. This is because setting `normalize_logits=True` transforms the `logits` into `log probability`, according to [generation/utils.py#L1012-L1015](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1012-L1015)
Below is the table.
```text
... # | token | token string | logits | probability
... print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
| 262 | the | -1.414 | 24.33%
| 1110 | day | -2.609 | 7.36%
| 618 | when | -2.010 | 13.40%
| 356 | we | -1.859 | 15.58%
| 460 | can | -2.508 | 8.14%
```
You can also easily check this. It's quite clear that the values in the third column are the logarithms of the probability values in the fourth column, i.e., $\ln(\text{fourth column}) = \text{third column}$.
This PR updates the term from `logits` to `log probability` in the table. Without this change, users could become confused when utilizing this feature without referring to the source code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28786/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28786",
"html_url": "https://github.com/huggingface/transformers/pull/28786",
"diff_url": "https://github.com/huggingface/transformers/pull/28786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28786.patch",
"merged_at": "2024-01-31T17:07:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28785/comments | https://api.github.com/repos/huggingface/transformers/issues/28785/events | https://github.com/huggingface/transformers/pull/28785 | 2,108,441,688 | PR_kwDOCUB6oc5le6GM | 28,785 | Pin Torch to <2.2.0 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-30T17:57:10 | 2024-01-30T22:01:13 | 2024-01-30T22:01:12 | MEMBER | null | PyTorch 2.2.0 was pushed to `pip` about 30 minutes ago and is causing our CI to fail. It isn't showing up on Pytorch.org yet, so this may be an accidental push from the maintainers (the same thing happened with TF 2.16 last week)
For now, we pin `torch<2.2.0` to fix the CI. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28785/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28785",
"html_url": "https://github.com/huggingface/transformers/pull/28785",
"diff_url": "https://github.com/huggingface/transformers/pull/28785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28785.patch",
"merged_at": "2024-01-30T22:01:12"
} |
https://api.github.com/repos/huggingface/transformers/issues/28784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28784/comments | https://api.github.com/repos/huggingface/transformers/issues/28784/events | https://github.com/huggingface/transformers/pull/28784 | 2,108,418,715 | PR_kwDOCUB6oc5le1F4 | 28,784 | Backbone kwargs in config | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-30T17:43:22 | 2024-01-31T20:16:47 | null | COLLABORATOR | null | # What does this PR do?
This enables configuring backbones through the config directly, e.g. passing `out_indices` to the backbone, which makes it possible to configure a model's backbone when it's loaded from a pretrained checkpoint. At the moment, this is only possible when loading from a `backbone_config`.
Example:
```py
model = MaskFormer.from_pretrained(
"facebook/maskformer-swin-base-ade",
backbone="facebook/maskformer-swin-large-ade",
backbone_kwargs={"out_indices": (-2, -1)}
)
```
This is necessary to replace the `timm` code currently there for models like DETR, e.g. [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/detr/modeling_detr.py#L341), which is often hard-coded.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28784/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28784",
"html_url": "https://github.com/huggingface/transformers/pull/28784",
"diff_url": "https://github.com/huggingface/transformers/pull/28784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28784.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28783/comments | https://api.github.com/repos/huggingface/transformers/issues/28783/events | https://github.com/huggingface/transformers/pull/28783 | 2,108,360,963 | PR_kwDOCUB6oc5leomi | 28,783 | Ability to override clean_code_for_run | {
"login": "w4ffl35",
"id": 25737761,
"node_id": "MDQ6VXNlcjI1NzM3NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25737761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/w4ffl35",
"html_url": "https://github.com/w4ffl35",
"followers_url": "https://api.github.com/users/w4ffl35/followers",
"following_url": "https://api.github.com/users/w4ffl35/following{/other_user}",
"gists_url": "https://api.github.com/users/w4ffl35/gists{/gist_id}",
"starred_url": "https://api.github.com/users/w4ffl35/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/w4ffl35/subscriptions",
"organizations_url": "https://api.github.com/users/w4ffl35/orgs",
"repos_url": "https://api.github.com/users/w4ffl35/repos",
"events_url": "https://api.github.com/users/w4ffl35/events{/privacy}",
"received_events_url": "https://api.github.com/users/w4ffl35/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-30T17:10:56 | 2024-01-31T01:42:55 | null | NONE | null | # What does this PR do?
Adds an interface function called `clean_code_for_run` to the `Agent` class.
This method simply returns the result of the `clean_code_for_run()` call that was previously happening inline on line 349.
The reason for this change is to allow developers to override the results of the `clean_code_for_run` function in an easy way.
Prior to this change, when using `Agent` with the Mistral Instruct model, the following result is returned:
```
==Code generated by the agent==
result = add_tool(a=5, b=7)
print(f"The result is {result}")
```</s>
```
This would result in an eval error when executing the function. In order to work around this, I did the following:
```
from transformers import LocalAgent as LocalAgentBase
class LocalAgent(LocalAgentBase):
def format_prompt(self, task, chat_mode=False):
task = task.replace("```", "").replace("</s>", "")
return task
def run(self, task, *, return_code=False, remote=False, **kwargs):
prompt = self.format_prompt(task)
result = self.generate_one(prompt, stop=["Task:"])
explanation, code = clean_code_for_run(result)
self.log(f"==Explanation from the agent==\n{explanation}")
"""
This entire class exists as a work around in order
to run the following line of code. Without this, the evaluation
will fail with Mistral Instruct (possibly with other models as well)
"""
code = code.replace("```", "").replace("</s>", "")
self.log(f"\n\n==Code generated by the agent==\n{code}")
if not return_code:
self.log("\n\n==Result==")
self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_tools=self.cached_tools)
return evaluate(code, self.cached_tools, state=kwargs.copy())
else:
tool_code = get_tool_creation_code(code, self.toolbox, remote=remote)
return f"{tool_code}\n{code}"
```
As you can see from the comments, this line:
`code = code.replace("```", "").replace("</s>", "")`
is required to strip the undesired EOS characters. This may be an issue with Mistral Instruct specifically.
Rather than overriding the entire run method as shown above, this new update would allow me to do this instead:
```
from transformers import LocalAgent as LocalAgentBase
class LocalAgent(LocalAgentBase):
def format_prompt(self, task, chat_mode=False):
task = super().format_prompt(task, chat_mode=chat_mode)
return task.replace("```", "").replace("</s>", "")
```
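The stripping itself is a one-liner; below is a minimal, self-contained sketch of the cleanup step (the helper name `strip_generation_markers` is my own for illustration, not part of `transformers`):

```python
FENCE = "`" * 3  # a literal markdown fence, built up to keep this example readable


def strip_generation_markers(code: str) -> str:
    """Drop markdown fences and the </s> EOS token that some models emit."""
    return code.replace(FENCE, "").replace("</s>", "")


# Mimics the trailing fence + EOS token shown in the agent output above.
raw = 'print("The result is 12")\n' + FENCE + "</s>"
print(strip_generation_markers(raw))  # prints only the code line
```

With the proposed hook in place, a subclass would only need to override this one small method instead of re-implementing the whole `run`.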
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28783/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28783",
"html_url": "https://github.com/huggingface/transformers/pull/28783",
"diff_url": "https://github.com/huggingface/transformers/pull/28783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28783.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28782/comments | https://api.github.com/repos/huggingface/transformers/issues/28782/events | https://github.com/huggingface/transformers/pull/28782 | 2,108,349,430 | PR_kwDOCUB6oc5lemGG | 28,782 | Add ability to override clean code for run | {
"login": "w4ffl35",
"id": 25737761,
"node_id": "MDQ6VXNlcjI1NzM3NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25737761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/w4ffl35",
"html_url": "https://github.com/w4ffl35",
"followers_url": "https://api.github.com/users/w4ffl35/followers",
"following_url": "https://api.github.com/users/w4ffl35/following{/other_user}",
"gists_url": "https://api.github.com/users/w4ffl35/gists{/gist_id}",
"starred_url": "https://api.github.com/users/w4ffl35/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/w4ffl35/subscriptions",
"organizations_url": "https://api.github.com/users/w4ffl35/orgs",
"repos_url": "https://api.github.com/users/w4ffl35/repos",
"events_url": "https://api.github.com/users/w4ffl35/events{/privacy}",
"received_events_url": "https://api.github.com/users/w4ffl35/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-30T17:04:59 | 2024-01-30T17:09:24 | 2024-01-30T17:09:19 | NONE | null | # What does this PR do?
Adds an interface function called `clean_code_for_run` to the `Agent` class.
This function simply wraps the call to `clean_code_for_run()` that previously happened inline on line 349.
The reason for this change is to let developers easily override the results of the `clean_code_for_run` function.
Prior to this change, when I use `Agent` with the Mistral Instruct model, the following results are returned:
```
==Code generated by the agent==
result = add_tool(a=5, b=7)
print(f"The result is {result}")
```</s>
```
This would result in an eval error when executing the function. In order to work around this, I did the following:
```
from transformers import LocalAgent as LocalAgentBase
class LocalAgent(LocalAgentBase):
def format_prompt(self, task, chat_mode=False):
task = task.replace("```", "").replace("</s>", "")
return task
def run(self, task, *, return_code=False, remote=False, **kwargs):
prompt = self.format_prompt(task)
result = self.generate_one(prompt, stop=["Task:"])
explanation, code = clean_code_for_run(result)
self.log(f"==Explanation from the agent==\n{explanation}")
"""
This entire class exists as a work around in order
to run the following line of code. Without this, the evaluation
will fail with Mistral Instruct (possibly with other models as well)
"""
code = code.replace("```", "").replace("</s>", "")
self.log(f"\n\n==Code generated by the agent==\n{code}")
if not return_code:
self.log("\n\n==Result==")
self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_tools=self.cached_tools)
return evaluate(code, self.cached_tools, state=kwargs.copy())
else:
tool_code = get_tool_creation_code(code, self.toolbox, remote=remote)
return f"{tool_code}\n{code}"
```
As you can see from the comments, this line:
`code = code.replace("```", "").replace("</s>", "")`
Is required in order to strip the undesired EOS characters. This may be an issue with Mistral Instruct specifically.
Rather than overriding the entire run method as shown above, this new update would allow me to do this instead:
```
from transformers import LocalAgent as LocalAgentBase
class LocalAgent(LocalAgentBase):
def format_prompt(self, task, chat_mode=False):
task = super().format_prompt(task, chat_mode=chat_mode)
return task.replace("```", "").replace("</s>", "")
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28782/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28782",
"html_url": "https://github.com/huggingface/transformers/pull/28782",
"diff_url": "https://github.com/huggingface/transformers/pull/28782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28782.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28781/comments | https://api.github.com/repos/huggingface/transformers/issues/28781/events | https://github.com/huggingface/transformers/issues/28781 | 2,108,336,306 | I_kwDOCUB6oc59qqiy | 28,781 | Unable to use torch scripting to export Mask2Former model | {
"login": "rayryeng",
"id": 765375,
"node_id": "MDQ6VXNlcjc2NTM3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/765375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayryeng",
"html_url": "https://github.com/rayryeng",
"followers_url": "https://api.github.com/users/rayryeng/followers",
"following_url": "https://api.github.com/users/rayryeng/following{/other_user}",
"gists_url": "https://api.github.com/users/rayryeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayryeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayryeng/subscriptions",
"organizations_url": "https://api.github.com/users/rayryeng/orgs",
"repos_url": "https://api.github.com/users/rayryeng/repos",
"events_url": "https://api.github.com/users/rayryeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayryeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "htt... | null | 0 | 2024-01-30T16:58:42 | 2024-01-30T17:17:41 | null | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, a GTX 1080 with 8 GB of VRAM
- Using distributed or parallel set-up in script?: No.
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am attempting to export the [Mask2Former model available in huggingface](https://huggingface.co/facebook/mask2former-swin-base-coco-panoptic) through [`torch.jit.script`](https://pytorch.org/docs/stable/generated/torch.jit.script.html). Here's a minimal reproducible example:
```python
import torch
from transformers import Mask2FormerForUniversalSegmentation
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-base-coco-panoptic", torchscript=True
).to(device)
scripted_model = torch.jit.script(model)
torch.jit.save(scripted_model, 'mask2former.pt')
```
By doing this, I get the following error using torch scripting (path to the offending file has been obfuscated for brevity):
```
torch.jit.frontend.NotSupportedError: Comprehension ifs are not supported yet:
File "/home/.../huggingface/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2559
if not return_dict:
output = tuple(v for v in output.values() if v is not None)
if loss is not None:
output = ((loss)) + output
```
As a hack, I've changed my local installation so that comprehension ifs are removed:
```
if not return_dict:
outputs = []
for v in output.values():
if v is not None:
outputs.append(v)
output = tuple(outputs)
```
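The loop rewrite above is behavior-preserving; a quick plain-Python check of the two forms (no TorchScript involved):

```python
def with_comprehension(values):
    # The original form, which TorchScript rejects ("Comprehension ifs").
    return tuple(v for v in values if v is not None)


def with_loop(values):
    # The TorchScript-friendly rewrite.
    out = []
    for v in values:
        if v is not None:
            out.append(v)
    return tuple(out)


sample = [1, None, "a", None, 3.0]
assert with_comprehension(sample) == with_loop(sample) == (1, "a", 3.0)
```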
This also occurs at line 2306 in the same file, so I've made the same changes there. Once I fix this, there is an error in the forward method for the SWIN backbone:
```
RuntimeError:
'Optional[Tensor]' object has no attribute or method 'shape'.:
File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 313
def forward(self, pixel_values: Optional[torch.FloatTensor]) -> Tuple[torch.Tensor, Tuple[int]]:
_, num_channels, height, width = pixel_values.shape
~~~~~~~~~~~~~~~~~~ <--- HERE
if num_channels != self.num_channels:
raise ValueError(
```
The forward method for the SWIN backbone is confusing, as the input type is declared to be `Optional` but the output type is not. The definition of this method clearly indicates that a concrete tuple is to be returned.
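For context, TorchScript refuses attribute access on an `Optional[Tensor]` until the type has been narrowed; an explicit `is None` check is the standard way to let the compiler refine `Optional[T]` to `T`. A torch-free sketch of the same narrowing pattern:

```python
from typing import Optional, Sequence


def first_dim(values: Optional[Sequence[int]]) -> int:
    if values is None:
        # After this branch, a type checker (and TorchScript's compiler)
        # knows `values` is a plain Sequence[int], so member access is safe.
        raise ValueError("values must not be None")
    return len(values)


print(first_dim([1, 2, 3]))  # 3
```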
As a final experiment, I've removed the `Optional` type declaration and tried to export it one more time:
```
aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor:
Expected a value of type 'List[int]' for argument 'pad' but instead found type 'Tuple[int, Tensor]'.
:
File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 306
if width % self.patch_size[1] != 0:
pad_values = (0, self.patch_size[1] - width % self.patch_size[1])
pixel_values = nn.functional.pad(pixel_values, pad_values)
~~~~~~~~~~~~~~~~~ <--- HERE
if height % self.patch_size[0] != 0:
pad_values = (0, 0, 0, self.patch_size[0] - height % self.patch_size[0])
'SwinPatchEmbeddings.maybe_pad' is being compiled since it was called from 'SwinPatchEmbeddings.forward'
File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 319
)
# pad the input to be divisible by self.patch_size, if needed
pixel_values = self.maybe_pad(pixel_values, height, width)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
embeddings = self.projection(pixel_values)
_, _, height, width = embeddings.shape
```
It seems that what is being put into the forward pass is not, in fact, a `torch.Tensor` when being scripted.
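The `Tuple[int, Tensor]` in the pad error suggests one element of `pad_values` became a 0-d tensor under scripting, while `aten::pad` wants a plain `List[int]`. A common workaround is coercing each element with `int(...)`; here is a torch-free sketch of that coercion, with a tiny stand-in class playing the role of a 0-d tensor (illustrative only, not the SWIN code itself):

```python
class FakeScalarTensor:
    """Stand-in for a 0-d tensor: any object exposing __int__ works here."""

    def __init__(self, value: int) -> None:
        self.value = value

    def __int__(self) -> int:
        return self.value


def as_pad_list(pad_values) -> list:
    # nn.functional.pad expects List[int]; coerce mixed int / scalar-tensor input.
    return [int(v) for v in pad_values]


print(as_pad_list((0, FakeScalarTensor(3))))  # [0, 3]
```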
Is torch scripting this model not supported at this time or am I missing something?
### Expected behavior
The model successfully being exported to disk with torch scripting. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28781/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28780/comments | https://api.github.com/repos/huggingface/transformers/issues/28780/events | https://github.com/huggingface/transformers/pull/28780 | 2,108,249,014 | PR_kwDOCUB6oc5leQT9 | 28,780 | Further pin pytest version (in a temporary way) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-30T16:16:58 | 2024-01-30T16:48:50 | 2024-01-30T16:48:49 | COLLABORATOR | null | # What does this PR do?
#28758 tried to pin `pytest<8.0.0` version, however, in `doc_test_job` job, there is a command
> pip install --upgrade --upgrade-strategy eager pytest pytest-sugar
which bring it back to `8.0.0` again after the desired version being installed with `pip install -e .[dev]`.
This PR changes it to
> pip install --upgrade --upgrade-strategy eager 'pytest<8.0.0' pytest-sugar
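Note the single quotes around the specifier: unquoted, the shell would treat `<8.0.0` as an input redirection from a file named `8.0.0` rather than part of the argument. A quick way to see what actually reaches the command:

```shell
show_args() { printf 'arg: %s\n' "$@"; }

# Quoted: the full specifier survives as a single argument, as pip needs.
show_args 'pytest<8.0.0' pytest-sugar
# prints: arg: pytest<8.0.0
#         arg: pytest-sugar
```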
this is not an ideal solution, but let's fix it quickly, and I will see if I can find a better long-term solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28780/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28780",
"html_url": "https://github.com/huggingface/transformers/pull/28780",
"diff_url": "https://github.com/huggingface/transformers/pull/28780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28780.patch",
"merged_at": "2024-01-30T16:48:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28779/comments | https://api.github.com/repos/huggingface/transformers/issues/28779/events | https://github.com/huggingface/transformers/pull/28779 | 2,108,242,406 | PR_kwDOCUB6oc5leO29 | 28,779 | Prevent MLflow exception from disrupting training | {
"login": "codiceSpaghetti",
"id": 71273533,
"node_id": "MDQ6VXNlcjcxMjczNTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/71273533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codiceSpaghetti",
"html_url": "https://github.com/codiceSpaghetti",
"followers_url": "https://api.github.com/users/codiceSpaghetti/followers",
"following_url": "https://api.github.com/users/codiceSpaghetti/following{/other_user}",
"gists_url": "https://api.github.com/users/codiceSpaghetti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codiceSpaghetti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codiceSpaghetti/subscriptions",
"organizations_url": "https://api.github.com/users/codiceSpaghetti/orgs",
"repos_url": "https://api.github.com/users/codiceSpaghetti/repos",
"events_url": "https://api.github.com/users/codiceSpaghetti/events{/privacy}",
"received_events_url": "https://api.github.com/users/codiceSpaghetti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-30T16:13:48 | 2024-01-31T01:10:44 | 2024-01-31T01:10:44 | CONTRIBUTOR | null | This PR prevents a training in progress from **being interrupted** due to a problem with the MLflow server (e.g., lack of connectivity) and will make the code more **fault-tolerant**, as also discussed in [this](https://github.com/mlflow/mlflow/issues/1550) MLflow issue.
This can be achieved by simply changing the `synchronous` parameter of `mlflow.log_metrics` to `False` (previously it was left at its default value, which is `True`).
In this way, as described in the [MLflow documentation](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_metrics), the function logs the metrics asynchronously and returns a future representing the logging operation, instead of blocking until the log is successful. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28779/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28779",
"html_url": "https://github.com/huggingface/transformers/pull/28779",
"diff_url": "https://github.com/huggingface/transformers/pull/28779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28779.patch",
"merged_at": "2024-01-31T01:10:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28778/comments | https://api.github.com/repos/huggingface/transformers/issues/28778/events | https://github.com/huggingface/transformers/issues/28778 | 2,108,209,721 | I_kwDOCUB6oc59qLo5 | 28,778 | OWL-VIT Finetuning code for custom dataset in Hugging Face | {
"login": "solomonmanuelraj",
"id": 25194971,
"node_id": "MDQ6VXNlcjI1MTk0OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/25194971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solomonmanuelraj",
"html_url": "https://github.com/solomonmanuelraj",
"followers_url": "https://api.github.com/users/solomonmanuelraj/followers",
"following_url": "https://api.github.com/users/solomonmanuelraj/following{/other_user}",
"gists_url": "https://api.github.com/users/solomonmanuelraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solomonmanuelraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solomonmanuelraj/subscriptions",
"organizations_url": "https://api.github.com/users/solomonmanuelraj/orgs",
"repos_url": "https://api.github.com/users/solomonmanuelraj/repos",
"events_url": "https://api.github.com/users/solomonmanuelraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/solomonmanuelraj/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-30T15:58:23 | 2024-02-01T00:48:55 | null | NONE | null | ### Feature request
Hi team,
When I am trying to fine-tune the OWL-ViT base-32 model with the custom CPPE-5 dataset, I am receiving the following error at the time of the `trainer.train()` call.
######################################################################################################################
ValueError Traceback (most recent call last)
Cell In[40], line 11
1 from transformers import Trainer
3 trainer = Trainer(
4 model=lora_model,
5 args=training_args,
(...)
8 tokenizer=processor,
9 )
---> 11 trainer.train()
File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1537, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1535 hf_hub_utils.enable_progress_bars()
1536 else:
-> 1537 return inner_training_loop(
1538 args=args,
1539 resume_from_checkpoint=resume_from_checkpoint,
1540 trial=trial,
1541 ignore_keys_for_eval=ignore_keys_for_eval,
1542 )
File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1854, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1851 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1853 with self.accelerator.accumulate(model):
-> 1854 tr_loss_step = self.training_step(model, inputs)
1856 if (
1857 args.logging_nan_inf_filter
1858 and not is_torch_tpu_available()
1859 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1860 ):
1861 # if loss is nan or inf simply add the average of previous logged losses
1862 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2735, in Trainer.training_step(self, model, inputs)
2732 return loss_mb.reduce_mean().detach().to(self.args.device)
2734 with self.compute_loss_context_manager():
-> 2735 loss = self.compute_loss(model, inputs)
2737 if self.args.n_gpu > 1:
2738 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2776, in Trainer.compute_loss(self, model, inputs, return_outputs)
2774 else:
2775 if isinstance(outputs, dict) and "loss" not in outputs:
-> 2776 raise ValueError(
2777 "The model did not return a loss from the inputs, only the following keys: "
2778 f"{','.join(outputs.keys())}. For reference, the inputs it received are {','.join(inputs.keys())}."
2779 )
2780 # We don't use .loss here since the model may return tuples instead of ModelOutput.
2781 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
ValueError: The model did not return a loss from the inputs, only the following keys: logits,pred_boxes,text_embeds,image_embeds,class_embeds,text_model_output,vision_model_output. For reference, the inputs it received are input_ids,attention_mask,pixel_values
#####################################################################################################################
Collate_fn() definition
#######################################################################################################################
```python
def collate_fn(batch):
    input_ids = torch.Tensor([item["input_ids"].tolist() for item in batch]).int()
    input_ids = input_ids.to(device)
    attention_mask = torch.Tensor([item["attention_mask"].tolist() for item in batch]).int()
    attention_mask = attention_mask.to(device)
    pixel_values = torch.Tensor([item["pixel_values"].tolist() for item in batch])
    pixel_values = pixel_values.to(device)
    batch = {}
    batch["input_ids"] = input_ids
    batch["attention_mask"] = attention_mask
    batch["pixel_values"] = pixel_values
    print(batch)
    return batch
```
####################################################################################################################
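One observation: the `ValueError` above lists the inputs the model received as `input_ids,attention_mask,pixel_values` — no `labels` — and `Trainer.compute_loss` raises exactly this error when the forward pass returns no loss. A common first step is to make the collate function forward a `labels` key as well (whether a given OWL-ViT head actually computes a loss from it is model-specific). A torch-free sketch of that batching shape, with plain lists standing in for tensors:

```python
def collate_with_labels(items):
    # Same keys as the collate_fn above, plus "labels" so the Trainer
    # has something to derive a loss from.
    return {
        "input_ids": [item["input_ids"] for item in items],
        "attention_mask": [item["attention_mask"] for item in items],
        "pixel_values": [item["pixel_values"] for item in items],
        "labels": [item["labels"] for item in items],
    }


items = [{"input_ids": [1], "attention_mask": [1], "pixel_values": [0.0],
          "labels": {"class_labels": [2], "boxes": [[0, 0, 1, 1]]}}]
print(sorted(collate_with_labels(items)))  # includes 'labels'
```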
I am using the CPPE-5 dataset from HF for custom training and testing.
Let me know your feedback.
### Motivation
Fine-tuning the OWL-ViT model on a custom dataset using the HF Trainer. It will help to PEFT fine-tune the model with LoRA.
### Your contribution
Will test this feature with the custom dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28778/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28777/comments | https://api.github.com/repos/huggingface/transformers/issues/28777/events | https://github.com/huggingface/transformers/pull/28777 | 2,108,139,886 | PR_kwDOCUB6oc5ld4rb | 28,777 | Adds Auto Model support for llama question answering. | {
"login": "nakranivaibhav",
"id": 67785830,
"node_id": "MDQ6VXNlcjY3Nzg1ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakranivaibhav",
"html_url": "https://github.com/nakranivaibhav",
"followers_url": "https://api.github.com/users/nakranivaibhav/followers",
"following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}",
"gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions",
"organizations_url": "https://api.github.com/users/nakranivaibhav/orgs",
"repos_url": "https://api.github.com/users/nakranivaibhav/repos",
"events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakranivaibhav/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-30T15:26:34 | 2024-01-31T13:40:19 | null | CONTRIBUTOR | null | # What does this PR do?
Adds AutoModelForQuestionAnswering support for llama.
Fixes # (issue)
#28265
## Who can review?
@ArthurZucker @NielsRogge
I haven't added a copy statement. When I try to add the following copy statement
`# Copied from transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.__init__ with Bloom->LLama`
It throws an error even though both the init functions are the same.
When I run `make fix-copies`, it wants to capitalize both L's in `LlamaModel` on this line:
`self.transformer = LlamaModel(config)` -> `self.transformer = LLamaModel(config)`
Perhaps I am missing something and @ArthurZucker can guide me further.
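One observation on the `make fix-copies` symptom: the copy statement quoted above spells the target as `LLama` (double capital L). Since fix-copies applies a literal pattern replacement to the copied source, `BloomModel` would then map to `LLamaModel`, which matches the "fix" it proposes. A two-line illustration of that literal replace (plain string handling, not the actual fix-copies implementation):

```python
copied_line = "self.transformer = BloomModel(config)"
# With the directive "Bloom->LLama", a literal replace yields:
print(copied_line.replace("Bloom", "LLama"))  # self.transformer = LLamaModel(config)
```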
I tried training the model on a subset of SQUAD and it does train fine. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28777/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28777",
"html_url": "https://github.com/huggingface/transformers/pull/28777",
"diff_url": "https://github.com/huggingface/transformers/pull/28777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28777.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28776/comments | https://api.github.com/repos/huggingface/transformers/issues/28776/events | https://github.com/huggingface/transformers/issues/28776 | 2,108,044,294 | I_kwDOCUB6oc59pjQG | 28,776 | Allow disabling of deletion of leading SPIECE_UNDERLINE during llama decoding (tokenizer). | {
"login": "JoshC8C7",
"id": 32071009,
"node_id": "MDQ6VXNlcjMyMDcxMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/32071009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoshC8C7",
"html_url": "https://github.com/JoshC8C7",
"followers_url": "https://api.github.com/users/JoshC8C7/followers",
"following_url": "https://api.github.com/users/JoshC8C7/following{/other_user}",
"gists_url": "https://api.github.com/users/JoshC8C7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoshC8C7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoshC8C7/subscriptions",
"organizations_url": "https://api.github.com/users/JoshC8C7/orgs",
"repos_url": "https://api.github.com/users/JoshC8C7/repos",
"events_url": "https://api.github.com/users/JoshC8C7/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoshC8C7/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-30T14:48:03 | 2024-01-31T01:08:12 | null | NONE | null | ### Feature request
The `LlamaTokenizer`'s `convert_tokens_to_string` method used during decoding [has the statement](https://github.com/huggingface/transformers/blob/6f7d5db58c7c149c75642b5a4647b5cbc6c55643/src/transformers/models/llama/tokenization_llama.py#L286):
```
if tokens[0].startswith(SPIECE_UNDERLINE):
tokens[0] = tokens[0][1:]
```
which deletes a space if it falls at the start of the first token being decoded. There are cases where this is undesirable - namely in building streaming applications where an output is decoded in chunks and so a decoded sequence may begin with a space, which is unhelpfully deleted here. AFAIK there is then no way of knowing if the space was deleted without looking up the first token of each chunk in the vocabulary, and thus no way to faithfully recombine the chunks into a complete output.
**Ideally this deletion should be parameterised so it can be turned off in cases like these.**
My current workaround is to prefix a 'fake' token to the start of every sequence, before deleting it from the outputted text. I believe TGI [have a similar workaround](https://github.com/huggingface/text-generation-inference/blob/2d56f106a60c7b698705494e7539f8a7e4c85dd9/server/text_generation_server/models/model.py#L86).
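To make the workaround concrete, here is a stdlib-only sketch (a toy model of the quoted behavior, not the actual tokenizer code; the marker token is made up) showing how prefixing a fake token preserves the leading space:

```python
SPIECE_UNDERLINE = "▁"  # U+2581, sentencepiece's word-boundary marker

def decode_chunk(tokens):
    # toy model of LlamaTokenizer.convert_tokens_to_string: the leading
    # SPIECE_UNDERLINE of the very first token is dropped before joining
    if tokens and tokens[0].startswith(SPIECE_UNDERLINE):
        tokens = [tokens[0][1:]] + list(tokens[1:])
    return "".join(t.replace(SPIECE_UNDERLINE, " ") for t in tokens)

def decode_chunk_keep_space(tokens, marker="§"):
    # workaround sketch: prefix a fake token so the real first token is
    # never at position 0, then strip the marker from the decoded text
    return decode_chunk([marker] + list(tokens))[len(marker):]
```

Here `decode_chunk(["▁world"])` loses the space ("world"), while `decode_chunk_keep_space(["▁world"])` keeps it (" world"), so streamed chunks can be concatenated faithfully.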
### Motivation
When writing streaming applications with llama/codellama, you may want to decode in chunks. Where this chunk boundary falls between two words (i.e. the last token of the previous chunk is a word, and the first token of the next chunk is a word), then when decoding the second chunk the output string does not have a preceding space (even if its mapping in `tokenizer.json` has a SPIECE_UNDERLINE at the start). This information loss means the outputted chunks cannot faithfully be joined up, as it isn't known whether a space is deleted.
### Your contribution
I could submit a PR - this requires changing individual tokenizer files so I would have to read up on the procedure for that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28776/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28775/comments | https://api.github.com/repos/huggingface/transformers/issues/28775/events | https://github.com/huggingface/transformers/pull/28775 | 2,107,766,333 | PR_kwDOCUB6oc5lcnIP | 28,775 | add regnet chinese doc | {
"login": "a-strong-python",
"id": 65645246,
"node_id": "MDQ6VXNlcjY1NjQ1MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/65645246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-strong-python",
"html_url": "https://github.com/a-strong-python",
"followers_url": "https://api.github.com/users/a-strong-python/followers",
"following_url": "https://api.github.com/users/a-strong-python/following{/other_user}",
"gists_url": "https://api.github.com/users/a-strong-python/gists{/gist_id}",
"starred_url": "https://api.github.com/users/a-strong-python/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/a-strong-python/subscriptions",
"organizations_url": "https://api.github.com/users/a-strong-python/orgs",
"repos_url": "https://api.github.com/users/a-strong-python/repos",
"events_url": "https://api.github.com/users/a-strong-python/events{/privacy}",
"received_events_url": "https://api.github.com/users/a-strong-python/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-30T12:37:17 | 2024-01-30T12:37:17 | null | NONE | null | Add Chinese reference documents to the regnet model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28775/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28775",
"html_url": "https://github.com/huggingface/transformers/pull/28775",
"diff_url": "https://github.com/huggingface/transformers/pull/28775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28775.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28774/comments | https://api.github.com/repos/huggingface/transformers/issues/28774/events | https://github.com/huggingface/transformers/pull/28774 | 2,107,572,860 | PR_kwDOCUB6oc5lb7xE | 28,774 | Fix transformers.utils.fx compatibility with torch<2.0 | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-30T11:14:30 | 2024-01-30T13:54:43 | 2024-01-30T13:54:42 | COLLABORATOR | null | Fixes https://github.com/huggingface/transformers/issues/28690
Tested on 1.13 that `pytest tests/models/opt/ -k "test_torch_fx" -s -vvvvv` passes, while it is currently failing with torch<2.0 following https://github.com/huggingface/transformers/pull/28447 (`AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28774/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28774",
"html_url": "https://github.com/huggingface/transformers/pull/28774",
"diff_url": "https://github.com/huggingface/transformers/pull/28774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28774.patch",
"merged_at": "2024-01-30T13:54:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/28773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28773/comments | https://api.github.com/repos/huggingface/transformers/issues/28773/events | https://github.com/huggingface/transformers/pull/28773 | 2,107,424,909 | PR_kwDOCUB6oc5lbbbd | 28,773 | Split daily CI using 2 level matrix | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-30T10:02:12 | 2024-01-31T17:04:44 | 2024-01-31T17:04:43 | COLLABORATOR | null | # What does this PR do?
This PR aims to bypass the 256 jobs limitation (in a matrix) on GitHub Actions, as we are approaching that limit soon.
The idea:
- move the model job logic into a new workflow file (it **still uses matrix**)
- call the new workflow file in the original workflow file, but pass some inputs to it
- a (nested) list: each element is itself a list holding a subset of model names
- a slice id
When the new workflow file is called with these inputs, it uses the slice id to pick the corresponding subset of model names, and runs the matrix over that subset to generate the jobs.
In the original workflow file, we **generate the slice ids by a matrix**.
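As an illustration only (not the repository's actual utility), the nested-list input could be produced by an even partition of the full model list, one slice per outer-matrix job:

```python
def split_into_slices(model_names, num_slices):
    # evenly partition the model list so each matrix slice gets either
    # k or k+1 names, with the remainder spread over the first slices
    k, r = divmod(len(model_names), num_slices)
    slices, start = [], 0
    for i in range(num_slices):
        end = start + k + (1 if i < r else 0)
        slices.append(model_names[start:end])
        start = end
    return slices
```

Each inner workflow run would then index into this structure with its slice id.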
See example runs
**Full version**
https://github.com/huggingface/transformers/actions/runs/7701814182
<img width="512" alt="Screenshot 2024-01-30 111539" src="https://github.com/huggingface/transformers/assets/2521628/465554f6-ff92-4ac2-bf27-658353a982b3">
**Demo version**
https://github.com/huggingface/transformers/actions/runs/7702628211
<img width="512" alt="Screenshot 2024-01-30 110004" src="https://github.com/huggingface/transformers/assets/2521628/9262b176-4955-4215-a6c5-977fa124a4a8">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28773/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28773",
"html_url": "https://github.com/huggingface/transformers/pull/28773",
"diff_url": "https://github.com/huggingface/transformers/pull/28773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28773.patch",
"merged_at": "2024-01-31T17:04:43"
} |
https://api.github.com/repos/huggingface/transformers/issues/28772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28772/comments | https://api.github.com/repos/huggingface/transformers/issues/28772/events | https://github.com/huggingface/transformers/pull/28772 | 2,107,269,695 | PR_kwDOCUB6oc5la5br | 28,772 | doc: fix a typo | {
"login": "ThibaultLengagne",
"id": 11950126,
"node_id": "MDQ6VXNlcjExOTUwMTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThibaultLengagne",
"html_url": "https://github.com/ThibaultLengagne",
"followers_url": "https://api.github.com/users/ThibaultLengagne/followers",
"following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}",
"gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions",
"organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs",
"repos_url": "https://api.github.com/users/ThibaultLengagne/repos",
"events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-30T08:53:08 | 2024-01-30T08:53:50 | 2024-01-30T08:53:50 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28772/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28772",
"html_url": "https://github.com/huggingface/transformers/pull/28772",
"diff_url": "https://github.com/huggingface/transformers/pull/28772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28772.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28771/comments | https://api.github.com/repos/huggingface/transformers/issues/28771/events | https://github.com/huggingface/transformers/issues/28771 | 2,107,131,812 | I_kwDOCUB6oc59mEek | 28,771 | Mistral with FlashAttention2 | {
"login": "khalil-Hennara",
"id": 90086758,
"node_id": "MDQ6VXNlcjkwMDg2NzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/90086758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khalil-Hennara",
"html_url": "https://github.com/khalil-Hennara",
"followers_url": "https://api.github.com/users/khalil-Hennara/followers",
"following_url": "https://api.github.com/users/khalil-Hennara/following{/other_user}",
"gists_url": "https://api.github.com/users/khalil-Hennara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khalil-Hennara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khalil-Hennara/subscriptions",
"organizations_url": "https://api.github.com/users/khalil-Hennara/orgs",
"repos_url": "https://api.github.com/users/khalil-Hennara/repos",
"events_url": "https://api.github.com/users/khalil-Hennara/events{/privacy}",
"received_events_url": "https://api.github.com/users/khalil-Hennara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-30T07:26:00 | 2024-01-31T00:54:49 | 2024-01-31T00:54:31 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.1
- Accelerate version: 0.27.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2")`
The code line is taken from the official documentation: [Mistral](https://huggingface.co/docs/transformers/v4.37.2/model_doc/mistral#model-details)
TypeError: MistralForCausalLM.__init__() got an unexpected keyword argument 'attn_implementation'
When using `use_flash_attention_2=True`, it works fine.
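Note that the environment above reports `transformers` 4.35.2 while the linked docs are for v4.37.2, and the `attn_implementation` argument appears to have been introduced around v4.36, which would explain the `TypeError`. A hypothetical version-gated helper (names are mine, not a transformers API) could pick the right keyword:

```python
def fa2_kwargs(transformers_version):
    # hypothetical guard: `attn_implementation` postdates the 4.35.x
    # series, so fall back to the older flag on earlier releases
    major, minor = (int(x) for x in transformers_version.split(".")[:2])
    if (major, minor) >= (4, 36):
        return {"attn_implementation": "flash_attention_2"}
    return {"use_flash_attention_2": True}
```

The returned dict would be splatted into `from_pretrained(..., **fa2_kwargs(transformers.__version__))`.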
### Expected behavior
The model should load without error, using FlashAttention-2 in the background. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28771/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28770/comments | https://api.github.com/repos/huggingface/transformers/issues/28770/events | https://github.com/huggingface/transformers/issues/28770 | 2,107,013,041 | I_kwDOCUB6oc59lnex | 28,770 | Lora + DeepSpeed non-trainer integration does not work | {
"login": "hrushikesh198",
"id": 6188036,
"node_id": "MDQ6VXNlcjYxODgwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6188036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hrushikesh198",
"html_url": "https://github.com/hrushikesh198",
"followers_url": "https://api.github.com/users/hrushikesh198/followers",
"following_url": "https://api.github.com/users/hrushikesh198/following{/other_user}",
"gists_url": "https://api.github.com/users/hrushikesh198/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hrushikesh198/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrushikesh198/subscriptions",
"organizations_url": "https://api.github.com/users/hrushikesh198/orgs",
"repos_url": "https://api.github.com/users/hrushikesh198/repos",
"events_url": "https://api.github.com/users/hrushikesh198/events{/privacy}",
"received_events_url": "https://api.github.com/users/hrushikesh198/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-30T05:56:56 | 2024-01-30T10:06:32 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, trying to run deepspeed zero 3 on 8 A100 gpus
### Who can help?
cc: @pacman100
Tagging few folks who were discussing a similar issue before https://github.com/huggingface/transformers/issues/24445
@1ytic, @don-tpanic, @olegsinavski
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune Mistral-7B using LoRA, DeepSpeed ZeRO-3, and PyTorch Lightning. As per the DeepSpeed non-trainer integration docs, I have created the `dschf` object and kept it alive.
```python
class Module(LightningModule):
def configure_model(self) -> None:
if self.model is not None:
return
deepspeed_config = self.trainer.strategy.config
self.dschf = HfDeepSpeedConfig(deepspeed_config)
self.model = AutoModelForSequenceClassification.from_pretrained(...)
self.model = get_peft_model(
self.model,
LoraConfig(
task_type=TaskType.SEQ_CLS,
inference_mode=False,
target_modules=target_modules,
r=256,
lora_alpha=256,
lora_dropout=0.5,
),
)
def main():
trainer = lightning.Trainer(
...
strategy = DeepSpeedStrategy(stage=3)
)
model=Module()
trainer.fit(model, datamodules)
```
The training script throws an error on the `get_peft_model` line
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 528, in __getattr__
return super().__getattr__(name) # defer to nn.Module's logic
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'PeftModelForSequenceClassification' object has no attribute '_ds_child_entered'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 528, in __getattr__
return super().__getattr__(name) # defer to nn.Module's logic
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'PeftModelForSequenceClassification' object has no attribute 'base_model'
```
The second one repeats until it reaches the maximum recursion depth.
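A stdlib-only toy (names are illustrative, not peft's real code) reproduces the pattern behind the repeating traceback: `__getattr__` defers unknown attributes to `base_model`, so if `__init__` never managed to set that attribute, the deferral re-enters `__getattr__` forever:

```python
class PeftishWrapper:
    # toy stand-in for a peft-style wrapper class
    def __init__(self, init_succeeds=True):
        if init_succeeds:
            self.base_model = object()
        # if base_model is never assigned (e.g. an interposed init hook
        # raised first), every failed lookup below re-enters itself

    def __getattr__(self, name):
        # only called when normal lookup fails; deferring to base_model
        # triggers __getattr__("base_model") when that attribute is absent
        return getattr(self.base_model, name)

broken = PeftishWrapper(init_succeeds=False)
try:
    broken.anything
    blew_up = False
except RecursionError:
    blew_up = True
```

This suggests the ZeRO-3 init hooks interfered before `base_model` was assigned on the wrapper.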
### Expected behavior
Lora model should initialize seamlessly and train as it works for deepspeed stage 2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28770/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28769/comments | https://api.github.com/repos/huggingface/transformers/issues/28769/events | https://github.com/huggingface/transformers/pull/28769 | 2,106,899,551 | PR_kwDOCUB6oc5lZqM9 | 28,769 | Trainer - add cache clearing and the option for batched eval metrics computation | {
"login": "FoamoftheSea",
"id": 50897218,
"node_id": "MDQ6VXNlcjUwODk3MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/50897218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FoamoftheSea",
"html_url": "https://github.com/FoamoftheSea",
"followers_url": "https://api.github.com/users/FoamoftheSea/followers",
"following_url": "https://api.github.com/users/FoamoftheSea/following{/other_user}",
"gists_url": "https://api.github.com/users/FoamoftheSea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FoamoftheSea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FoamoftheSea/subscriptions",
"organizations_url": "https://api.github.com/users/FoamoftheSea/orgs",
"repos_url": "https://api.github.com/users/FoamoftheSea/repos",
"events_url": "https://api.github.com/users/FoamoftheSea/events{/privacy}",
"received_events_url": "https://api.github.com/users/FoamoftheSea/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-30T03:53:44 | 2024-01-30T18:10:37 | null | CONTRIBUTOR | null | # What does this PR do?
This PR does two things which are necessary for using the Trainer in resource-constrained environments (like my RTX-3070Ti machine):
1. Add cache clearing in training and evaluation loops
- This reduces peak GPU load and prevents CUDA OOM errors when running near capacity.
2. Add Trainer arg `batch_eval_metrics` for batched eval metrics computation.
- When working with limited RAM, storing all logits across the entire evaluation set may not be feasible. A user working in this condition can pass `True` to `batch_eval_metrics` and construct a `compute_metrics` function which can update average metrics at a batch level to prevent OOM errors with large eval sets. Particularly useful for vision transformers.
- Previous functionality is unaltered if option is not set to `True`
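For illustration only (the real `compute_metrics` callable receives `EvalPrediction` objects; the class and method names here are made up), a batch-level metric can keep running counts instead of retaining every batch's logits:

```python
class RunningAccuracy:
    # accumulates per-batch counts so logits never need to be stored
    # for the whole evaluation set
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, predictions, labels):
        self.correct += sum(p == l for p, l in zip(predictions, labels))
        self.total += len(labels)

    def compute(self):
        return self.correct / self.total

metric = RunningAccuracy()
metric.update([1, 0, 1], [1, 1, 1])  # batch 1: 2/3 correct
metric.update([0, 0], [0, 1])        # batch 2: 1/2 correct
```

Calling `update` once per eval batch keeps peak memory flat, and `compute` yields the aggregate at the end.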
@muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28769/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28769/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28769",
"html_url": "https://github.com/huggingface/transformers/pull/28769",
"diff_url": "https://github.com/huggingface/transformers/pull/28769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28769.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28768/comments | https://api.github.com/repos/huggingface/transformers/issues/28768/events | https://github.com/huggingface/transformers/pull/28768 | 2,106,808,476 | PR_kwDOCUB6oc5lZXU- | 28,768 | [`HfQuantizer`] Move it to "Developper guides" | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-30T02:09:23 | 2024-01-30T06:20:22 | 2024-01-30T06:20:21 | CONTRIBUTOR | null | # What does this PR do?
Move the "How to add a new quantization method" tutorial into "Developer guides", which seems a more appropriate place for it than "Performance and scalability".
cc @stevhliu @ArthurZucker @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28768/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28768",
"html_url": "https://github.com/huggingface/transformers/pull/28768",
"diff_url": "https://github.com/huggingface/transformers/pull/28768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28768.patch",
"merged_at": "2024-01-30T06:20:21"
} |
https://api.github.com/repos/huggingface/transformers/issues/28767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28767/comments | https://api.github.com/repos/huggingface/transformers/issues/28767/events | https://github.com/huggingface/transformers/pull/28767 | 2,106,654,935 | PR_kwDOCUB6oc5lY2kq | 28,767 | added support for llama v2 and codellama in weight conversion for issue #28241 | {
"login": "christoukmaji",
"id": 51040574,
"node_id": "MDQ6VXNlcjUxMDQwNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/51040574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/christoukmaji",
"html_url": "https://github.com/christoukmaji",
"followers_url": "https://api.github.com/users/christoukmaji/followers",
"following_url": "https://api.github.com/users/christoukmaji/following{/other_user}",
"gists_url": "https://api.github.com/users/christoukmaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/christoukmaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/christoukmaji/subscriptions",
"organizations_url": "https://api.github.com/users/christoukmaji/orgs",
"repos_url": "https://api.github.com/users/christoukmaji/repos",
"events_url": "https://api.github.com/users/christoukmaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/christoukmaji/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-29T23:44:46 | 2024-01-31T01:43:32 | null | NONE | null | # What does this PR do?
This PR adds support for LLaMa V2 and CodeLLaMa in the LLaMa-to-HuggingFace weight conversion script `src/transformers/models/llama/convert_llama_weights_to_hf.py`, while maintaining backwards compatibility with LLaMa V1. It sets `max_position_embeddings` to 4096 for LLaMa V2 and to 16384 for CodeLLaMa, keeping the default of 2048 for LLaMa V1.
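The version-dependent default can be sketched as follows (an illustrative helper, not the actual diff; the `llama_version` and `code_llama` argument names are assumptions):

```python
# Sketch of the version-dependent default; argument names are hypothetical,
# only the three context-length values come from the PR description.
def default_max_position_embeddings(llama_version: int, code_llama: bool = False) -> int:
    if code_llama:
        return 16384  # CodeLLaMa long context
    if llama_version == 2:
        return 4096   # LLaMa V2
    return 2048       # LLaMa V1 (backwards-compatible default)
```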
Fixes #28241
## Who can review?
@ArthurZucker @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28767/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28767",
"html_url": "https://github.com/huggingface/transformers/pull/28767",
"diff_url": "https://github.com/huggingface/transformers/pull/28767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28767.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28766/comments | https://api.github.com/repos/huggingface/transformers/issues/28766/events | https://github.com/huggingface/transformers/pull/28766 | 2,106,480,912 | PR_kwDOCUB6oc5lYQHT | 28,766 | #27237 | {
"login": "oublalkhalid",
"id": 76509145,
"node_id": "MDQ6VXNlcjc2NTA5MTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/76509145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oublalkhalid",
"html_url": "https://github.com/oublalkhalid",
"followers_url": "https://api.github.com/users/oublalkhalid/followers",
"following_url": "https://api.github.com/users/oublalkhalid/following{/other_user}",
"gists_url": "https://api.github.com/users/oublalkhalid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oublalkhalid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oublalkhalid/subscriptions",
"organizations_url": "https://api.github.com/users/oublalkhalid/orgs",
"repos_url": "https://api.github.com/users/oublalkhalid/repos",
"events_url": "https://api.github.com/users/oublalkhalid/events{/privacy}",
"received_events_url": "https://api.github.com/users/oublalkhalid/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-29T21:37:52 | 2024-01-30T09:44:19 | null | NONE | null | #27237 Solved ✅
I suggest a fix for the case where ``lags_sequence=[0]``. In that case, the length of the past context sequence should match the overall sequence length. Let me clarify.
The issue stems from line `1230` of `modeling_time_series_transformer.py`. When a lag is 0, the computed index becomes -1, so only one data point is lagged, which creates a length mismatch when using `model.generate()`. For example, with a context size of 48, a lag of `[0]` produces 49, which is unsuitable for `model.generate()`.
```python
sequence_length = sequence.shape[1]
indices = [lag - shift for lag in self.config.lags_sequence]
if max(indices) + subsequences_length > sequence_length:
raise ValueError(
f"lags cannot go further than history length, found lag {max(indices)} "
f"while history length is only {sequence_length}"
)
```
We can modify the code as shown below to clamp negative indices to 0 ✅:
```python
sequence_length = sequence.shape[1]
# (Khalid Oublal) -> addressed the issue regarding the scenario where lag equals 0.
# The previous implementation was: indices = [lag - shift for lag in self.config.lags_sequence]
indices = [lag - shift if lag > 0 else 0 for lag in self.config.lags_sequence]
if max(indices) + subsequences_length > sequence_length:
raise ValueError(
f"lags cannot go further than history length, found lag {max(indices)} "
f"while history length is only {sequence_length}"
)
```
### Check DataLoader
In the analysis below, it's evident that there are no lags, as indicated by `lags_sequence=[0]`. The length of the context in this batch matches the provided context length.

### Confirming Training Status
Below, it's apparent that the training is progressing smoothly. Some additional print statements were added to verify that the lags are 0, implying the indices should be `[0]`.

### Generating with `model.generate()`
Now, with `lags_sequence=[0]`, we observe that predictions can be made without any issues.

Best,
khalid oublal | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28766/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28766",
"html_url": "https://github.com/huggingface/transformers/pull/28766",
"diff_url": "https://github.com/huggingface/transformers/pull/28766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28766.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28765/comments | https://api.github.com/repos/huggingface/transformers/issues/28765/events | https://github.com/huggingface/transformers/pull/28765 | 2,106,444,478 | PR_kwDOCUB6oc5lYIGC | 28,765 | Added model Sigma-MoE | {
"login": "jubueche",
"id": 30778073,
"node_id": "MDQ6VXNlcjMwNzc4MDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/30778073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jubueche",
"html_url": "https://github.com/jubueche",
"followers_url": "https://api.github.com/users/jubueche/followers",
"following_url": "https://api.github.com/users/jubueche/following{/other_user}",
"gists_url": "https://api.github.com/users/jubueche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jubueche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jubueche/subscriptions",
"organizations_url": "https://api.github.com/users/jubueche/orgs",
"repos_url": "https://api.github.com/users/jubueche/repos",
"events_url": "https://api.github.com/users/jubueche/events{/privacy}",
"received_events_url": "https://api.github.com/users/jubueche/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 10 | 2024-01-29T21:14:40 | 2024-01-31T22:03:32 | null | NONE | null | # What does this PR do?
Added model from "[Approximating Two-Layer Feedforward Networks for Efficient Transformers](https://openreview.net/pdf?id=zM3mlyflTt)" and replicated experiments on WikiText-103.
Sigma-MoE differs from the conventional Switch-like architecture in the initialisation of the expert weights, the routing function, and the load-balancing loss. Using the sigmoid function instead of the softmax avoids competition between the experts and leads to better training behaviour. Furthermore, Sigma-MoE employs a load-balancing loss that simply uses the entropy of the router outputs as a regulariser.
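To make the routing difference concrete, here is a hedged sketch of sigmoid routing with an entropy-based load-balancing term; the shapes, names, and exact form of the regulariser are assumptions, not the PR's actual code:

```python
import torch
import torch.nn.functional as F


def sigma_moe_route(hidden, router_weight, k=4, eps=1e-9):
    """Sketch: sigmoid gives each expert an independent score in (0, 1),
    so experts do not compete as they would under a softmax."""
    logits = hidden @ router_weight                # (num_tokens, num_experts)
    scores = torch.sigmoid(logits)                 # independent gate per expert
    topk_scores, topk_idx = scores.topk(k, dim=-1)

    # Entropy regulariser over the average routing distribution; minimizing
    # this (the negative entropy) encourages balanced expert usage.
    probs = F.softmax(logits, dim=-1).mean(dim=0)
    balance_loss = (probs * (probs + eps).log()).sum()
    return topk_scores, topk_idx, balance_loss
```

Whether the entropy is taken per token or over the batch average is a design choice; the paper's exact formulation should be checked before relying on this sketch.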
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28765/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28765",
"html_url": "https://github.com/huggingface/transformers/pull/28765",
"diff_url": "https://github.com/huggingface/transformers/pull/28765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28765.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28764/comments | https://api.github.com/repos/huggingface/transformers/issues/28764/events | https://github.com/huggingface/transformers/pull/28764 | 2,106,228,159 | PR_kwDOCUB6oc5lXY1T | 28,764 | Add tip on setting tokenizer attributes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-29T19:08:04 | 2024-01-31T18:04:58 | null | MEMBER | null | This PR adds a quick tip to the chat template docs on setting tokenizer attributes (after some discussion on Slack) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28764/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28764",
"html_url": "https://github.com/huggingface/transformers/pull/28764",
"diff_url": "https://github.com/huggingface/transformers/pull/28764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28764.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28763/comments | https://api.github.com/repos/huggingface/transformers/issues/28763/events | https://github.com/huggingface/transformers/issues/28763 | 2,106,179,792 | I_kwDOCUB6oc59icDQ | 28,763 | Allow setting different decoder_start_token_ids for each item in a batch in the generate function. | {
"login": "dpernes",
"id": 25008929,
"node_id": "MDQ6VXNlcjI1MDA4OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/25008929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpernes",
"html_url": "https://github.com/dpernes",
"followers_url": "https://api.github.com/users/dpernes/followers",
"following_url": "https://api.github.com/users/dpernes/following{/other_user}",
"gists_url": "https://api.github.com/users/dpernes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dpernes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpernes/subscriptions",
"organizations_url": "https://api.github.com/users/dpernes/orgs",
"repos_url": "https://api.github.com/users/dpernes/repos",
"events_url": "https://api.github.com/users/dpernes/events{/privacy}",
"received_events_url": "https://api.github.com/users/dpernes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 0 | 2024-01-29T18:38:41 | 2024-01-30T09:36:39 | null | NONE | null | ### Feature request
@gante
The `generate` function has a `decoder_start_token_id` argument that allows the specification of the decoder start token when generating from an encoder-decoder model (e.g. mT5). Currently, `decoder_start_token_id` must be an integer, which means that the same start token is used for all elements in the batch. I request that you allow the specification of different start tokens for each element of the batch. For this purpose, `decoder_start_token_id` must be a tensor with shape `(batch_size,)`.
### Motivation
Some multilingual encoder-decoder models use the `decoder_start_token_id` to indicate the target language. Thus, this change would allow generation into multiple target languages in parallel, as illustrated in the code below.
### Your contribution
```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2m_crossSum_enhanced"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
get_lang_id = lambda lang: tokenizer._convert_token_to_id(
model.config.task_specific_params["langid_map"][lang][1]
)
target_langs = ["portuguese", "spanish"]
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
input_ids = input_ids.expand(len(target_langs), -1) # shape (num_target_languages, num_input_tokens)
decoder_start_token_id = torch.tensor(
[get_lang_id(t) for t in target_langs],
dtype=input_ids.dtype,
device=input_ids.device
) # shape (num_target_languages,)
output_ids = model.generate(
input_ids=input_ids,
decoder_start_token_id=decoder_start_token_id,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4,
)
summaries = tokenizer.batch_decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summaries)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28763/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28762/comments | https://api.github.com/repos/huggingface/transformers/issues/28762/events | https://github.com/huggingface/transformers/issues/28762 | 2,106,086,540 | I_kwDOCUB6oc59iFSM | 28,762 | Update the default depth estimation model of the pipeline | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-29T17:41:53 | 2024-01-31T18:16:39 | null | CONTRIBUTOR | null | ### Feature request
The current depth estimation pipeline leverages https://huggingface.co/Intel/dpt-large by default.
However, with recent models like Depth Anything, it might make sense to update the default model.
For reference, DPT-large (also called DPT 3.0) was released in 2021; we do have better models in 2024 :)
### Motivation
Having better depth estimation models by default would be great.
### Your contribution
I could contribute this, but let's perhaps first discuss | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28762/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28761/comments | https://api.github.com/repos/huggingface/transformers/issues/28761/events | https://github.com/huggingface/transformers/issues/28761 | 2,106,008,347 | I_kwDOCUB6oc59hyMb | 28,761 | requests.exceptions.SSLError: HTTPSConnectionPool(host='api-inference.huggingface.co', port=443) | {
"login": "shoang22",
"id": 54875725,
"node_id": "MDQ6VXNlcjU0ODc1NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/54875725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoang22",
"html_url": "https://github.com/shoang22",
"followers_url": "https://api.github.com/users/shoang22/followers",
"following_url": "https://api.github.com/users/shoang22/following{/other_user}",
"gists_url": "https://api.github.com/users/shoang22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoang22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoang22/subscriptions",
"organizations_url": "https://api.github.com/users/shoang22/orgs",
"repos_url": "https://api.github.com/users/shoang22/repos",
"events_url": "https://api.github.com/users/shoang22/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoang22/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-29T17:08:37 | 2024-01-29T18:07:08 | 2024-01-29T18:07:08 | NONE | null | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.4.0-1103-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0 but is ignored because of PyTorch version too old.
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @SunMarc
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running an example in the [documentation](https://huggingface.co/blog/getting-started-with-embeddings) produces the following error:
```
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api-inference.huggingface.co', port=443): Max retries exceeded with url: /pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f7c78487cd0>: Failed to establish a new connection: [Errno 111] Connection refused')))
```
I tried installing `requests==2.27.1` as described [here](https://github.com/huggingface/transformers/issues/17611), but I'm still getting the same error.
Code to reproduce error:
```python
import os
import requests
os.environ['CURL_CA_BUNDLE'] = ''
os.environ['HTTP_PROXY'] = "http://127.0.0.1:7890"
os.environ['HTTPS_PROXY'] = "http://127.0.0.1:7890"
os.environ['ALL_PROXY'] = "socks5://127.0.0.1:7890"
hf_token = os.environ["HUGGINGFACE_TOKEN"]
model_id = "sentence-transformers/all-MiniLM-L6-v2"
api_url = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}"
headers = {"Authorization": f"Bearer {hf_token}"}
def query(texts):
response = requests.post(api_url, headers=headers, json={"inputs": texts, "options":{"wait_for_model":True}})
return response.json()
texts = ["How do I get a replacement Medicare card?",
"What is the monthly premium for Medicare Part B?",
"How do I terminate my Medicare Part B (medical insurance)?",
"How do I sign up for Medicare?",
"Can I sign up for Medicare Part B if I am working and have health insurance through an employer?",
"How do I sign up for Medicare Part B if I already have Part A?",
"What are Medicare late enrollment penalties?",
"What is Medicare and who can get it?",
"How can I get help with my Medicare Part A and Part B premiums?",
"What are the different parts of Medicare?",
"Will my Medicare premiums be higher because of my higher income?",
"What is TRICARE ?",
"Should I sign up for Medicare Part B if I have Veterans' Benefits?"]
output = query(texts)
```
### Expected behavior
Produce embeddings | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28761/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28760/comments | https://api.github.com/repos/huggingface/transformers/issues/28760/events | https://github.com/huggingface/transformers/pull/28760 | 2,105,891,568 | PR_kwDOCUB6oc5lWN1x | 28,760 | DeepSpeed: hardcode `torch.arange` dtype on `float` usage to avoid incorrect initialization | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-29T16:32:54 | 2024-01-31T14:39:11 | 2024-01-31T14:39:08 | MEMBER | null | # What does this PR do?
Addresses #28685 -- check the issue (and related issues) for a full discussion.
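The int64-first pattern this PR applies can be sketched like this (an illustrative rotary-style helper, not the actual diff):

```python
import torch


def make_inv_freq(dim: int, base: float = 10000.0, dtype=torch.float32):
    # Create the arange as int64 first: a framework that patches float tensor
    # creation (e.g. DeepSpeed ZeRO-3 under bf16) cannot alter an integer
    # tensor, so the exact values survive. Only then cast to the float dtype.
    positions = torch.arange(0, dim, 2, dtype=torch.int64)
    return 1.0 / (base ** (positions.to(dtype) / dim))
```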
TL;DR: some frameworks, such as DeepSpeed, may patch tensor initialization. For instance, a `float32` tensor may be initialized as `bfloat16` instead. This is particularly problematic when `torch.arange` is used with a non-integer dtype: its initialized values may be a source of problems if they are not in the right type. This PR casts `torch.arange` to `int64` at initialization time, preventing the frameworks' float type conversion, and subsequently casts the tensor to the desired type. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28760/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28760",
"html_url": "https://github.com/huggingface/transformers/pull/28760",
"diff_url": "https://github.com/huggingface/transformers/pull/28760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28760.patch",
"merged_at": "2024-01-31T14:39:08"
} |
https://api.github.com/repos/huggingface/transformers/issues/28759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28759/comments | https://api.github.com/repos/huggingface/transformers/issues/28759/events | https://github.com/huggingface/transformers/pull/28759 | 2,105,665,598 | PR_kwDOCUB6oc5lVcep | 28,759 | fix num_assistant_tokens with heuristic schedule | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-29T14:46:35 | 2024-01-30T09:26:04 | null | NONE | null | # What does this PR do?
We have defined 2 different `num_assistant_tokens_schedule` values:
- `heuristic`: when all _speculative_ tokens are correct, increase `num_assistant_tokens` by 2; otherwise reduce it by 1. The `num_assistant_tokens` value persists over multiple generation calls with the same assistant model.
- `heuristic_transient`: same as `heuristic`, but `num_assistant_tokens` is reset to its initial value after each generation call.
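The update rule described above can be sketched as follows (an illustrative re-statement of the schedule, not the actual `transformers` implementation; the function name and the floor of 1 are assumptions):

```python
def update_num_assistant_tokens(current, num_matches, num_speculated):
    # "heuristic" schedule: if every speculated token was accepted,
    # grow the speculation window by 2; otherwise shrink it by 1,
    # never going below 1 (assumed floor).
    if num_matches == num_speculated:
        return current + 2
    return max(1, current - 1)
```

Under `heuristic` the returned value carries over to the next generation call; under `heuristic_transient` it is discarded and the initial value is restored.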
Fixes # (issue)
https://github.com/huggingface/transformers/pull/27979#issuecomment-1908153882
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@gante @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28759/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28759",
"html_url": "https://github.com/huggingface/transformers/pull/28759",
"diff_url": "https://github.com/huggingface/transformers/pull/28759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28759.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28758/comments | https://api.github.com/repos/huggingface/transformers/issues/28758/events | https://github.com/huggingface/transformers/pull/28758 | 2,105,636,363 | PR_kwDOCUB6oc5lVWBb | 28,758 | Pin pytest version <8.0.0 | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-29T14:33:38 | 2024-01-29T15:22:30 | 2024-01-29T15:22:14 | COLLABORATOR | null | # What does this PR do?
pytest released a new major version, 8.0.0, two days ago: https://pypi.org/project/pytest/8.0.0/
This breaks doctest runs on CI e.g. https://app.circleci.com/pipelines/github/huggingface/transformers/83241/workflows/7ca5119f-b434-4c93-89fb-28378e63c449/jobs/1073188
Pinning until we make our doctests compatible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28758/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28758",
"html_url": "https://github.com/huggingface/transformers/pull/28758",
"diff_url": "https://github.com/huggingface/transformers/pull/28758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28758.patch",
"merged_at": "2024-01-29T15:22:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28757/comments | https://api.github.com/repos/huggingface/transformers/issues/28757/events | https://github.com/huggingface/transformers/pull/28757 | 2,105,570,960 | PR_kwDOCUB6oc5lVHpd | 28,757 | Mark test_constrained_beam_search_generate as flaky | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-29T14:01:45 | 2024-01-29T15:22:29 | 2024-01-29T15:22:25 | COLLABORATOR | null | # What does this PR do?
Test occasionally fails on CI runs e.g. https://app.circleci.com/pipelines/github/huggingface/transformers/83241/workflows/6cb424b9-229b-412f-abfd-71cc6cfc7392/jobs/1073186/tests#failed-test-0
Marking as flaky to trigger retries to help prevent failing CI runs on unrelated PRs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28757/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28757",
"html_url": "https://github.com/huggingface/transformers/pull/28757",
"diff_url": "https://github.com/huggingface/transformers/pull/28757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28757.patch",
"merged_at": "2024-01-29T15:22:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28756/comments | https://api.github.com/repos/huggingface/transformers/issues/28756/events | https://github.com/huggingface/transformers/pull/28756 | 2,105,537,006 | PR_kwDOCUB6oc5lVAJt | 28,756 | Workaround for #27758 to avoid ZeroDivisionError | {
"login": "tleyden",
"id": 296876,
"node_id": "MDQ6VXNlcjI5Njg3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/296876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tleyden",
"html_url": "https://github.com/tleyden",
"followers_url": "https://api.github.com/users/tleyden/followers",
"following_url": "https://api.github.com/users/tleyden/following{/other_user}",
"gists_url": "https://api.github.com/users/tleyden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tleyden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tleyden/subscriptions",
"organizations_url": "https://api.github.com/users/tleyden/orgs",
"repos_url": "https://api.github.com/users/tleyden/repos",
"events_url": "https://api.github.com/users/tleyden/events{/privacy}",
"received_events_url": "https://api.github.com/users/tleyden/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-29T13:45:23 | 2024-01-30T10:20:29 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
It can speed up dev loops to test with very small datasets, which can end up being a single batch. However, that can trigger the error described in #27758.
This PR works around it by changing the division by zero into division by a very small number. The loss metric is already meaningless if `self.state.global_step == 0`; this PR won't change that, but it will prevent the unhelpful `ZeroDivisionError`.
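A minimal sketch of the workaround (a hypothetical helper — the names and epsilon value are illustrative, not the exact `Trainer` code):

```python
def average_loss(tr_loss_sum, global_step, eps=1e-9):
    # Dividing by max(global_step, eps) instead of global_step avoids a
    # ZeroDivisionError when no optimizer step has happened yet (e.g. a
    # tiny dataset that produces a single batch). The resulting value is
    # meaningless in that case -- exactly as it was before -- but it no
    # longer crashes the run.
    return tr_loss_sum / max(global_step, eps)
```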
I have not written any tests yet, but would be happy to if the reviewers agree with the overall approach.
<!-- Remove if not applicable -->
Fixes #27758
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28756/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28756",
"html_url": "https://github.com/huggingface/transformers/pull/28756",
"diff_url": "https://github.com/huggingface/transformers/pull/28756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28756.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28755/comments | https://api.github.com/repos/huggingface/transformers/issues/28755/events | https://github.com/huggingface/transformers/pull/28755 | 2,105,467,429 | PR_kwDOCUB6oc5lUwxp | 28,755 | Expose `offload_buffers` parameter of `accelerate` to `PreTrainedModel.from_pretrained` method | {
"login": "notsyncing",
"id": 2649806,
"node_id": "MDQ6VXNlcjI2NDk4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2649806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notsyncing",
"html_url": "https://github.com/notsyncing",
"followers_url": "https://api.github.com/users/notsyncing/followers",
"following_url": "https://api.github.com/users/notsyncing/following{/other_user}",
"gists_url": "https://api.github.com/users/notsyncing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notsyncing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notsyncing/subscriptions",
"organizations_url": "https://api.github.com/users/notsyncing/orgs",
"repos_url": "https://api.github.com/users/notsyncing/repos",
"events_url": "https://api.github.com/users/notsyncing/events{/privacy}",
"received_events_url": "https://api.github.com/users/notsyncing/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-29T13:09:53 | 2024-01-30T09:13:19 | null | NONE | null | # What does this PR do?
This PR exposes the `offload_buffers` parameter of `accelerate`'s `dispatch_model` to `PreTrainedModel.from_pretrained`, so that code using this parameter becomes simpler:
```python
config = AutoConfig.from_pretrained(model_id)
with accelerate.init_empty_weights():
empty_model = AutoModelForCausalLM.from_config(config)
device_map = accelerate.infer_auto_device_map(empty_model, max_memory={0: "4GB", "cpu": "128GB"}, dtype=torch.bfloat16, no_split_module_classes=["LlamaDecoderLayer"])
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)
```
now simplifies to
```python
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "4GB", "cpu": "128GB"}, torch_dtype=torch.bfloat16, offload_buffers=True)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28755/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28755",
"html_url": "https://github.com/huggingface/transformers/pull/28755",
"diff_url": "https://github.com/huggingface/transformers/pull/28755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28755.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28754/comments | https://api.github.com/repos/huggingface/transformers/issues/28754/events | https://github.com/huggingface/transformers/pull/28754 | 2,105,365,550 | PR_kwDOCUB6oc5lUaJH | 28,754 | Fix max_position_embeddings default value for llama2 to 4096 #28241 | {
"login": "karl-hajjar",
"id": 24575019,
"node_id": "MDQ6VXNlcjI0NTc1MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/24575019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karl-hajjar",
"html_url": "https://github.com/karl-hajjar",
"followers_url": "https://api.github.com/users/karl-hajjar/followers",
"following_url": "https://api.github.com/users/karl-hajjar/following{/other_user}",
"gists_url": "https://api.github.com/users/karl-hajjar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karl-hajjar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karl-hajjar/subscriptions",
"organizations_url": "https://api.github.com/users/karl-hajjar/orgs",
"repos_url": "https://api.github.com/users/karl-hajjar/repos",
"events_url": "https://api.github.com/users/karl-hajjar/events{/privacy}",
"received_events_url": "https://api.github.com/users/karl-hajjar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-29T12:18:01 | 2024-01-31T01:44:09 | null | NONE | null | This PR fixes issue #28241 related the config of llama which has max_position_embeddings=2048 by default since this was the case for llama1, but the newest version llama2 should have max_position_embeddings=4096 by default, hence the fix.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28754/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28754",
"html_url": "https://github.com/huggingface/transformers/pull/28754",
"diff_url": "https://github.com/huggingface/transformers/pull/28754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28754.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28753/comments | https://api.github.com/repos/huggingface/transformers/issues/28753/events | https://github.com/huggingface/transformers/issues/28753 | 2,105,355,793 | I_kwDOCUB6oc59fS4R | 28,753 | Adding CrossMAE | {
"login": "johko",
"id": 2843485,
"node_id": "MDQ6VXNlcjI4NDM0ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2843485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johko",
"html_url": "https://github.com/johko",
"followers_url": "https://api.github.com/users/johko/followers",
"following_url": "https://api.github.com/users/johko/following{/other_user}",
"gists_url": "https://api.github.com/users/johko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johko/subscriptions",
"organizations_url": "https://api.github.com/users/johko/orgs",
"repos_url": "https://api.github.com/users/johko/repos",
"events_url": "https://api.github.com/users/johko/events{/privacy}",
"received_events_url": "https://api.github.com/users/johko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 1 | 2024-01-29T12:12:40 | 2024-01-29T19:09:15 | null | CONTRIBUTOR | null | ### Model description
Hey,
the recently released [CrossMAE](https://crossmae.github.io/) seems like it would be a nice addition to transformers.
Basically, the model improves on MAE by using cross-attention instead of self-attention over the tokens, thereby decreasing the required FLOPs quite significantly. At the same time it seems to keep, or even slightly improve on, MAE's performance.
Maybe there are already plans to integrate it, @NielsRogge?
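For intuition, the decoder-side cross-attention can be sketched in plain Python (single head, no learned projections — purely illustrative, not the CrossMAE code):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    # In CrossMAE's setup, queries come only from the masked tokens being
    # reconstructed, while keys/values come from the visible tokens, so the
    # score matrix is (num_masked x num_visible) rather than the full
    # (sequence x sequence) of self-attention -- the source of the FLOP savings.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```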
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Project Page: https://crossmae.github.io/
GitHub Repo: https://github.com/TonyLianLong/CrossMAE
Paper: https://arxiv.org/pdf/2401.14391.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28753/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28752/comments | https://api.github.com/repos/huggingface/transformers/issues/28752/events | https://github.com/huggingface/transformers/issues/28752 | 2,104,370,311 | I_kwDOCUB6oc59biSH | 28,752 | Seq2SeqTrainingArguments.__init__() got an unexpected keyword argument 'save_only_model' | {
"login": "nic-olo",
"id": 89006260,
"node_id": "MDQ6VXNlcjg5MDA2MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/89006260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nic-olo",
"html_url": "https://github.com/nic-olo",
"followers_url": "https://api.github.com/users/nic-olo/followers",
"following_url": "https://api.github.com/users/nic-olo/following{/other_user}",
"gists_url": "https://api.github.com/users/nic-olo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nic-olo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nic-olo/subscriptions",
"organizations_url": "https://api.github.com/users/nic-olo/orgs",
"repos_url": "https://api.github.com/users/nic-olo/repos",
"events_url": "https://api.github.com/users/nic-olo/events{/privacy}",
"received_events_url": "https://api.github.com/users/nic-olo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-28T22:09:00 | 2024-01-28T22:13:19 | 2024-01-28T22:13:19 | NONE | null | ### System Info

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
training_args = Seq2SeqTrainingArguments(
output_dir=DIR,
# Parameters
per_device_train_batch_size=hyperparameters["batch_size"],
per_device_eval_batch_size=hyperparameters["batch_size"],
learning_rate=hyperparameters["learning_rate"],
weight_decay=hyperparameters["weight_decay"],
num_train_epochs=hyperparameters["nb_epochs"],
fp16=False,
optim="adamw_torch",
# Logging
logging_dir=f"{DIR}/training_logs",
logging_strategy="epoch",
# report_to=["wandb", "tensorboard"],
report_to=["tensorboard"],
# Saving
save_strategy="epoch",
# Evaluating
evaluation_strategy="epoch",
predict_with_generate=True,
generation_max_length=550,
generation_num_beams=3,
save_safetensors=True,
save_total_limit=1,
# metric_for_best_model='eval_loss',
load_best_model_at_end=True,
save_only_model=True,
# metric_for_best_model="Weighted_comb",
# greater_is_better=True,
)
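This error typically means the installed `transformers` release predates the `save_only_model` argument. One way to guard against it (a hypothetical helper, not part of the library) is a signature check:

```python
import inspect

def accepts_kwarg(fn, name):
    # True if `fn` can take a keyword argument `name`, either explicitly
    # or via **kwargs. Useful for dropping newer TrainingArguments fields
    # (such as save_only_model) when the installed version may lack them.
    params = inspect.signature(fn).parameters
    if name in params:
        return True
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())
```

Upgrading `transformers` to a release that includes `save_only_model` is of course the real fix.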
### Expected behavior | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28752/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28751/comments | https://api.github.com/repos/huggingface/transformers/issues/28751/events | https://github.com/huggingface/transformers/pull/28751 | 2,104,191,054 | PR_kwDOCUB6oc5lQd6U | 28,751 | [Docs] Fix Typo in English & Japanese CLIP Model Documentation (TMBD -> TMDB) | {
"login": "Vinyzu",
"id": 50874994,
"node_id": "MDQ6VXNlcjUwODc0OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/50874994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vinyzu",
"html_url": "https://github.com/Vinyzu",
"followers_url": "https://api.github.com/users/Vinyzu/followers",
"following_url": "https://api.github.com/users/Vinyzu/following{/other_user}",
"gists_url": "https://api.github.com/users/Vinyzu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vinyzu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vinyzu/subscriptions",
"organizations_url": "https://api.github.com/users/Vinyzu/orgs",
"repos_url": "https://api.github.com/users/Vinyzu/repos",
"events_url": "https://api.github.com/users/Vinyzu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vinyzu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-28T14:29:42 | 2024-01-29T10:06:52 | 2024-01-29T10:06:52 | CONTRIBUTOR | null | Fixes Typo in TMBD to TMDB (for "TheMovieDatabase")
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28751/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28751",
"html_url": "https://github.com/huggingface/transformers/pull/28751",
"diff_url": "https://github.com/huggingface/transformers/pull/28751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28751.patch",
"merged_at": "2024-01-29T10:06:52"
} |
https://api.github.com/repos/huggingface/transformers/issues/28750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28750/comments | https://api.github.com/repos/huggingface/transformers/issues/28750/events | https://github.com/huggingface/transformers/pull/28750 | 2,104,165,406 | PR_kwDOCUB6oc5lQYhk | 28,750 | Fix the StarCoder agent max_new_tokens input validation error | {
"login": "dashapetr",
"id": 54349415,
"node_id": "MDQ6VXNlcjU0MzQ5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/54349415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dashapetr",
"html_url": "https://github.com/dashapetr",
"followers_url": "https://api.github.com/users/dashapetr/followers",
"following_url": "https://api.github.com/users/dashapetr/following{/other_user}",
"gists_url": "https://api.github.com/users/dashapetr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dashapetr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dashapetr/subscriptions",
"organizations_url": "https://api.github.com/users/dashapetr/orgs",
"repos_url": "https://api.github.com/users/dashapetr/repos",
"events_url": "https://api.github.com/users/dashapetr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dashapetr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-28T13:31:11 | 2024-01-31T16:50:09 | null | NONE | null | Committer: Darya Petrashka <dashacheb15@gmail.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28523
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
link: https://github.com/huggingface/transformers/issues/28523
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28750/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28750",
"html_url": "https://github.com/huggingface/transformers/pull/28750",
"diff_url": "https://github.com/huggingface/transformers/pull/28750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28750.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28749/comments | https://api.github.com/repos/huggingface/transformers/issues/28749/events | https://github.com/huggingface/transformers/issues/28749 | 2,104,111,121 | I_kwDOCUB6oc59ajAR | 28,749 | combining feature from two pre-train model in transformer then passing them into a classifer | {
"login": "Arwa491",
"id": 117582570,
"node_id": "U_kgDOBwIq6g",
"avatar_url": "https://avatars.githubusercontent.com/u/117582570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arwa491",
"html_url": "https://github.com/Arwa491",
"followers_url": "https://api.github.com/users/Arwa491/followers",
"following_url": "https://api.github.com/users/Arwa491/following{/other_user}",
"gists_url": "https://api.github.com/users/Arwa491/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arwa491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arwa491/subscriptions",
"organizations_url": "https://api.github.com/users/Arwa491/orgs",
"repos_url": "https://api.github.com/users/Arwa491/repos",
"events_url": "https://api.github.com/users/Arwa491/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arwa491/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-28T11:21:50 | 2024-01-28T15:37:30 | null | NONE | null | I'm trying to extract features using two different pre-trained models, combine these features into one vector, and then pass this vector into a classifier for final classification.
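The setup described — two pretrained feature extractors whose pooled outputs are concatenated and fed to a classification head — can be sketched as below. The encoders here are toy `nn.Linear` stand-ins for the actual Hugging Face `AutoModel` backbones, and all names and dimensions are illustrative, not from the issue:

```python
import torch
import torch.nn as nn


class DualEncoderClassifier(nn.Module):
    """Concatenate pooled features from two pretrained encoders, then classify."""

    def __init__(self, encoder_a, encoder_b, dim_a, dim_b, num_labels):
        super().__init__()
        self.encoder_a = encoder_a  # e.g. a frozen AutoModel backbone
        self.encoder_b = encoder_b  # e.g. a second, different backbone
        self.classifier = nn.Linear(dim_a + dim_b, num_labels)

    def forward(self, xa, xb):
        # With real HF models you would pool here, e.g. outputs.last_hidden_state.mean(dim=1)
        feat_a = self.encoder_a(xa)
        feat_b = self.encoder_b(xb)
        fused = torch.cat([feat_a, feat_b], dim=-1)  # one combined feature vector
        return self.classifier(fused)


# toy stand-ins for the two pretrained feature extractors
model = DualEncoderClassifier(nn.Linear(16, 8), nn.Linear(10, 4), 8, 4, num_labels=3)
logits = model(torch.randn(2, 16), torch.randn(2, 10))
```

In practice the two encoders would be loaded with `AutoModel.from_pretrained(...)` and the classification head trained on top of their (optionally frozen) features.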
Is that possible using Hugging Face pre-trained models? I have already trained the models on my data and uploaded them to Hugging Face. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28749/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28748/comments | https://api.github.com/repos/huggingface/transformers/issues/28748/events | https://github.com/huggingface/transformers/issues/28748 | 2,104,076,485 | I_kwDOCUB6oc59aajF | 28,748 | RuntimeError: CUDA error: device-side assert triggered | {
"login": "KaifAhmad1",
"id": 98801504,
"node_id": "U_kgDOBeOXYA",
"avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaifAhmad1",
"html_url": "https://github.com/KaifAhmad1",
"followers_url": "https://api.github.com/users/KaifAhmad1/followers",
"following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}",
"gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions",
"organizations_url": "https://api.github.com/users/KaifAhmad1/orgs",
"repos_url": "https://api.github.com/users/KaifAhmad1/repos",
"events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaifAhmad1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-28T09:48:58 | 2024-01-31T09:17:53 | null | NONE | null | ### System Info
``` YAML
OS: Windows 11
Driver Version: 532.09
CUDA Version: 12.1
bitsandbytes: 0.42.0
transformers: 4.37.1
trl: 0.7.10
torch: 2.1.0+cu121
peft: 0.7.1
optimum: 1.16.2
einops: 0.7.0
```
I am using a Google Colab T4 GPU for fine-tuning `mistralai/Mistral-7B-v0.1` on a custom dataset
### Who can help?
Hey, @ArthurZucker @muellerzr @SunMarc — please help with this issue.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` Python
# Configuration for quantization
compute_dtype = getattr(torch, "bfloat16")
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
)
```
``` Python
model = AutoModelForCausalLM.from_pretrained(model_name,
quantization_config=bnb_config,
device_map="auto",
use_cache=False,
trust_remote_code=True,
low_cpu_mem_usage=True
)
```
``` Python
trainer.train()
```
Here are the error details:
```
RuntimeError                              Traceback (most recent call last)
[<ipython-input-88-6b7a202bc4f8>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model = AutoModelForCausalLM.from_pretrained(model_name,
2 quantization_config=bnb_config,
3 device_map="auto",
4 use_cache=False,
5 trust_remote_code=True,
3 frames
[/usr/local/lib/python3.10/dist-packages/accelerate/utils/modeling.py](https://localhost:8080/#) in get_max_memory(max_memory)
718 else:
719 for i in range(torch.cuda.device_count()):
--> 720 _ = torch.tensor([0], device=i)
721 max_memory = {i: torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())}
722 # allocate everything in the mps device as the RAM is shared
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
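As the error message itself suggests, a first debugging step (a general CUDA-debugging technique, not a fix specific to this issue) is to make kernel launches synchronous so the device-side assert is reported at the call that actually triggered it:

```python
import os

# Must be set before importing torch / touching CUDA in the process.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

In Colab this typically means setting the variable (or using `%env CUDA_LAUNCH_BLOCKING=1`) in the first cell and restarting the runtime, since it has no effect once CUDA has already been initialized.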
### Expected behavior
This cell should run without raising any errors.
Use this Colab notebook as a reference for better understanding: https://colab.research.google.com/drive/1rPYEVeXVlRrR3_oM1odaIuL60R9-jm6j?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28748/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28747/comments | https://api.github.com/repos/huggingface/transformers/issues/28747/events | https://github.com/huggingface/transformers/issues/28747 | 2,104,070,205 | I_kwDOCUB6oc59aZA9 | 28,747 | In the RoPE paper they don't compute actual softmax | {
"login": "wendlerc",
"id": 8095362,
"node_id": "MDQ6VXNlcjgwOTUzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8095362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wendlerc",
"html_url": "https://github.com/wendlerc",
"followers_url": "https://api.github.com/users/wendlerc/followers",
"following_url": "https://api.github.com/users/wendlerc/following{/other_user}",
"gists_url": "https://api.github.com/users/wendlerc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wendlerc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wendlerc/subscriptions",
"organizations_url": "https://api.github.com/users/wendlerc/orgs",
"repos_url": "https://api.github.com/users/wendlerc/repos",
"events_url": "https://api.github.com/users/wendlerc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wendlerc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-28T09:32:38 | 2024-01-28T10:02:19 | 2024-01-28T10:02:19 | NONE | null | Is this a feature or a bug?
https://github.com/huggingface/transformers/blob/03cc17775b961d16cc4d0d7ab0c8487120d0b708/src/transformers/models/llama/modeling_llama.py#L429C9-L429C22
In RoPE paper equation 19: https://arxiv.org/pdf/2104.09864.pdf they don't compute the actual softmax.
Best,
Chris | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28747/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28746/comments | https://api.github.com/repos/huggingface/transformers/issues/28746/events | https://github.com/huggingface/transformers/pull/28746 | 2,104,069,320 | PR_kwDOCUB6oc5lQEes | 28,746 | Resolve DeepSpeed cannot resume training with PeftModel | {
"login": "lh0x00",
"id": 9839768,
"node_id": "MDQ6VXNlcjk4Mzk3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9839768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lh0x00",
"html_url": "https://github.com/lh0x00",
"followers_url": "https://api.github.com/users/lh0x00/followers",
"following_url": "https://api.github.com/users/lh0x00/following{/other_user}",
"gists_url": "https://api.github.com/users/lh0x00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lh0x00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lh0x00/subscriptions",
"organizations_url": "https://api.github.com/users/lh0x00/orgs",
"repos_url": "https://api.github.com/users/lh0x00/repos",
"events_url": "https://api.github.com/users/lh0x00/events{/privacy}",
"received_events_url": "https://api.github.com/users/lh0x00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2024-01-28T09:30:01 | 2024-01-31T12:58:27 | 2024-01-31T12:58:27 | CONTRIBUTOR | null | Hi all,
I found an issue with resuming fine-tuning of a **PeftModel** with **DeepSpeed** while using `accelerate launch` to follow the [zephyr-7b-beta recipes](https://github.com/huggingface/alignment-handbook/tree/c74ed111710d57f563cfbf1806cfb8f07dd3dc67/recipes/zephyr-7b-beta) with **QLoRA**.
In detail, the process crashes when loading a resume checkpoint with **DeepSpeed** and a **PeftModel**. I referred to https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/559#issuecomment-1585948697 and created a PR to resolve this issue. I verified the change and it works correctly on my fork.
Thanks for review @pacman100 @muellerzr
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28746/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28746/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28746",
"html_url": "https://github.com/huggingface/transformers/pull/28746",
"diff_url": "https://github.com/huggingface/transformers/pull/28746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28746.patch",
"merged_at": "2024-01-31T12:58:27"
} |
https://api.github.com/repos/huggingface/transformers/issues/28745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28745/comments | https://api.github.com/repos/huggingface/transformers/issues/28745/events | https://github.com/huggingface/transformers/issues/28745 | 2,103,603,552 | I_kwDOCUB6oc59YnFg | 28,745 | Pydantic V2 support | {
"login": "FanaticPythoner",
"id": 45826736,
"node_id": "MDQ6VXNlcjQ1ODI2NzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/45826736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FanaticPythoner",
"html_url": "https://github.com/FanaticPythoner",
"followers_url": "https://api.github.com/users/FanaticPythoner/followers",
"following_url": "https://api.github.com/users/FanaticPythoner/following{/other_user}",
"gists_url": "https://api.github.com/users/FanaticPythoner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FanaticPythoner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FanaticPythoner/subscriptions",
"organizations_url": "https://api.github.com/users/FanaticPythoner/orgs",
"repos_url": "https://api.github.com/users/FanaticPythoner/repos",
"events_url": "https://api.github.com/users/FanaticPythoner/events{/privacy}",
"received_events_url": "https://api.github.com/users/FanaticPythoner/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-27T14:07:20 | 2024-01-28T15:44:41 | null | NONE | null | ### Feature request
Please migrate to the latest Pydantic version.
### Motivation
The current library appears to be incompatible with Pydantic Version 2. I find that being able to utilize new features, such as the `model_dump` function, would be highly beneficial. This function is particularly useful for ensuring robust data validation in a scenario like mine, where JSON serialization of a Pydantic model is required for an HTTP request body.
### Your contribution
I am not able to perform the migration myself due to lack of time. Let me know if you guys need screenshots/tracebacks of errors when attempting to use more recent versions of Pydantic. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28745/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28744/comments | https://api.github.com/repos/huggingface/transformers/issues/28744/events | https://github.com/huggingface/transformers/issues/28744 | 2,103,444,751 | I_kwDOCUB6oc59YAUP | 28,744 | Handling offload when calling AutoModelForCausalLM.from_pretrained() | {
"login": "YourSaDady",
"id": 99607923,
"node_id": "U_kgDOBe_lcw",
"avatar_url": "https://avatars.githubusercontent.com/u/99607923?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YourSaDady",
"html_url": "https://github.com/YourSaDady",
"followers_url": "https://api.github.com/users/YourSaDady/followers",
"following_url": "https://api.github.com/users/YourSaDady/following{/other_user}",
"gists_url": "https://api.github.com/users/YourSaDady/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YourSaDady/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YourSaDady/subscriptions",
"organizations_url": "https://api.github.com/users/YourSaDady/orgs",
"repos_url": "https://api.github.com/users/YourSaDady/repos",
"events_url": "https://api.github.com/users/YourSaDady/events{/privacy}",
"received_events_url": "https://api.github.com/users/YourSaDady/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-27T08:16:55 | 2024-01-30T10:20:40 | null | NONE | null | ### System Info
Transformers version: 4.33.3
Python version: 3.9.18
Platform: Win11 WSL
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. `from transformers import AutoTokenizer,AutoModelForCausalLM`
2. `model = AutoModelForCausalLM.from_pretrained(args.model,device_map='auto')`
3. Error occurs:
<img width="866" alt="error2" src="https://github.com/huggingface/transformers/assets/99607923/86201897-10a7-41b2-a1f8-3c961b198337">
### Expected behavior
Expected: the model should load without any errors.
I also tried to specify the offload folder by changing line 67 to `model = AutoModelForCausalLM.from_pretrained(args.model, device_map='auto', offload_folder='offload')`, but the program is killed without any error or traceback shown. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28744/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28743/comments | https://api.github.com/repos/huggingface/transformers/issues/28743/events | https://github.com/huggingface/transformers/issues/28743 | 2,103,112,940 | I_kwDOCUB6oc59WvTs | 28,743 | when running rag example, errors in the 'generating train split' step (wiki_dpr.py) | {
"login": "kiehls90",
"id": 101498700,
"node_id": "U_kgDOBgy_TA",
"avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiehls90",
"html_url": "https://github.com/kiehls90",
"followers_url": "https://api.github.com/users/kiehls90/followers",
"following_url": "https://api.github.com/users/kiehls90/following{/other_user}",
"gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions",
"organizations_url": "https://api.github.com/users/kiehls90/orgs",
"repos_url": "https://api.github.com/users/kiehls90/repos",
"events_url": "https://api.github.com/users/kiehls90/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiehls90/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-27T01:05:27 | 2024-01-29T15:48:24 | null | NONE | null | ### System Info
I'm trying to run a rag example, and the dataset is wiki_dpr.
The wiki_dpr download and extraction completed successfully.
However, at the "generating train split" stage, errors from wiki_dpr.py keep popping up.
Especially in "_generate_examples" :
1. The following error occurs in the line **id, text, title = line.strip().split("\t")**
ValueError: not enough values to unpack (expected 3, got 2)
-> I wrapped this part in exception handling so that even if an error occurs on a line, it is skipped.
2. **ID mismatch between lines {id} and vector {vec_id}**
This error seems to occur at the line " assert int(id) == int(vec_id),".
After I handled the exception in the split error, generating train split progressed to 80%, but an id mismatch error occurred at about the 16200000th vector id.
Debugging is even more difficult because downloading and splitting wiki_dpr takes a long time. I need help. Thank you in advance!
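The tolerant parsing described above — skipping rows that break the strict three-way unpack — can be sketched like this (a sketch of the workaround, not the actual wiki_dpr.py code):

```python
def parse_tsv_line(line: str):
    """Tolerant replacement for the strict `id, text, title = line.strip().split(...)`.

    Returns an (id, text, title) tuple, or None for malformed rows instead of
    raising ValueError when a row has fewer than three tab-separated fields.
    """
    parts = line.rstrip("\n").split("\t")
    if len(parts) < 3:
        return None  # skip rows with a missing field rather than crash the split
    # If the text itself contains tabs, keep them inside the middle field.
    return parts[0], "\t".join(parts[1:-1]), parts[-1]
```

This does not address the later `ID mismatch` assertion, though: skipping lines while still consuming vector ids shifts the alignment between passages and vectors, which is consistent with the mismatch showing up millions of rows later.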
Versions: Python 3.8; other packages are as listed in requirements.txt.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. running rag example: python examples/research_projects/rag/finetune_rag.py \
--data_dir $DATA_DIR \
--output_dir $OUTPUT_DIR \
--model_name_or_path $MODEL_NAME_OR_PATH \
--model_type rag_sequence \
--fp16 \
--gpus 8
2. after downloading and extracting wiki_dpr, then error occurs in "generating train split"
### Expected behavior
. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28743/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28743/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28742/comments | https://api.github.com/repos/huggingface/transformers/issues/28742/events | https://github.com/huggingface/transformers/issues/28742 | 2,103,109,910 | I_kwDOCUB6oc59WukW | 28,742 | safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization | {
"login": "tamanna-mostafa",
"id": 156403336,
"node_id": "U_kgDOCVKGiA",
"avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamanna-mostafa",
"html_url": "https://github.com/tamanna-mostafa",
"followers_url": "https://api.github.com/users/tamanna-mostafa/followers",
"following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}",
"gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions",
"organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs",
"repos_url": "https://api.github.com/users/tamanna-mostafa/repos",
"events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-27T00:58:40 | 2024-01-29T06:57:25 | null | NONE | null | ### System Info
transformers version: 4.35.2
Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31
Python version: 3.10.12
Huggingface_hub version: 0.20.2
Safetensors version: 0.4.1
Accelerate version: 0.26.1
Accelerate config: not found
PyTorch version (GPU?): 2.1.2+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
### Who can help?
@gante @Rocketknight1
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I ran supervised fine-tuning of the Mistral 7B model (with 32k preference data)
2. I ran DPO on the output of SFT
3. I ran the following code to load the DPO model and run docker:
```
model=/data/DPO_output_mistral_32k
volume=/mnt/efs/data/tammosta/files_t:/data
num_shard=8
docker run --gpus all --shm-size 1g -p 172.31.8.218:80:80 -v $volume ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model --num-shard $num_shard --max-input-length 4095 --max-total-tokens 12000
```
However, the docker run failed with the following error:
`OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/DPO_output_mistral_32k/None' for available files.`
5. Assuming I need to merge the LoRA adapters while loading the model, I ran the following command (the content of the script is also given below):
`python merge_peft_adaptors_gpu.py --base_model_name_or_path /mnt/efs/data/tammosta/files_t/output_sft_32k --peft_model_path /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k --output_dir /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k_merged --safe_serialization`
Here is the content of `merge_peft_adaptors_gpu.py`:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--base_model_name_or_path", type=str)
parser.add_argument("--peft_model_path", type=str)
parser.add_argument("--output_dir", type=str)
parser.add_argument("--device", type=str, default="auto")
parser.add_argument("--safe_serialization", action="store_true")
return parser.parse_args()
####
def main():
args = get_args()
if args.device == 'auto':
device_arg = { 'device_map': 'auto' }
else:
device_arg = { 'device_map': { "": args.device} }
print(f"Loading base model: {args.base_model_name_or_path}")
base_model = AutoModelForCausalLM.from_pretrained(
args.base_model_name_or_path,
return_dict=True,
torch_dtype=torch.float16,
trust_remote_code=True,
**device_arg
)
#device = torch.device('cpu')
#base_model.to(device)
print(f"Loading PEFT: {args.peft_model_path}")
model = PeftModel.from_pretrained(base_model, args.peft_model_path)
print("Peft Model : ", model.device)
print(f"Running merge_and_unload")
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path)
model.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization)
tokenizer.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization)
print(f"Model saved to {args.output_dir}")
####
if __name__ == "__main__":
main()
```
However, I'm getting this error:
```
Loading base model: /mnt/efs/data/tammosta/files_t/output_sft_32k
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:04<00:00, 1.40s/it]
Loading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k
Traceback (most recent call last):
File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 51, in <module>
main()
File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 38, in main
model = PeftModel.from_pretrained(base_model, args.peft_model_path)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 352, in from_pretrained
model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 689, in load_adapter
adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 270, in load_peft_weights
adapters_weights = safe_load_file(filename, device=device)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
```
Any idea why I'm getting this error?
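One common cause of `InvalidHeaderDeserialization` is an adapter file that is truncated or is a git-lfs pointer rather than the real weights. A safetensors file begins with an 8-byte little-endian header length followed by a JSON header, so a quick local sanity check on `adapter_model.safetensors` can be sketched as follows (a diagnostic sketch, not part of peft or safetensors):

```python
import json
import struct


def check_safetensors_header(raw: bytes) -> str:
    """Return a rough verdict on whether `raw` starts with a valid safetensors header."""
    if len(raw) < 8:
        return "truncated"
    (n,) = struct.unpack("<Q", raw[:8])  # header length, little-endian u64
    if 8 + n > len(raw):
        # e.g. a git-lfs pointer file, whose first bytes decode to a huge "length"
        return "truncated"
    try:
        json.loads(raw[8 : 8 + n].decode("utf-8"))
    except (ValueError, UnicodeDecodeError):
        return "corrupt-header"
    return "ok"
```

Running this on the adapter file (e.g. `check_safetensors_header(open(path, "rb").read())`) distinguishes a corrupted save from a file that was never fully written, which narrows down whether the DPO step saved the adapter correctly.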
### Expected behavior
The merged model is saved successfully to the output directory. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28742/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28741/comments | https://api.github.com/repos/huggingface/transformers/issues/28741/events | https://github.com/huggingface/transformers/pull/28741 | 2,103,022,287 | PR_kwDOCUB6oc5lM8q8 | 28,741 | Fix input data file extension in examples | {
"login": "khipp",
"id": 9824526,
"node_id": "MDQ6VXNlcjk4MjQ1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9824526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khipp",
"html_url": "https://github.com/khipp",
"followers_url": "https://api.github.com/users/khipp/followers",
"following_url": "https://api.github.com/users/khipp/following{/other_user}",
"gists_url": "https://api.github.com/users/khipp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khipp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khipp/subscriptions",
"organizations_url": "https://api.github.com/users/khipp/orgs",
"repos_url": "https://api.github.com/users/khipp/repos",
"events_url": "https://api.github.com/users/khipp/events{/privacy}",
"received_events_url": "https://api.github.com/users/khipp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-26T22:51:48 | 2024-01-29T10:06:31 | 2024-01-29T10:06:31 | CONTRIBUTOR | null | Ensure that the input data file extension is set correctly when running example scripts without specifying a training data file. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28741/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28741",
"html_url": "https://github.com/huggingface/transformers/pull/28741",
"diff_url": "https://github.com/huggingface/transformers/pull/28741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28741.patch",
"merged_at": "2024-01-29T10:06:31"
} |
https://api.github.com/repos/huggingface/transformers/issues/28740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28740/comments | https://api.github.com/repos/huggingface/transformers/issues/28740/events | https://github.com/huggingface/transformers/issues/28740 | 2,103,016,558 | I_kwDOCUB6oc59WXxu | 28,740 | DETR: IndexError: Caught IndexError in replica 0 on device 0. IndexError: index 8 is out of bounds for dimension 0 with size 8 | {
"login": "michaelgruner",
"id": 717880,
"node_id": "MDQ6VXNlcjcxNzg4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/717880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelgruner",
"html_url": "https://github.com/michaelgruner",
"followers_url": "https://api.github.com/users/michaelgruner/followers",
"following_url": "https://api.github.com/users/michaelgruner/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelgruner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelgruner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelgruner/subscriptions",
"organizations_url": "https://api.github.com/users/michaelgruner/orgs",
"repos_url": "https://api.github.com/users/michaelgruner/repos",
"events_url": "https://api.github.com/users/michaelgruner/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelgruner/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "htt... | null | 0 | 2024-01-26T22:44:31 | 2024-01-28T15:52:06 | null | NONE | null | ### System Info
- `transformers` version: 4.37.1
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.37
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Not explicitly, but Trainer is picking up 2 GPUs
### Who can help?
@amyeroberts Hi. I'm getting the error in the title while trying to reproduce [this example](https://huggingface.co/docs/transformers/tasks/object_detection). The error is real. I don't know what caused it, but I've narrowed the cause down to DETR receiving `BatchSize x NumGPUs` targets while expecting only `BatchSize`, which causes the index overflow. If I limit the number of GPUs to 1 (via `CUDA_VISIBLE_DEVICES=0`, for example), it runs OK.
Here's the stack trace:
```
Traceback (most recent call last):
File "/home/mgruner/cellphones-in-the-wild/./train.py", line 116, in <module>
trainer.train()
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 2768, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 2791, in compute_loss
outputs = model(**inputs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 185, in forward
outputs = self.parallel_apply(replicas, inputs, module_kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 200, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 110, in parallel_apply
output.reraise()
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/_utils.py", line 694, in reraise
raise exception
IndexError: Caught IndexError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in _worker
output = module(*input, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 1603, in forward
loss_dict = criterion(outputs_loss, labels)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2202, in forward
indices = self.matcher(outputs_without_aux, targets)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2330, in forward
indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))]
File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2330, in <listcomp>
indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))]
IndexError: index 8 is out of bounds for dimension 0 with size 8
```
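The scatter mismatch described above can be illustrated with a toy sketch (this is my reading of the report, not a verified diagnosis of `DataParallel`; all names are made up): tensors get split per replica, but if the full list of targets reaches every replica, the matcher enumerates more targets than the replica's batch dimension holds.

```python
# Toy illustration (hypothetical names): a scatter that splits the image
# tensor across replicas but hands every replica the FULL target list.
def scatter_tensors(batch, num_replicas):
    per_replica = len(batch) // num_replicas
    return [batch[i * per_replica:(i + 1) * per_replica]
            for i in range(num_replicas)]

pixel_batch = list(range(16))                 # stand-in for 16 images
targets = [{"boxes": i} for i in range(16)]   # one target dict per image

replica_inputs = scatter_tensors(pixel_batch, 2)
for inputs in replica_inputs:
    # Each replica sees 8 images but all 16 targets, so indexing the cost
    # matrix at target index 8 overflows a batch dimension of size 8.
    assert len(inputs) == 8 and len(targets) == 16
```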
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Follow this tutorial: https://huggingface.co/docs/transformers/tasks/object_detection
### Expected behavior
I expect the model to train. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28740/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28739/comments | https://api.github.com/repos/huggingface/transformers/issues/28739/events | https://github.com/huggingface/transformers/pull/28739 | 2,102,816,029 | PR_kwDOCUB6oc5lMPVQ | 28,739 | [docs] Backbone | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-26T19:59:26 | 2024-01-31T19:27:49 | null | MEMBER | null | This PR adds some updates to the backbone docs:
- have API references for `AutoBackbone`, `BackboneConfig`, `BackboneConfigMixin`, `TimmBackbone`, and `TimmBackboneConfig` in the docs so users can easily check them out
- include a list of supported backbones
- break up and move the content into `autoclass_tutorial.md` and `create_a_model.md`
- move initializing a backbone from its config to `create_a_model.md`, which is more similar to the other examples we have there for creating something from its config
- update with loading a backbone by setting for example `config = MaskFormerConfig(backbone="microsoft/resnet50", use_pretrained_backbone=False)` - from #28214 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28739/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28739",
"html_url": "https://github.com/huggingface/transformers/pull/28739",
"diff_url": "https://github.com/huggingface/transformers/pull/28739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28739.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28738/comments | https://api.github.com/repos/huggingface/transformers/issues/28738/events | https://github.com/huggingface/transformers/issues/28738 | 2,102,804,438 | I_kwDOCUB6oc59Vj_W | 28,738 | Any plans to support KV Cache offloading to CPU (and NVMe)? | {
"login": "goelayu",
"id": 31916840,
"node_id": "MDQ6VXNlcjMxOTE2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/31916840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goelayu",
"html_url": "https://github.com/goelayu",
"followers_url": "https://api.github.com/users/goelayu/followers",
"following_url": "https://api.github.com/users/goelayu/following{/other_user}",
"gists_url": "https://api.github.com/users/goelayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goelayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goelayu/subscriptions",
"organizations_url": "https://api.github.com/users/goelayu/orgs",
"repos_url": "https://api.github.com/users/goelayu/repos",
"events_url": "https://api.github.com/users/goelayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/goelayu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 1 | 2024-01-26T19:50:08 | 2024-01-26T20:22:42 | null | NONE | null | ### Feature request
Similar to how model parameter and optimizer offload is supported using the [deepspeed library](https://github.com/huggingface/transformers/blob/de13a951b38b85195984164819f1ab05fe508677/docs/source/en/perf_train_gpu_one.md#deepspeed-zero), are there plans for natively supporting KV cache offloading as well?
### Motivation
Apart from helping accommodate larger batch sizes on a single GPU, this can also significantly improve overall throughput, especially when batch sizes grow very large (resulting in a linear increase in KV cache size).
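The pattern being requested can be illustrated with a toy two-tier cache, with plain dicts standing in for GPU and CPU/NVMe memory (all names here are illustrative, not the DeepSpeed or transformers API):

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier cache: a small "fast" tier (stand-in for GPU memory)
    that evicts least-recently-used entries to a "slow" tier (CPU/NVMe)."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # layer -> (key_block, value_block)
        self.slow = {}

    def _evict(self):
        # Offload least-recently-used entries until the fast tier fits.
        while len(self.fast) > self.fast_capacity:
            layer, block = self.fast.popitem(last=False)
            self.slow[layer] = block

    def put(self, layer, kv_block):
        self.fast[layer] = kv_block
        self.fast.move_to_end(layer)
        self._evict()

    def get(self, layer):
        if layer not in self.fast:          # fetch back on demand
            self.fast[layer] = self.slow.pop(layer)
        self.fast.move_to_end(layer)
        self._evict()
        return self.fast[layer]

cache = TieredKVCache(fast_capacity=2)
for layer in range(4):
    cache.put(layer, ([layer], [layer]))

assert set(cache.fast) == {2, 3} and set(cache.slow) == {0, 1}
assert cache.get(0) == ([0], [0])           # pulled back from the slow tier
assert 0 in cache.fast and 0 not in cache.slow
```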
### Your contribution
I see there already exists an implementation of this: https://github.com/tjruwase/transformers/tree/kvcache-offload-cpu, so maybe this is simply about incorporating those changes in the main repo? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28738/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28737/comments | https://api.github.com/repos/huggingface/transformers/issues/28737/events | https://github.com/huggingface/transformers/pull/28737 | 2,102,788,555 | PR_kwDOCUB6oc5lMJSq | 28,737 | [`Siglip`] protect from imports if sentencepiece not installed | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-26T19:37:19 | 2024-01-28T15:10:19 | 2024-01-28T15:10:14 | COLLABORATOR | null | # What does this PR do?
Complement to #28636.
Mistake on my part - when testing top-level imports with `from transformers import *`, the environment was running on a different Python / site-packages than originally thought (sentencepiece was installed), covering up these requirements.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28737/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28737",
"html_url": "https://github.com/huggingface/transformers/pull/28737",
"diff_url": "https://github.com/huggingface/transformers/pull/28737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28737.patch",
"merged_at": "2024-01-28T15:10:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28736/comments | https://api.github.com/repos/huggingface/transformers/issues/28736/events | https://github.com/huggingface/transformers/pull/28736 | 2,102,641,052 | PR_kwDOCUB6oc5lLpCl | 28,736 | Use the old style of cache-management when using DS-Inference | {
"login": "RezaYazdaniAminabadi",
"id": 44502768,
"node_id": "MDQ6VXNlcjQ0NTAyNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/44502768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RezaYazdaniAminabadi",
"html_url": "https://github.com/RezaYazdaniAminabadi",
"followers_url": "https://api.github.com/users/RezaYazdaniAminabadi/followers",
"following_url": "https://api.github.com/users/RezaYazdaniAminabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/RezaYazdaniAminabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RezaYazdaniAminabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RezaYazdaniAminabadi/subscriptions",
"organizations_url": "https://api.github.com/users/RezaYazdaniAminabadi/orgs",
"repos_url": "https://api.github.com/users/RezaYazdaniAminabadi/repos",
"events_url": "https://api.github.com/users/RezaYazdaniAminabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/RezaYazdaniAminabadi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-26T17:55:25 | 2024-01-26T17:55:25 | null | CONTRIBUTOR | null | This PR intends to revert back the cache-management to the old style when optimizing HF models with DeepSpeed-Inference.
I just added some changes to make it work with Llama.
I will add a test to show the usage of this and why this is needed for the DS-Inference to work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28736/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28736",
"html_url": "https://github.com/huggingface/transformers/pull/28736",
"diff_url": "https://github.com/huggingface/transformers/pull/28736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28736.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28735/comments | https://api.github.com/repos/huggingface/transformers/issues/28735/events | https://github.com/huggingface/transformers/pull/28735 | 2,102,606,842 | PR_kwDOCUB6oc5lLhh5 | 28,735 | [Flax] Update no init test for Flax v0.7.1 | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-26T17:38:24 | 2024-01-26T18:20:45 | 2024-01-26T18:20:40 | CONTRIBUTOR | null | # What does this PR do?
PR to unblock new model additions in Flax. As of version 0.7.1, Flax defaults to returning regular dictionaries from the methods `.init` and `.apply`, not frozen dictionaries as was the case before: https://github.com/google/flax/discussions/3191.
This means our "no automatic init" method returns regular dicts instead of frozen dicts. Until we merge a bigger change to bring ourselves in line with Flax (_cf._ https://github.com/huggingface/transformers/issues/28368#issue-2068557686), we need to update our "no automatic init" test to account for both possible dict types, otherwise we'll have a red CI.
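One way to make a test tolerant of both return types is to normalize recursively before comparing; a minimal sketch with a stand-in for `FrozenDict` (illustrative only, not the actual test code):

```python
from collections.abc import Mapping

class FrozenDictStandIn(Mapping):
    """Stand-in for flax.core.FrozenDict (illustrative only)."""
    def __init__(self, data):
        self._data = dict(data)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

def as_plain_dict(params):
    # Normalize any nested mapping (frozen or not) to plain dicts so the
    # comparison works regardless of the Flax version in use.
    if isinstance(params, Mapping):
        return {k: as_plain_dict(v) for k, v in params.items()}
    return params

frozen = FrozenDictStandIn({"dense": FrozenDictStandIn({"kernel": [1.0]})})
plain = {"dense": {"kernel": [1.0]}}
assert as_plain_dict(frozen) == as_plain_dict(plain) == plain
```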
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28735/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28735",
"html_url": "https://github.com/huggingface/transformers/pull/28735",
"diff_url": "https://github.com/huggingface/transformers/pull/28735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28735.patch",
"merged_at": "2024-01-26T18:20:40"
} |
https://api.github.com/repos/huggingface/transformers/issues/28734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28734/comments | https://api.github.com/repos/huggingface/transformers/issues/28734/events | https://github.com/huggingface/transformers/pull/28734 | 2,102,554,379 | PR_kwDOCUB6oc5lLWGc | 28,734 | Wrap Keras methods to support BatchEncoding | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-26T17:05:14 | 2024-01-31T13:18:43 | 2024-01-31T13:18:42 | MEMBER | null | One last Keras PR before I go back to chat templates - a recurring annoyance that I (and the forum users) have always had with our Keras models is that our tokenizers output `BatchEncoding` by default, which behaves like a mixed dict/list. Keras doesn't understand this at all and fails to handle it when passed to `fit()` or `predict()`. The result is that you have to manually remember to convert tokenizer outputs to a dict or you get a confusing error.
The right time to do this was about two and a half years ago, but late is better than never! This PR wraps the Keras methods to do that transparently, without changing other behaviour. Note that because we're changing the exact Keras we're importing, this PR shouldn't be merged before the `tf_keras` PR is in. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28734/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28734",
"html_url": "https://github.com/huggingface/transformers/pull/28734",
"diff_url": "https://github.com/huggingface/transformers/pull/28734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28734.patch",
"merged_at": "2024-01-31T13:18:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/28733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28733/comments | https://api.github.com/repos/huggingface/transformers/issues/28733/events | https://github.com/huggingface/transformers/pull/28733 | 2,102,544,063 | PR_kwDOCUB6oc5lLT4w | 28,733 | Fix `DepthEstimationPipeline`'s docstring | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-26T16:57:54 | 2024-01-29T09:42:56 | 2024-01-29T09:42:56 | COLLABORATOR | null | # What does this PR do?
Fix #28729 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28733/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28733",
"html_url": "https://github.com/huggingface/transformers/pull/28733",
"diff_url": "https://github.com/huggingface/transformers/pull/28733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28733.patch",
"merged_at": "2024-01-29T09:42:56"
} |
https://api.github.com/repos/huggingface/transformers/issues/28732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28732/comments | https://api.github.com/repos/huggingface/transformers/issues/28732/events | https://github.com/huggingface/transformers/issues/28732 | 2,102,510,335 | I_kwDOCUB6oc59UcL_ | 28,732 | Output logits differ for the same input text in a batch of size 1 with half precision on GPU | {
"login": "zhukpm",
"id": 52332744,
"node_id": "MDQ6VXNlcjUyMzMyNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/52332744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhukpm",
"html_url": "https://github.com/zhukpm",
"followers_url": "https://api.github.com/users/zhukpm/followers",
"following_url": "https://api.github.com/users/zhukpm/following{/other_user}",
"gists_url": "https://api.github.com/users/zhukpm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhukpm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhukpm/subscriptions",
"organizations_url": "https://api.github.com/users/zhukpm/orgs",
"repos_url": "https://api.github.com/users/zhukpm/repos",
"events_url": "https://api.github.com/users/zhukpm/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhukpm/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-26T16:34:34 | 2024-01-29T10:49:03 | null | NONE | null | ### System Info
Linux 20.04.1-Ubuntu x86_64 GNU/Linux
Python 3.10.12
transformers==4.37.1
torch==2.1.2+cu121
GPU A100
NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
We run inference with a CausalLM model, providing the same text, but in different batches. One of the batches is of size `1`, and the other of size `> 1`. Output logits differ *slightly* for the same input sequence.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
# MODEL_ID = 'mistralai/Mistral-7B-Instruct-v0.2'
MODEL_ID = 'facebook/opt-350m'
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.bfloat16,
device_map='auto',
return_dict=True
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
batches = [
['hello, world'],
['hello, world', 'hello', 'world']
]
tokenized = [tokenizer(b, padding='longest', return_tensors='pt').to(model.device) for b in batches]
assert (tokenized[0]['input_ids'][0] == tokenized[1]['input_ids'][0]).all().item()
set_seed(0)
with torch.inference_mode():
logits = [model(**t).logits for t in tokenized]
assert torch.allclose(logits[0][0], logits[1][0], atol=1e-3)
```
### Expected behavior
Output logits should be the same (or at least very close to each other) regardless of the batch size.
Note that we observe this problem only with `torch.float16` and `torch.bfloat16` on GPUs.
The code above works without errors
- on CPUs
- when using `float32`
- when comparing batches of sizes e.g. 2 and 3:
```python
batches = [
['hello, world', 'hello'],
['hello, world', 'hello', 'world']
]
```
So for some reason the problem occurs for half precision and `batch_size=1` only.
I think that [this thread](https://discuss.huggingface.co/t/results-of-model-generate-are-different-for-different-batch-sizes-of-the-decode-only-model/34878) might be related somehow, but I'm not sure. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28732/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28731/comments | https://api.github.com/repos/huggingface/transformers/issues/28731/events | https://github.com/huggingface/transformers/issues/28731 | 2,102,480,997 | I_kwDOCUB6oc59UVBl | 28,731 | torch.bfloat16 inference failed with RuntimeError: cutlassF: no kernel found to launch! | {
"login": "VINUK0",
"id": 58259367,
"node_id": "MDQ6VXNlcjU4MjU5MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/58259367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VINUK0",
"html_url": "https://github.com/VINUK0",
"followers_url": "https://api.github.com/users/VINUK0/followers",
"following_url": "https://api.github.com/users/VINUK0/following{/other_user}",
"gists_url": "https://api.github.com/users/VINUK0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VINUK0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VINUK0/subscriptions",
"organizations_url": "https://api.github.com/users/VINUK0/orgs",
"repos_url": "https://api.github.com/users/VINUK0/repos",
"events_url": "https://api.github.com/users/VINUK0/events{/privacy}",
"received_events_url": "https://api.github.com/users/VINUK0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-26T16:16:10 | 2024-01-31T09:14:09 | 2024-01-31T09:14:09 | NONE | null | ### System Info
***[Environment Information]***
*GPU :* `Nvidia T4 (15GB)`
*Python Version:* `3.10.12`
*Pytorch Version :* `2.1.1 (CUDA 12.1 | 11.8) [Both same results.]`
*Transformers Version :* `4.37.1`
*Accelerate Version :* `0.26.1`
*Optimum Version:* `1.16.2`
*Bitsandbytes:* `0.42.0`
***[Task Information]***
*Model Type:* `Text-Generation`
*Model Architecture:* `llama`
*Attention Implementation:* `sdpa "flash attention 1"`
*Model Load In Float16:* `True`
*Model Load In Bfloat16:* `True`
*Model Generate With Float16:* `True`
*Model Generate With Bfloat16:* `False (RuntimeError: cutlassF: no kernel found to launch!)`
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
***[Replicate Requirements]***
*Load a pre-trained model with `attn_implementation="sdpa", torch_dtype=torch.bfloat16` and generate a sequence of tokens. It will show the error.*
### Expected behavior
*(RuntimeError: cutlassF: no kernel found to launch)* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28731/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28731/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28730/comments | https://api.github.com/repos/huggingface/transformers/issues/28730/events | https://github.com/huggingface/transformers/issues/28730 | 2,102,472,373 | I_kwDOCUB6oc59US61 | 28,730 | Freely Long-Thinking Transformer (FraiLT) | {
"login": "akbayt",
"id": 11097700,
"node_id": "MDQ6VXNlcjExMDk3NzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/11097700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akbayt",
"html_url": "https://github.com/akbayt",
"followers_url": "https://api.github.com/users/akbayt/followers",
"following_url": "https://api.github.com/users/akbayt/following{/other_user}",
"gists_url": "https://api.github.com/users/akbayt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akbayt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akbayt/subscriptions",
"organizations_url": "https://api.github.com/users/akbayt/orgs",
"repos_url": "https://api.github.com/users/akbayt/repos",
"events_url": "https://api.github.com/users/akbayt/events{/privacy}",
"received_events_url": "https://api.github.com/users/akbayt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 3 | 2024-01-26T16:10:18 | 2024-01-29T10:35:34 | null | NONE | null | ### Model description
Hi!
I am the author of the following study:
https://arxiv.org/abs/2401.11626
I want to add this model to 🤗 transformers.
Implementation is in progress... 👨💻
*Abstract:*
Freely Long-Thinking Transformer (FraiLT) is an improved transformer model designed to enhance processing capabilities without scaling up size. It utilizes a recursive approach, iterating over a subset of layers multiple times, and introduces iteration encodings to maintain awareness across these cycles. Iteration encoding allows FraiLT to achieve the interpretive depth of larger models in a compact form. When evaluated on a synthetic story dataset, FraiLT outperformed larger models, showcasing its ability to deliver high-quality performance while reducing memory demands. This model represents a step forward towards more efficient and accessible language models.
### Open source status
- [X] The model explained in the paper
### Provide useful links for the implementation
Paper: https://arxiv.org/abs/2401.11626, https://www.academia.edu/113629981 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28730/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28729/comments | https://api.github.com/repos/huggingface/transformers/issues/28729/events | https://github.com/huggingface/transformers/issues/28729 | 2,102,351,194 | I_kwDOCUB6oc59T1Va | 28,729 | Depth Estimation Pipeline docstrings are wrong | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.githu... | null | 1 | 2024-01-26T14:54:44 | 2024-01-29T09:42:57 | 2024-01-29T09:42:57 | MEMBER | null | The docstring seems to be pasted from the image classification pipeline: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/depth_estimation.py#L54-L83. The correct output, as per https://huggingface.co/docs/transformers/main/tasks/monocular_depth_estimation, should be a dictionary with an image and a tensor
cc @amyeroberts @ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28729/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28728/comments | https://api.github.com/repos/huggingface/transformers/issues/28728/events | https://github.com/huggingface/transformers/pull/28728 | 2,102,214,135 | PR_kwDOCUB6oc5lKMGs | 28,728 | Unpin pydantic | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-26T13:26:58 | 2024-01-26T16:39:34 | 2024-01-26T16:39:33 | COLLABORATOR | null | # What does this PR do?
Unpin pydantic as no failure anymore (CircleCI, docker image build).
Fix #27933 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28728/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28728/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28728",
"html_url": "https://github.com/huggingface/transformers/pull/28728",
"diff_url": "https://github.com/huggingface/transformers/pull/28728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28728.patch",
"merged_at": "2024-01-26T16:39:33"
} |
https://api.github.com/repos/huggingface/transformers/issues/28727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28727/comments | https://api.github.com/repos/huggingface/transformers/issues/28727/events | https://github.com/huggingface/transformers/pull/28727 | 2,102,090,556 | PR_kwDOCUB6oc5lJxWz | 28,727 | Fix typo of `Block`. | {
"login": "xkszltl",
"id": 5203025,
"node_id": "MDQ6VXNlcjUyMDMwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xkszltl",
"html_url": "https://github.com/xkszltl",
"followers_url": "https://api.github.com/users/xkszltl/followers",
"following_url": "https://api.github.com/users/xkszltl/following{/other_user}",
"gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions",
"organizations_url": "https://api.github.com/users/xkszltl/orgs",
"repos_url": "https://api.github.com/users/xkszltl/repos",
"events_url": "https://api.github.com/users/xkszltl/events{/privacy}",
"received_events_url": "https://api.github.com/users/xkszltl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2024-01-26T11:50:00 | 2024-01-30T01:44:07 | 2024-01-29T15:25:00 | CONTRIBUTOR | null | Models:
- text models: @ArthurZucker and @younesbelkada
Was introduced in:
- https://github.com/huggingface/transformers/pull/27942 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28727/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28727",
"html_url": "https://github.com/huggingface/transformers/pull/28727",
"diff_url": "https://github.com/huggingface/transformers/pull/28727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28727.patch",
"merged_at": "2024-01-29T15:25:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/28726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28726/comments | https://api.github.com/repos/huggingface/transformers/issues/28726/events | https://github.com/huggingface/transformers/issues/28726 | 2,102,067,406 | I_kwDOCUB6oc59SwDO | 28,726 | Correct way for Wav2vec2 feature extraction from huggingface like Fairseq | {
"login": "hungdinhxuan",
"id": 79694464,
"node_id": "MDQ6VXNlcjc5Njk0NDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/79694464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hungdinhxuan",
"html_url": "https://github.com/hungdinhxuan",
"followers_url": "https://api.github.com/users/hungdinhxuan/followers",
"following_url": "https://api.github.com/users/hungdinhxuan/following{/other_user}",
"gists_url": "https://api.github.com/users/hungdinhxuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hungdinhxuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hungdinhxuan/subscriptions",
"organizations_url": "https://api.github.com/users/hungdinhxuan/orgs",
"repos_url": "https://api.github.com/users/hungdinhxuan/repos",
"events_url": "https://api.github.com/users/hungdinhxuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/hungdinhxuan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-26T11:30:06 | 2024-01-26T11:44:47 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NVIDIA RTX 4090
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import fairseq
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2Config
import torchaudio
model_file = 'wav2vec_small.pt'
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([model_file])
original = model[0]
waveform, _ = torchaudio.load('MMSTTS_ara_000008.wav')
reference = original(waveform, mask=False, features_only=True)['x']
model_name = 'facebook/wav2vec2-base'
config = Wav2Vec2Config.from_pretrained(model_name)
w2vhf = Wav2Vec2ForPreTraining.from_pretrained(model_name, config=config)
res = w2vhf(waveform, attention_mask=None, output_hidden_states = True).hidden_states[-1]
torch.testing.assert_close(res, reference)
```
Error:
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 155126 / 155136 (100.0%)
Greatest absolute difference: 5.448995590209961 at index (0, 140, 96) (up to 1e-05 allowed)
Greatest relative difference: 89616.765625 at index (0, 6, 144) (up to 1.3e-06 allowed)
```
### Expected behavior
I am using the pre-trained wav2vec-base model downloaded from the fairseq GitHub. I expected the pre-trained model provided by huggingface to be the same as the fairseq one, meaning the feature extractions from fairseq and huggingface should be as close as possible. What am I doing wrong, or is my feature extraction using Wav2Vec2ForPreTraining incorrect? Is the pre-trained model provided by huggingface the same as the pre-trained model from Fairseq? Are there any differences between their model architectures? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28726/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28725/comments | https://api.github.com/repos/huggingface/transformers/issues/28725/events | https://github.com/huggingface/transformers/pull/28725 | 2,102,030,424 | PR_kwDOCUB6oc5lJkLz | 28,725 | Fix `weights_only` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-26T11:03:21 | 2024-01-26T12:00:50 | 2024-01-26T12:00:49 | COLLABORATOR | null | # What does this PR do?
The changes in #28506 are incorrect, and after them we still have an issue with torch < 1.13. This PR fixes the issue correctly.
Fix #28720 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28725/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28725",
"html_url": "https://github.com/huggingface/transformers/pull/28725",
"diff_url": "https://github.com/huggingface/transformers/pull/28725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28725.patch",
"merged_at": "2024-01-26T12:00:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28724/comments | https://api.github.com/repos/huggingface/transformers/issues/28724/events | https://github.com/huggingface/transformers/pull/28724 | 2,101,948,592 | PR_kwDOCUB6oc5lJTCc | 28,724 | Fix symbolic_trace with kv cache | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-26T10:15:47 | 2024-02-01T02:58:55 | null | COLLABORATOR | null | We should NOT trace models with 0-shaped concrete metas as we otherwise miss https://github.com/huggingface/transformers/blob/bb6aa8bc5ff8537f58c4b6ac80611101ba556226/src/transformers/modeling_attn_mask_utils.py#L162-L163 in the captured graph. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28724/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28724/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28724",
"html_url": "https://github.com/huggingface/transformers/pull/28724",
"diff_url": "https://github.com/huggingface/transformers/pull/28724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28724.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28723/comments | https://api.github.com/repos/huggingface/transformers/issues/28723/events | https://github.com/huggingface/transformers/issues/28723 | 2,101,847,913 | I_kwDOCUB6oc59R6dp | 28,723 | `UserWarning: TypedStorage is deprecated.` on loading `pytorch_model.bin` files from disk. | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-26T09:08:16 | 2024-01-26T09:33:03 | null | MEMBER | null | ### System Info
- `transformers` version: 4.37.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-uncased", use_safetensors=False)
```
This resulted in:
```
C:\Users\tom\.conda\envs\transformers\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
```
See also https://github.com/UKPLab/sentence-transformers/issues/2450
Notably, I do not get this warning at transformers v4.36.2.
### Expected behavior
I don't expect any warnings from loading a model with `pytorch_model.bin`.
- Tom Aarsen | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28723/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28722/comments | https://api.github.com/repos/huggingface/transformers/issues/28722/events | https://github.com/huggingface/transformers/issues/28722 | 2,101,830,611 | I_kwDOCUB6oc59R2PT | 28,722 | AWQ models including activation as previous operation seems broken | {
"login": "kevin3314",
"id": 37268015,
"node_id": "MDQ6VXNlcjM3MjY4MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/37268015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin3314",
"html_url": "https://github.com/kevin3314",
"followers_url": "https://api.github.com/users/kevin3314/followers",
"following_url": "https://api.github.com/users/kevin3314/following{/other_user}",
"gists_url": "https://api.github.com/users/kevin3314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevin3314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevin3314/subscriptions",
"organizations_url": "https://api.github.com/users/kevin3314/orgs",
"repos_url": "https://api.github.com/users/kevin3314/repos",
"events_url": "https://api.github.com/users/kevin3314/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevin3314/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-26T08:55:42 | 2024-01-26T08:58:20 | null | NONE | null | ### System Info
transformer==4.37.0
autoawq==0.16.0
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
>>> from transformers import AutoModelForCausalLM
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (2.1.0) or chardet (5.2.0) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
>>> model = AutoModelForCausalLM.from_pretrained("casperhansen/falcon-7b-awq", trust_remote_code=True)
You have loaded an AWQ model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model.
/usr/local/lib/python3.10/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Some weights of the model checkpoint at casperhansen/falcon-7b-awq were not used when initializing RWForCausalLM: ['transformer.h.0.mlp.act.scales', 'transformer.h.1.mlp.act.scales', 'transformer.h.10.mlp.act.scales', 'transformer.h.11.mlp.act.scales', 'transformer.h.12.mlp.act.scales', 'transformer.h.13.mlp.act.scales', 'transformer.h.14.mlp.act.scales', 'transformer.h.15.mlp.act.scales', 'transformer.h.16.mlp.act.scales', 'transformer.h.17.mlp.act.scales', 'transformer.h.18.mlp.act.scales', 'transformer.h.19.mlp.act.scales', 'transformer.h.2.mlp.act.scales', 'transformer.h.20.mlp.act.scales', 'transformer.h.21.mlp.act.scales', 'transformer.h.22.mlp.act.scales', 'transformer.h.23.mlp.act.scales', 'transformer.h.24.mlp.act.scales', 'transformer.h.25.mlp.act.scales', 'transformer.h.26.mlp.act.scales', 'transformer.h.27.mlp.act.scales', 'transformer.h.28.mlp.act.scales', 'transformer.h.29.mlp.act.scales', 'transformer.h.3.mlp.act.scales', 'transformer.h.30.mlp.act.scales', 'transformer.h.31.mlp.act.scales', 'transformer.h.4.mlp.act.scales', 'transformer.h.5.mlp.act.scales', 'transformer.h.6.mlp.act.scales', 'transformer.h.7.mlp.act.scales', 'transformer.h.8.mlp.act.scales', 'transformer.h.9.mlp.act.scales']
- This IS expected if you are initializing RWForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RWForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
### Expected behavior
The warning "Some weights of the model checkpoint at casperhansen/falcon-7b-awq were not used when initializing RWForCausalLM: ..." should not appear.
I suspect that the root cause is that only Linear layers are replaced. This does not work if the module to replace is not a Linear layer (e.g. an activation), which is the case for Falcon.
https://github.com/huggingface/transformers/blob/8eb74c1c8961e3dc8549bb1a76463c7658a63d43/src/transformers/integrations/awq.py#L108 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28722/timeline | null | null | null | null |
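The Linear-only replacement walk described in the AWQ report above can be sketched in plain Python. This is an illustrative stand-in, not the actual `replace_with_awq_linear` implementation: the `Module`, `Linear`, and `ScaledActivation` classes below are hypothetical minimal substitutes for their `torch.nn` counterparts, used only to show why non-Linear submodules (such as an activation that owns `act.scales`) are never matched and their checkpoint weights end up unused.

```python
class Module:
    """Minimal stand-in for torch.nn.Module (hypothetical, for illustration)."""
    def __init__(self, **children):
        self.children = children

class Linear(Module):
    """Stand-in for torch.nn.Linear."""

class ScaledActivation(Module):
    """Stand-in for the AWQ activation module that owns `act.scales`."""

def collect_linear_targets(module, prefix="", targets=None):
    # Walk the module tree and record only Linear submodules, loosely
    # mirroring how the AWQ integration picks layers to replace: anything
    # that is not a Linear (e.g. ScaledActivation) is recursed into but
    # never replaced, so its checkpoint weights have no destination.
    if targets is None:
        targets = []
    for name, child in module.children.items():
        path = f"{prefix}.{name}" if prefix else name
        if isinstance(child, Linear):
            targets.append(path)
        else:
            collect_linear_targets(child, path, targets)
    return targets
```

With a toy block such as `Module(mlp=Module(dense=Linear(), act=ScaledActivation()))`, only `mlp.dense` is collected; `act` is skipped, which is consistent with the "weights were not used" warning in the report.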
https://api.github.com/repos/huggingface/transformers/issues/28721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28721/comments | https://api.github.com/repos/huggingface/transformers/issues/28721/events | https://github.com/huggingface/transformers/issues/28721 | 2,101,804,893 | I_kwDOCUB6oc59Rv9d | 28,721 | Load an EncoderDecoderModel as AutoModel | {
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-26T08:36:34 | 2024-01-26T11:42:56 | null | NONE | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.0-91-lowlatency-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Bachstelze/instructionRoberta-base")
model = AutoModel.from_pretrained("Bachstelze/instructionRoberta-base", output_attentions=True)
```
### Expected behavior
Load the EncoderDecoderModel as AutoModel. "BertGenerationConfig" is supported, though this seems outdated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28721/timeline | null | null | null | null |
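The dispatch behind the `AutoModel` failure reported above can be illustrated with a tiny registry. This is a hypothetical, heavily simplified sketch — the real transformers mapping is far larger and keyed by config classes rather than strings — but it shows the failure mode: if a composite config's `model_type` is missing from the mapping, the lookup fails.

```python
# Hypothetical, simplified registry (not the real transformers mapping).
MODEL_MAPPING = {
    "bert": "BertModel",
    "roberta": "RobertaModel",
}

def resolve_auto_model(model_type):
    """Look up the concrete model class name for a config's `model_type`.

    If a composite model type (e.g. an encoder-decoder checkpoint) is
    absent from the registry, the lookup raises -- one plausible shape
    of the failure described in the report above.
    """
    try:
        return MODEL_MAPPING[model_type]
    except KeyError:
        raise ValueError(f"Unrecognized model type {model_type!r} for AutoModel")
```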
https://api.github.com/repos/huggingface/transformers/issues/28720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28720/comments | https://api.github.com/repos/huggingface/transformers/issues/28720/events | https://github.com/huggingface/transformers/issues/28720 | 2,101,730,896 | I_kwDOCUB6oc59Rd5Q | 28,720 | Current version 4.37.1 only match torch>=1.13.0, not torch > 1.11 | {
"login": "StrivedTye",
"id": 19620650,
"node_id": "MDQ6VXNlcjE5NjIwNjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/19620650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StrivedTye",
"html_url": "https://github.com/StrivedTye",
"followers_url": "https://api.github.com/users/StrivedTye/followers",
"following_url": "https://api.github.com/users/StrivedTye/following{/other_user}",
"gists_url": "https://api.github.com/users/StrivedTye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StrivedTye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StrivedTye/subscriptions",
"organizations_url": "https://api.github.com/users/StrivedTye/orgs",
"repos_url": "https://api.github.com/users/StrivedTye/repos",
"events_url": "https://api.github.com/users/StrivedTye/events{/privacy}",
"received_events_url": "https://api.github.com/users/StrivedTye/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-26T07:36:06 | 2024-01-26T12:01:47 | 2024-01-26T12:00:50 | NONE | null | when using torch<1.13.0, the current version (4.37.1) will raise an OSError, because `torch.load()` in torch==1.12 does not have the `weights_only` keyword argument. This issue occurs in `modeling_utils.py`, line 533. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28720/timeline | null | completed | null | null |
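A portable guard for the torch-version problem above can be written with the standard library alone. `supports_kwarg` is a hypothetical helper (not transformers code): it inspects a callable's signature before passing a keyword such as `weights_only`, which `torch.load()` only accepts from torch 1.13 onward.

```python
import inspect

def supports_kwarg(func, name):
    """Return True if `func` accepts a keyword argument called `name`."""
    try:
        params = inspect.signature(func).parameters
    except (TypeError, ValueError):
        return False  # some C-level callables expose no signature
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )

# Hedged usage sketch (torch deliberately not imported here):
#   kwargs = {"weights_only": True} if supports_kwarg(torch.load, "weights_only") else {}
#   state_dict = torch.load(checkpoint_path, **kwargs)
```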
https://api.github.com/repos/huggingface/transformers/issues/28719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28719/comments | https://api.github.com/repos/huggingface/transformers/issues/28719/events | https://github.com/huggingface/transformers/pull/28719 | 2,101,656,110 | PR_kwDOCUB6oc5lIUOM | 28,719 | [`docs`] Update preprocessing.md | {
"login": "velaia",
"id": 1515904,
"node_id": "MDQ6VXNlcjE1MTU5MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1515904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/velaia",
"html_url": "https://github.com/velaia",
"followers_url": "https://api.github.com/users/velaia/followers",
"following_url": "https://api.github.com/users/velaia/following{/other_user}",
"gists_url": "https://api.github.com/users/velaia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/velaia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/velaia/subscriptions",
"organizations_url": "https://api.github.com/users/velaia/orgs",
"repos_url": "https://api.github.com/users/velaia/repos",
"events_url": "https://api.github.com/users/velaia/events{/privacy}",
"received_events_url": "https://api.github.com/users/velaia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-26T06:19:33 | 2024-01-26T11:58:58 | 2024-01-26T11:58:57 | CONTRIBUTOR | null | adjust ImageProcessor link to working target (same as in lower section of file)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28719/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28719",
"html_url": "https://github.com/huggingface/transformers/pull/28719",
"diff_url": "https://github.com/huggingface/transformers/pull/28719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28719.patch",
"merged_at": "2024-01-26T11:58:57"
} |
https://api.github.com/repos/huggingface/transformers/issues/28718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28718/comments | https://api.github.com/repos/huggingface/transformers/issues/28718/events | https://github.com/huggingface/transformers/pull/28718 | 2,101,633,403 | PR_kwDOCUB6oc5lIPa8 | 28,718 | Update preprocessing.md | {
"login": "velaia",
"id": 1515904,
"node_id": "MDQ6VXNlcjE1MTU5MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1515904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/velaia",
"html_url": "https://github.com/velaia",
"followers_url": "https://api.github.com/users/velaia/followers",
"following_url": "https://api.github.com/users/velaia/following{/other_user}",
"gists_url": "https://api.github.com/users/velaia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/velaia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/velaia/subscriptions",
"organizations_url": "https://api.github.com/users/velaia/orgs",
"repos_url": "https://api.github.com/users/velaia/repos",
"events_url": "https://api.github.com/users/velaia/events{/privacy}",
"received_events_url": "https://api.github.com/users/velaia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-26T05:50:50 | 2024-01-31T03:37:07 | 2024-01-31T03:37:07 | CONTRIBUTOR | null | fixed link to old version of documentation
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28718/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28718",
"html_url": "https://github.com/huggingface/transformers/pull/28718",
"diff_url": "https://github.com/huggingface/transformers/pull/28718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28718.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28717/comments | https://api.github.com/repos/huggingface/transformers/issues/28717/events | https://github.com/huggingface/transformers/pull/28717 | 2,101,603,477 | PR_kwDOCUB6oc5lIJQO | 28,717 | Initialize _tqdm_active with hf_hub_utils.are_progress_bars_disabled(… | {
"login": "ShukantPal",
"id": 22450567,
"node_id": "MDQ6VXNlcjIyNDUwNTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22450567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShukantPal",
"html_url": "https://github.com/ShukantPal",
"followers_url": "https://api.github.com/users/ShukantPal/followers",
"following_url": "https://api.github.com/users/ShukantPal/following{/other_user}",
"gists_url": "https://api.github.com/users/ShukantPal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShukantPal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShukantPal/subscriptions",
"organizations_url": "https://api.github.com/users/ShukantPal/orgs",
"repos_url": "https://api.github.com/users/ShukantPal/repos",
"events_url": "https://api.github.com/users/ShukantPal/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShukantPal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-26T05:05:40 | 2024-01-26T17:05:20 | 2024-01-26T11:59:34 | CONTRIBUTOR | null | # What does this PR do?
…) to respect HF_HUB_DISABLE_PROGRESS_BARS
It seems like enable_progress_bar() and disable_progress_bar() sync up with huggingface_hub, but the initial value is always True. This change will make sure the user's preference is respected implicitly on initialization.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28717/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28717",
"html_url": "https://github.com/huggingface/transformers/pull/28717",
"diff_url": "https://github.com/huggingface/transformers/pull/28717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28717.patch",
"merged_at": "2024-01-26T11:59:34"
} |
https://api.github.com/repos/huggingface/transformers/issues/28716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28716/comments | https://api.github.com/repos/huggingface/transformers/issues/28716/events | https://github.com/huggingface/transformers/issues/28716 | 2,101,597,907 | I_kwDOCUB6oc59Q9bT | 28,716 | PermissionError occurs when calling Trainer.trainer using transformers | {
"login": "Mickls",
"id": 41884581,
"node_id": "MDQ6VXNlcjQxODg0NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/41884581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mickls",
"html_url": "https://github.com/Mickls",
"followers_url": "https://api.github.com/users/Mickls/followers",
"following_url": "https://api.github.com/users/Mickls/following{/other_user}",
"gists_url": "https://api.github.com/users/Mickls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mickls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mickls/subscriptions",
"organizations_url": "https://api.github.com/users/Mickls/orgs",
"repos_url": "https://api.github.com/users/Mickls/repos",
"events_url": "https://api.github.com/users/Mickls/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mickls/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-26T04:57:30 | 2024-01-26T12:07:41 | null | NONE | null | ### System Info
- `transformers` version: 4.37.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The failing code block comes from the official example https://huggingface.co/docs/transformers/tasks/sequence_classification
The following is the specific code
```python
training_args = TrainingArguments(
output_dir="my_awesome_model",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
push_to_hub=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_imdb["train"],
eval_dataset=tokenized_imdb["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
### Expected behavior
At trainer.py line 2418, `fd = os.open(output_dir, os.O_RDONLY)` triggers a PermissionError on Windows when it tries to open a folder, so this piece of code interrupts the train function while it saves the trained model. If possible, I hope this can be made compatible with the Windows platform. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28716/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28715/comments | https://api.github.com/repos/huggingface/transformers/issues/28715/events | https://github.com/huggingface/transformers/pull/28715 | 2,101,332,854 | PR_kwDOCUB6oc5lHXhs | 28,715 | [docs] Fix datasets in guides | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-25T23:29:50 | 2024-01-26T17:29:11 | 2024-01-26T17:29:07 | MEMBER | null | An issue was raised at https://github.com/huggingface/datasets/issues/6605 that the ELI5 dataset is no longer accessible, impacting the causal/masked language modeling guides. This PR replaces it with the [ELI5-Category](https://huggingface.co/datasets/eli5_category) dataset, which should work fine as a drop-in replacement. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28715/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28715",
"html_url": "https://github.com/huggingface/transformers/pull/28715",
"diff_url": "https://github.com/huggingface/transformers/pull/28715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28715.patch",
"merged_at": "2024-01-26T17:29:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28714/comments | https://api.github.com/repos/huggingface/transformers/issues/28714/events | https://github.com/huggingface/transformers/issues/28714 | 2,101,234,552 | I_kwDOCUB6oc59Pkt4 | 28,714 | Models with a SentencePiece tokenizer have problems with special tokens and encode/decode | {
"login": "ekgren",
"id": 1921821,
"node_id": "MDQ6VXNlcjE5MjE4MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1921821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekgren",
"html_url": "https://github.com/ekgren",
"followers_url": "https://api.github.com/users/ekgren/followers",
"following_url": "https://api.github.com/users/ekgren/following{/other_user}",
"gists_url": "https://api.github.com/users/ekgren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekgren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekgren/subscriptions",
"organizations_url": "https://api.github.com/users/ekgren/orgs",
"repos_url": "https://api.github.com/users/ekgren/repos",
"events_url": "https://api.github.com/users/ekgren/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekgren/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-25T21:58:06 | 2024-01-29T12:20:24 | 2024-01-29T12:20:23 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1vujbKaRkIpk7qli7eUKAZQDRksHSRW51?usp=sharing
### Expected behavior
Hugging Face tokenizers backed by SentencePiece have inconsistent encode/decode behaviour. If you encode and then decode a string containing special tokens, extra whitespace is inserted.
Expected behaviour would be to get the exact same string back.
This is present with both the Llama 2 tokenizer and the GPT-SW3 tokenizers, among others. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28714/timeline | null | completed | null | null |
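The property the report above says is violated can be stated as a tiny, library-agnostic check. `check_round_trip` is a hypothetical helper: pass it any tokenizer's encode/decode callables and a list of inputs, and it returns the inputs that do not survive the trip.

```python
def check_round_trip(encode, decode, texts):
    # Return the inputs for which decode(encode(text)) != text -- the
    # lossless round-trip property the issue expects from a tokenizer.
    return [t for t in texts if decode(encode(t)) != t]

# Toy tokenizer that is trivially lossless, used only to exercise the check:
toy_encode = list      # str -> list of characters
toy_decode = "".join   # list of characters -> str
```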
https://api.github.com/repos/huggingface/transformers/issues/28713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28713/comments | https://api.github.com/repos/huggingface/transformers/issues/28713/events | https://github.com/huggingface/transformers/pull/28713 | 2,100,928,500 | PR_kwDOCUB6oc5lGAZA | 28,713 | Add FlashAttention2 for XLM-RoBERTa | {
"login": "DavidAfonsoValente",
"id": 74915610,
"node_id": "MDQ6VXNlcjc0OTE1NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/74915610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidAfonsoValente",
"html_url": "https://github.com/DavidAfonsoValente",
"followers_url": "https://api.github.com/users/DavidAfonsoValente/followers",
"following_url": "https://api.github.com/users/DavidAfonsoValente/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidAfonsoValente/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidAfonsoValente/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidAfonsoValente/subscriptions",
"organizations_url": "https://api.github.com/users/DavidAfonsoValente/orgs",
"repos_url": "https://api.github.com/users/DavidAfonsoValente/repos",
"events_url": "https://api.github.com/users/DavidAfonsoValente/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidAfonsoValente/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-25T18:16:36 | 2024-01-30T16:52:12 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27957
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28713/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28713",
"html_url": "https://github.com/huggingface/transformers/pull/28713",
"diff_url": "https://github.com/huggingface/transformers/pull/28713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28713.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28712/comments | https://api.github.com/repos/huggingface/transformers/issues/28712/events | https://github.com/huggingface/transformers/pull/28712 | 2,100,919,565 | PR_kwDOCUB6oc5lF-gR | 28,712 | Stop confusing the TF compiler with ModelOutput objects | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-25T18:09:44 | 2024-01-26T12:22:31 | 2024-01-26T12:22:30 | MEMBER | null | The `test_saved_model_creation` test was failing for BLIP with the rather unusual symptom that the `loss` key of one of the intermediate outputs had transformed into a strange generator object. I still don't know **why** this happened, but it requires the following:
1) TF compilation (doesn't happen in eager mode)
2) `ModelOutput` dicts with `loss` as the first, optional key
3) The `ModelOutput` dicts have to be returned internally and then used in a subsequent step, rather than returned as the last step of the outermost model
Since this is a nightmare zone, I'm going to work around the issue by just setting `return_dict` to `False` when calling the sub-model and get the tensors we need from the output tuple instead. This should be invisible for our users!
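The `nan * 0 == nan` pitfall behind the loss fix is plain IEEE 754 arithmetic, so it can be demonstrated without TensorFlow. A minimal stdlib-only sketch (the variable names are illustrative, not the BLIP code):

```python
import math

# IEEE 754: NaN propagates through multiplication, so zero-masking a loss
# value *after* a NaN has been produced does not remove the NaN.
loss_from_negative_label = float("nan")  # stand-in for a loss computed on a masked (-100) position
mask = 0.0                               # the position is supposed to be masked out

masked_loss = loss_from_negative_label * mask
print(math.isnan(masked_loss))  # True: the NaN survives the mask

# The safe order of operations is to neutralize the problematic input
# *before* computing the loss, and only then apply the mask.
safe_loss = 0.0 if mask == 0.0 else loss_from_negative_label
print(math.isnan(safe_loss * mask))  # False
```

This is why the fix replaces negative labels before the loss call rather than relying on the mask afterwards.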
I also slipped a quick fix into the loss calculation to avoid potential NaNs from passing negative labels to one of the built-in TF loss functions. Even though all the negative-label positions should be masked, NaNs tend to persist (because `nan * 0 == nan`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28712/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28712",
"html_url": "https://github.com/huggingface/transformers/pull/28712",
"diff_url": "https://github.com/huggingface/transformers/pull/28712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28712.patch",
"merged_at": "2024-01-26T12:22:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28711/comments | https://api.github.com/repos/huggingface/transformers/issues/28711/events | https://github.com/huggingface/transformers/pull/28711 | 2,100,880,202 | PR_kwDOCUB6oc5lF2Au | 28,711 | [WIP] Improve multimodal processors - rely less on kwargs | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-25T17:42:35 | 2024-01-26T18:38:50 | null | CONTRIBUTOR | null | # What does this PR do?
This PR aims at better control over the logic flow through `Processor` classes, in particular those combining an `ImageProcessor` with a `Tokenizer`. Linked with #27768.
Compared to `Nougat` (taken as a reference point), the `ImageProcessor` classes have different signatures in their `preprocess` methods. They can be listed as follows:
```
TvltImageProcessor:
videos, patch_size, crop_size, do_center_crop, is_mixed, num_frames
IdeficsImageProcessor:
transform, image_num_channels, image_size
ViTImageProcessor:
No difference in args
Mask2FormerImageProcessor:
segmentation_maps, ignore_index, size_divisor, reduce_labels, instance_id_to_semantic_id
MaskFormerImageProcessor:
segmentation_maps, ignore_index, size_divisor, do_reduce_labels, instance_id_to_semantic_id
YolosImageProcessor:
format, return_segmentation_masks, annotations, masks_path
MobileNetV1ImageProcessor:
do_center_crop, crop_size
DeiTImageProcessor:
do_center_crop, crop_size
EfficientNetImageProcessor:
include_top, do_center_crop, rescale_offset, crop_size
BeitImageProcessor:
do_reduce_labels, do_center_crop, segmentation_maps, crop_size
MobileViTImageProcessor:
do_flip_channel_order, do_center_crop, segmentation_maps, crop_size
PerceiverImageProcessor:
do_center_crop, crop_size
DeformableDetrImageProcessor:
format, return_segmentation_masks, annotations, masks_path
EfficientFormerImageProcessor:
do_center_crop, crop_size
SegformerImageProcessor:
do_reduce_labels, segmentation_maps
LayoutLMv2ImageProcessor:
apply_ocr, ocr_lang, tesseract_config
BridgeTowerImageProcessor:
do_center_crop, size_divisor
SamImageProcessor:
segmentation_maps, pad_size, do_convert_rgb, mask_pad_size, mask_size
BlipImageProcessor:
do_convert_rgb
Owlv2ImageProcessor:
No difference in args
LayoutLMv3ImageProcessor:
apply_ocr, ocr_lang, tesseract_config
DetaImageProcessor:
format, return_segmentation_masks, annotations, masks_path
BitImageProcessor:
do_center_crop, do_convert_rgb, crop_size
ViTHybridImageProcessor:
do_center_crop, do_convert_rgb, crop_size
FuyuImageProcessor:
patch_size, padding_mode, padding_value
PvtImageProcessor:
No difference in args
Pix2StructImageProcessor:
max_patches, header_text, do_convert_rgb, patch_size
VitMatteImageProcessor:
trimaps, size_divisibility
VideoMAEImageProcessor:
videos, do_center_crop, crop_size
MobileNetV2ImageProcessor:
do_center_crop, crop_size
OneFormerImageProcessor:
segmentation_maps, ignore_index, task_inputs, do_reduce_labels, instance_id_to_semantic_id
FlavaImageProcessor:
crop_size, codebook_crop_size, codebook_rescale_factor, mask_group_max_patches, mask_group_min_patches, mask_group_max_aspect_ratio, codebook_image_mean, codebook_do_resize, return_image_mask, input_size_patches, codebook_do_center_crop, codebook_resample, mask_group_min_aspect_ratio, codebook_do_normalize, codebook_do_map_pixels, return_codebook_pixels, codebook_image_std, do_center_crop, codebook_size, codebook_do_rescale, total_mask_patches
DonutImageProcessor:
random_padding
TvpImageProcessor:
videos, crop_size, constant_values, do_flip_channel_order, do_center_crop, pad_size, pad_mode
GLPNImageProcessor:
size_divisor
PoolFormerImageProcessor:
crop_pct, do_center_crop, crop_size
CLIPImageProcessor:
do_center_crop, do_convert_rgb, crop_size
DPTImageProcessor:
ensure_multiple_of, keep_aspect_ratio, size_divisor
ViltImageProcessor:
size_divisor
Swin2SRImageProcessor:
pad_size
ImageGPTImageProcessor:
clusters, do_color_quantize
SiglipImageProcessor:
No difference in args
VivitImageProcessor:
videos, do_center_crop, offset, crop_size
ConvNextImageProcessor:
crop_pct
OwlViTImageProcessor:
do_center_crop, crop_size
ChineseCLIPImageProcessor:
do_center_crop, do_convert_rgb, crop_size
LevitImageProcessor:
do_center_crop, crop_size
ConditionalDetrImageProcessor:
format, return_segmentation_masks, annotations, masks_path
DetrImageProcessor:
format, return_segmentation_masks, annotations, masks_path
```
This helps standardization in the first place, and will then allow uniformizing the `Processors`.
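A per-class diff like the list above can be produced with `inspect.signature`. Here is a small sketch with toy classes standing in for the real image processors (the class names and parameters are illustrative, not the transformers API):

```python
import inspect

# Toy stand-ins for image processors; the real classes live in transformers.
class ReferenceImageProcessor:  # plays the role of Nougat's processor
    def preprocess(self, images, do_resize=True, size=None, do_rescale=True):
        ...

class ToyCLIPImageProcessor:
    def preprocess(self, images, do_resize=True, size=None, do_rescale=True,
                   do_center_crop=True, crop_size=None, do_convert_rgb=True):
        ...

def extra_preprocess_args(cls, reference=ReferenceImageProcessor):
    """Return the preprocess() parameters that cls has but the reference lacks."""
    ref = set(inspect.signature(reference.preprocess).parameters)
    own = set(inspect.signature(cls.preprocess).parameters)
    return sorted(own - ref)

print(extra_preprocess_args(ToyCLIPImageProcessor))
# ['crop_size', 'do_center_crop', 'do_convert_rgb']
```

Running this over every registered image processor class against the chosen reference yields exactly the kind of report shown above.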
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28711/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28711/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28711",
"html_url": "https://github.com/huggingface/transformers/pull/28711",
"diff_url": "https://github.com/huggingface/transformers/pull/28711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28711.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28710/comments | https://api.github.com/repos/huggingface/transformers/issues/28710/events | https://github.com/huggingface/transformers/pull/28710 | 2,100,865,170 | PR_kwDOCUB6oc5lFyww | 28,710 | Flash Attention 2 for XLM-RoBERTa | {
"login": "DavidAfonsoValente",
"id": 74915610,
"node_id": "MDQ6VXNlcjc0OTE1NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/74915610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidAfonsoValente",
"html_url": "https://github.com/DavidAfonsoValente",
"followers_url": "https://api.github.com/users/DavidAfonsoValente/followers",
"following_url": "https://api.github.com/users/DavidAfonsoValente/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidAfonsoValente/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidAfonsoValente/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidAfonsoValente/subscriptions",
"organizations_url": "https://api.github.com/users/DavidAfonsoValente/orgs",
"repos_url": "https://api.github.com/users/DavidAfonsoValente/repos",
"events_url": "https://api.github.com/users/DavidAfonsoValente/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidAfonsoValente/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-25T17:32:47 | 2024-01-25T17:42:17 | 2024-01-25T17:42:17 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27957
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28710/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28710",
"html_url": "https://github.com/huggingface/transformers/pull/28710",
"diff_url": "https://github.com/huggingface/transformers/pull/28710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28710.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28709/comments | https://api.github.com/repos/huggingface/transformers/issues/28709/events | https://github.com/huggingface/transformers/pull/28709 | 2,100,726,140 | PR_kwDOCUB6oc5lFUmE | 28,709 | Don't fail when `LocalEntryNotFoundError` during `processor_config.json` loading | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-25T16:15:38 | 2024-01-26T08:02:34 | 2024-01-26T08:02:33 | COLLABORATOR | null | # What does this PR do?
Fix #28697. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28709/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28709",
"html_url": "https://github.com/huggingface/transformers/pull/28709",
"diff_url": "https://github.com/huggingface/transformers/pull/28709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28709.patch",
"merged_at": "2024-01-26T08:02:33"
} |