url | text | num_labels | arr_labels | labels |
---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/35429 |
TITLE
`GPT2Attention()` class with `_attn()` method when `is_cross_attention=True`.
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
It seems like the `GPT2Attention()` class only applies the `causal_mask` in its `_attn()` method when `is_cross_attention=False`, but not when `is_cross_attention=True`.
It would be more useful if `GPT2Attention()` supported the `causal_mask` in `_attn()` even when `is_cross_attention=True`.
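For illustration, a minimal sketch of how a causal mask could be applied to cross-attention weights (this is an assumption about a possible change, not the existing `_attn()` implementation; the function and tensor names are hypothetical):
```python
import torch

def apply_causal_mask(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (batch, num_heads, query_len, key_len)
    query_len, key_len = attn_weights.size(-2), attn_weights.size(-1)
    causal_mask = torch.tril(
        torch.ones(query_len, key_len, dtype=torch.bool, device=attn_weights.device),
        diagonal=key_len - query_len,
    )
    mask_value = torch.finfo(attn_weights.dtype).min
    return torch.where(causal_mask, attn_weights, torch.full_like(attn_weights, mask_value))
```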
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35312 |
TITLE
Fixing device mismatch for InternVL2_5-78B rotary embeddings
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Fixing problem with Multi-GPU management of InternVL2_5-78B (https://huggingface.co/OpenGVLab/InternVL2_5-78B)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
This PR does not fix a specific open issue. I was working on inference using the documentation provided in the official model card of InternVL2_5-78B for multiple GPUs [here](https://huggingface.co/OpenGVLab/InternVL2_5-78B). I got a device-mismatch error between GPU:0 and cpu and traced it back to this line.
It may happen with other models as well, maybe the newer Llama vision models (3.2), but I have no access to those models in Europe (see the CircleCI "copies" error).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, @qubvel, @ArthurZucker (being Text+Vision, I mentioned all the related ones)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
69,
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Big Model Inference",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/33744 |
TITLE
load_adapter method device setting bug in the from_pretrained
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
python==3.11.9
transformers==4.39.3
peft==0.12.0
### Who can help?
_No response_
### Information
- The official transformers and peft libraries.
### Tasks
- from_pretrained method in the PreTrainedModel class.
### Reproduction
Problem:
I am trying to load a model onto the second GPU (index 2) of an 8-GPU server. I have already set the device using `torch.cuda.set_device(2)`. However, when I load the adapter using `from_pretrained`, the adapter is loaded onto GPU 0, while the model is correctly loaded onto GPU 2.

Bug Occurrence:
Even though I am specifying the `device_map` as `cuda:2` in `from_pretrained`, when an adapter_model_path is provided, the following part of the code does not pass the `device_map`, and internally, it defaults to `auto`. As a result, the adapter gets loaded onto GPU 0 after passing through the function below.

More specifically, `load_peft_weights` is not given a specific `device_map` at line 193 of `peft.py`.

### Expected behavior
It must be loaded on the device specified in `from_pretrained`.
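A possible workaround sketch (an assumption based on the description above, not a confirmed fix; the model and adapter ids are placeholders, and it relies on `load_adapter` honoring its own `device_map` argument):
```python
import torch
from transformers import AutoModelForCausalLM

torch.cuda.set_device(2)
model = AutoModelForCausalLM.from_pretrained("base-model-id", device_map="cuda:2")
# Attach the adapter in a second step with an explicit device_map instead of relying on the default "auto".
model.load_adapter("adapter-model-id", device_map={"": "cuda:2"})
```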
| [
23,
64,
53
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Modeling",
"bug",
"PEFT"
] |
https://api.github.com/repos/huggingface/transformers/issues/34306 |
TITLE
ValueError: Some specified arguments are not used by the HfArgumentParser: ['model_name_or_path', 'show_model/model001', 'train_type', 'use_lora', 'data_path', 'data/AS_2022_train+test', 'per_device_train_batch_size', '1', 'per_device_eval_batch_size', '1', 'num_train_epochs', '5']
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers version:'4.45.2'
python version: 3.9.20
torch version: '2.4.1+cu124'

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
### run_show.py
import copy
import logging
import os
from dataclasses import dataclass, field
from functools import partial
from typing import Dict, List, Optional, Sequence

import torch
import transformers
from torch.utils.data import Dataset
from tqdm import tqdm
from transformers import (
    LlavaForConditionalGeneration,
    LlavaProcessor,
    Trainer,
    TrainingArguments,
)

from show_llava.data import LlavaDataset, TrainLlavaModelCollator
from show_llava.util import print_trainable_parameters

logger = logging.getLogger(__name__)

# import debugpy
# try:
#     # 5678 is the default attach port in the VS Code debug configurations. Unless a host and port are specified, host defaults to 127.0.0.1
#     debugpy.listen(("localhost", 9501))
#     print("Waiting for debugger attach")
#     debugpy.wait_for_client()
# except Exception as e:
#     pass


@dataclass
class ModelArguments:
    model_name_or_path: Optional[str] = field(default="test_model/model001")
    train_type: Optional[str] = field(
        default="use_lora",
        metadata={
            "help": """
            1. use_lora: train with LoRA,
            2. none: full-parameter training;
            3. freeze_vision: freeze only the vision_tower during training
            """
        },
    )


@dataclass
class DataArguments:
    data_path: str = field(
        default=None, metadata={"help": "Path to the training data."}
    )
    # source_length: int = field(default=128)
    # target_length: int = field(default=512)


def load_model_processor(modelargs: ModelArguments):
    model = LlavaForConditionalGeneration.from_pretrained(
        modelargs.model_name_or_path,
        torch_dtype=torch.bfloat16,
        low_cpu_mem_usage=True,
    )
    processor = LlavaProcessor.from_pretrained(modelargs.model_name_or_path)
    if modelargs.train_type == "use_lora":
        logging.warning("Loading model to Lora")
        from peft import LoraConfig, get_peft_model

        LORA_R = 32
        # LORA_ALPHA = 16
        LORA_DROPOUT = 0.05
        TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj"]
        config = LoraConfig(
            r=LORA_R,
            # lora_alpha=LORA_ALPHA,
            target_modules=TARGET_MODULES,
            lora_dropout=LORA_DROPOUT,
            bias="none",
            task_type="CAUSAL_LM",
            modules_to_save=["multi_modal_projector"],
        )
        model = get_peft_model(model, config)
        # model.print_trainable_parameters()
    elif modelargs.train_type == "none":
        logging.warning("Training with full parameters")
        pass
    elif modelargs.train_type == "freeze_vision":
        logging.warning("Freezing the vision_tower layers and training the remaining weights")
        for param in model.vision_tower.parameters():
            param.requires_grad = False
    print_trainable_parameters(model)
    return model, processor


def load_dataset_collator(processor, dataargs: DataArguments):
    llava_dataset = LlavaDataset(
        dataargs.data_path  # "data/liuhaotian/LLaVA-CC3M-Pretrain-595K"
    )
    data_collator = TrainLlavaModelCollator(processor, -100)
    return llava_dataset, data_collator


def train():
    parser = transformers.HfArgumentParser(
        (ModelArguments, DataArguments, TrainingArguments)
    )
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
    model, processor = load_model_processor(model_args)
    train_dataset, data_collator = load_dataset_collator(processor, data_args)
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=None,
        data_collator=data_collator,
    )
    trainer.train()
    trainer.save_state()
    trainer.save_model(output_dir=training_args.output_dir)


if __name__ == "__main__":
    logging.basicConfig(
        format="%(asctime)s %(levelname)s [%(name)s] %(message)s",
        level=logging.INFO,
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    train()
```
```
### launch.json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true,
            "args": [
                "--output_dir", "output20241021",
                "model_name_or_path", "show_model/model001",
                "train_type", "use_lora",
                "data_path", "data/AS_2022_train+test",
                "per_device_train_batch_size", "1",
                "per_device_eval_batch_size", "1",
                "num_train_epochs", "5",
            ]
        }
    ]
}
```
### Expected behavior
I can run the following command in CMD without issues:
```
python run_show.py --output_dir output20241021 --model_name_or_path show_model/model001 --train_type use_lora --data_path data/AS_2022_train+test --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --num_train_epochs 5
```
However, when I try to debug in the IDE, I encounter the following error:
```
ValueError: Some specified arguments are not used by the HfArgumentParser: ['model_name_or_path', 'show_model/model001', 'train_type', 'use_lora', 'data_path', 'data/AS_2022_train+test', 'per_device_train_batch_size', '1', 'per_device_eval_batch_size', '1', 'num_train_epochs', '5']
```
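A plausible explanation (an assumption based on the two invocations above, not something confirmed in this report) is that the `args` entries in `launch.json` are missing their `--` prefixes, so `HfArgumentParser`/`argparse` treats them as stray positional values. The debug configuration would then need entries of the form:
```json
"args": [
    "--output_dir", "output20241021",
    "--model_name_or_path", "show_model/model001",
    "--train_type", "use_lora",
    "--data_path", "data/AS_2022_train+test",
    "--per_device_train_batch_size", "1",
    "--per_device_eval_batch_size", "1",
    "--num_train_epochs", "5"
]
```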
| [
75,
67,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"Discussion",
"Usage",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33790 |
TITLE
ValueError: The checkpoint you are trying to load has model type `florence2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
COMMENTS
16
REACTIONS
+1: 3
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Windows 10x64
pytorch version: 2.4.0+cu124
Python 3.11.8
transformers-4.46.0. dev0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
# ComfyUI Error Report
## Error Details
- **Node Type:** DownloadAndLoadFlorence2Model
- **Exception Type:** ValueError
- **Exception Message:** The checkpoint you are trying to load has model type `florence2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
## Stack Trace
```
File "C:\pinokio\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pinokio\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pinokio\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\pinokio\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pinokio\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Florence2\nodes.py", line 97, in loadmodel
model = AutoModelForCausalLM.from_pretrained(model_path, attn_implementation=attention, device_map=device, torch_dtype=dtype,trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pinokio\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\auto_factory.py", line 526, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pinokio\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 1027, in from_pretrained
    raise ValueError(
```
### Expected behavior
DownloadAndLoadFlorence2Model
The checkpoint you are trying to load has model type `florence2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34634 |
TITLE
BarkProcessor voice_preset doesn't work
COMMENTS
8
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 1
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Windows-11-10.0.22631-SP0
- Python version: 3.12.7
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4080 SUPER
### Who can help?
@ylacombe
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**Code:**
```python
from bark import SAMPLE_RATE, generate_audio, preload_models
import sounddevice
from transformers import BarkModel, BarkProcessor
import torch
import numpy as np
from optimum.bettertransformer import BetterTransformer
from scipy.io.wavfile import write as write_wav
import re

def barkspeed(text_prompt):
    processor = BarkProcessor.from_pretrained("suno/bark-small")
    model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
    model = BetterTransformer.transform(model, keep_original_model=False)
    model.enable_cpu_offload()
    sentences = re.split(r'[.?!]', text_prompt)
    pieces = []
    for sentence in sentences:
        inp = processor(sentence.strip(), voice_preset=SPEAKER).to(device)
        audio = model.generate(**inp, do_sample=True, fine_temperature=0.4, coarse_temperature=0.5)
        audio = ((audio/torch.max(torch.abs(audio))).numpy(force=True).squeeze()*pow(2, 15)).astype(np.int16)
        pieces.append(audio)
    write_wav("bark_generation.wav", SAMPLE_RATE, np.concatenate(pieces))
    sounddevice.play(np.concatenate(pieces), samplerate=24000)
    sounddevice.wait()
```
**Error Message:**
```
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Traceback (most recent call last):
  File "F:\OllamaRAG\BarkUsage\BarkUsage.py", line 56, in <module>
    barkspeed("""Hey, have you heard about this new text-to-audio model called "Bark"?
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\OllamaRAG\BarkUsage\BarkUsage.py", line 47, in barkspeed
    audio = model.generate(**inp, do_sample=True, fine_temperature=0.4, coarse_temperature=0.5)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\transformers\models\bark\modeling_bark.py", line 1737, in generate
    coarse_output = self.coarse_acoustics.generate(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\transformers\models\bark\modeling_bark.py", line 1078, in generate
    semantic_output = torch.hstack([x_semantic_history, semantic_output])
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat)
```
### Expected behavior
I used this code to generate some audio. Before I upgraded transformers and bark, the voice preset didn't work: bark kept changing the preset. In the first half of the `__call__` function of `BarkProcessor` things seemed fine and the tensors were loaded properly, but in the `generate` function `history_prompt` was empty at first and was then loaded as all 10000. After I upgraded transformers and bark, the error message above appears. If I delete the `voice_preset=SPEAKER` part, the code runs, but the preset keeps changing as well. Could anyone please tell me how I can get the preset to work? | [
64,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/36182 |
TITLE
Add MLCD model
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 3
heart: 0
rocket: 4
eyes: 0
BODY
# What does this PR do?
This PR adds **MLCD** model from DeepGlint-AI Team.
Fixes #36181
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amyeroberts @qubvel @ArthurZucker
## Quick Test
```python
from transformers import AutoProcessor, MLCDVisionModel
from PIL import Image
import requests
# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")
processor = AutoProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")
# Process single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
# Get visual features
outputs = model(**inputs)
features = outputs.last_hidden_state
print(f"Extracted features shape: {features.shape}")
``` | [
77,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34917 |
TITLE
Bump tornado from 6.4.1 to 6.4.2 in /examples/research_projects/lxmert
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4.1 to 6.4.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/v6.4.2/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.4.2
releases/v6.4.1
releases/v6.4.0
releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/a5ecfab15e52202a46d34638aad93cddca86d87b"><code>a5ecfab</code></a> Bump version to 6.4.2</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/bc7df6bafdec61155e7bf385081feb205463857d"><code>bc7df6b</code></a> Fix tests with Twisted 24.7.0</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/d5ba4a1695fbf7c6a3e54313262639b198291533"><code>d5ba4a1</code></a> httputil: Fix quadratic performance of cookie parsing</li>
<li>See full diff in <a href="https://github.com/tornadoweb/tornado/compare/v6.4.1...v6.4.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/35446 |
TITLE
`tokenizer` should be replaced to `processing_class` in `Seq2SeqTrainer`?
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.1
- Platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 2070 SUPER
### Who can help?
@amyeroberts @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In the [`trainer_seq2seq.py`](https://github.com/huggingface/transformers/blob/5c75087aeee7081025370e10d1f571a11600f1ae/src/transformers/trainer_seq2seq.py#L372) file, there is still a call to `self.tokenizer`, which produces the deprecation warning "Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead."
```python
def _pad_tensors_to_max_len(self, tensor, max_length):
    if self.tokenizer is not None and hasattr(self.tokenizer, "pad_token_id"):
        # If PAD token is not defined at least EOS token has to be defined
        pad_token_id = (
            self.tokenizer.pad_token_id if self.tokenizer.pad_token_id is not None else self.tokenizer.eos_token_id
        )
```
### Expected behavior
I believe `self.tokenizer` should be replaced with `self.processing_class`:
```python
def _pad_tensors_to_max_len(self, tensor, max_length):
    if self.processing_class is not None and hasattr(self.processing_class, "pad_token_id"):
        # If PAD token is not defined at least EOS token has to be defined
        pad_token_id = (
            self.processing_class.pad_token_id if self.processing_class.pad_token_id is not None else self.processing_class.eos_token_id
        )
```
Is it okay for me to make a PR for this issue? :smile: | [
47,
66,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization",
"trainer",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35348 |
TITLE
Add DINOv2 with registers
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 2
eyes: 1
BODY
# What does this PR do?
This PR adds DINOv2 with registers, this time using the new modular tool.
Fixes #27379
To do:
- [x] make @bernardzach a co-author | [
77,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/36098 |
TITLE
Remove loading custom kernel for RT-DETRv2
COMMENTS
8
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
RT-DETRv2 does not have a custom kernel implemented, but still loads the one for the first version. This PR removes the unnecessary loading.
cc @jadechoghari | [
28,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"cleanup",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34016 |
TITLE
PeftModel is not an instance of PreTrainedModel. `No liger kernels will be applied.`
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
`transformers==4.45.1`
`peft==0.13.0`
`liger-kernel==0.3.1`
So `isinstance(model.base_model.model, PreTrainedModel)` returns `True`, but `isinstance(model, PreTrainedModel)` returns `False`, so no liger kernels are applied.
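For illustration, a minimal sketch of the kind of unwrapping check suggested in the expected behavior below (it assumes the PEFT wrapper exposes `base_model.model`, as described above; it is not the actual Trainer code):
```python
from peft import PeftModel
from transformers import PreTrainedModel

def resolve_liger_target(model):
    # Look through a PEFT wrapper so the underlying transformers model is checked instead.
    if isinstance(model, PeftModel):
        model = model.base_model.model
    return model if isinstance(model, PreTrainedModel) else None
```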
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Load any model with a PeftModel wrapper via `get_peft_model` and try to run with the `use_liger_kernel` flag in the trainer.
### Expected behavior
Apply the liger kernels. This could be as simple as adding a check for PEFT models that then inspects `model.base_model.model`, as sketched above. | [
64,
39,
53
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"optimization",
"PEFT"
] |
https://api.github.com/repos/huggingface/transformers/issues/35612 |
TITLE
Trainer: TensorBoardCallback not working for "on_save" and "on_save_end" events
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.47.1
torch 2.5.0+cu121
Ubuntu 22.04 LTS
### Who can help?
Hi @muellerzr and @SunMarc,
I'm trying to measure the storage I/O related to the Trainer's checkpoint-saving operations. For that purpose I implemented the following class:
```
import time
import psutil
from transformers.integrations import TensorBoardCallback

class DiskIOMonitoringCallback(TensorBoardCallback):
    def __init__(self, tb_writer=None):
        super().__init__(tb_writer=tb_writer)
        self.start_time = None
        self.initial_disk_io = None

    def _compute_disk_io_metrics(self):
        """Compute disk I/O metrics."""
        final_disk_io = psutil.disk_io_counters()
        if self.initial_disk_io:
            read_bytes = final_disk_io.read_bytes - self.initial_disk_io.read_bytes
            write_bytes = final_disk_io.write_bytes - self.initial_disk_io.write_bytes
        else:
            read_bytes, write_bytes = 0, 0
        return read_bytes, write_bytes

    def _init_summary_writer(self, args, log_dir=None):
        """Ensure TensorBoard writer is initialized."""
        log_dir = log_dir or args.logging_dir  # Use provided log_dir or fallback to args.logging_dir
        if self.tb_writer is None:
            from torch.utils.tensorboard import SummaryWriter
            self.tb_writer = SummaryWriter(log_dir=log_dir)

    def on_save(self, args, state, control, **kwargs):
        """Hook triggered before saving a checkpoint."""
        if self.tb_writer is None:
            self._init_summary_writer(args)
        # Record start time and initial disk I/O stats
        self.start_time = time.time()
        self.initial_disk_io = psutil.disk_io_counters()

    def on_save_end(self, args, state, control, **kwargs):
        """Hook triggered after saving a checkpoint."""
        # Calculate save duration
        save_duration = time.time() - self.start_time
        # Compute disk I/O metrics
        read_bytes, write_bytes = self._compute_disk_io_metrics()
        # Log metrics to TensorBoard
        if self.tb_writer:
            self.tb_writer.add_scalar("DiskIO/Save Duration (s)", save_duration, state.global_step)
            self.tb_writer.add_scalar("DiskIO/Read Bytes", read_bytes, state.global_step)
            self.tb_writer.add_scalar("DiskIO/Write Bytes", write_bytes, state.global_step)
        # Print metrics for debugging purposes
        print(f"Checkpoint saved in {save_duration:.2f}s. Read: {read_bytes} bytes, Write: {write_bytes} bytes.")
```
My trainer session is invoked like this:
```
training_args = TrainingArguments(
    output_dir="./results",
    optim="adamw_torch",
    num_train_epochs=6,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    weight_decay=0,
    warmup_steps=100,
    lr_scheduler_type="cosine",
    gradient_checkpointing=True,
    dataloader_num_workers=8,
    bf16=True,
    logging_steps=10,
    report_to="tensorboard",
    save_strategy="epoch",
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=None,
    data_collator=data_collator,
    callbacks=[
        GpuCpuMonitoringCallback(),
        SystemMonitoringCallback(),
        DiskIOMonitoringCallback(),
    ],
)

trainer.train()
```
The `GpuCpuMonitoringCallback` and `SystemMonitoringCallback` work properly, but I'm not getting data from `DiskIOMonitoringCallback` despite multiple implementation changes. Either I'm missing something, or something might not be working at the callback layer.
I really appreciate any help you can provide.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provided the code for reproduction in the description.
### Expected behavior
I expect Tensorboard to provide a card to show I/O disk data upon checkpoint saving operations. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34791 |
TITLE
<spam>
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| [
1
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/34567 |
TITLE
TrainerState's property `num_input_tokens_seen` is not updating
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```
- `transformers` version: 4.46.0
- Python version: 3.10.15
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100 80GB PCIe
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the sample code to reproduce the error
```python
from transformers import TrainerCallback, TrainingArguments, Trainer
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import Dataset
import torch

# Simple callback to monitor tokens
class TokenMonitorCallback(TrainerCallback):
    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step % 10 == 0:  # Print every 10 steps
            print(f"Step {state.global_step}, Tokens seen: {state.num_input_tokens_seen}")

    def on_epoch_end(self, args, state, control, **kwargs):
        print(f"Epoch end - Total tokens processed: {state.num_input_tokens_seen}")

# Create a tiny dataset
texts = ["Hello world", "This is a test", "Another example"] * 10
dataset = Dataset.from_dict({"text": texts})

# Initialize model and tokenizer
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenization function
def tokenize_function(examples):
    tokenized = tokenizer(
        examples["text"],
        padding="max_length",
        truncation=True,
        max_length=32,
        return_tensors="pt"
    )
    # Create labels by shifting input_ids
    tokenized["labels"] = tokenized["input_ids"].clone()
    return tokenized

# Tokenize dataset
tokenized_dataset = dataset.map(tokenize_function, batched=True, remove_columns=dataset.column_names)

# Training arguments
training_args = TrainingArguments(
    output_dir="./test-trainer",
    num_train_epochs=2,
    per_device_train_batch_size=4,
    logging_steps=10,
    save_steps=1000,
    learning_rate=2e-5,
    report_to="none"
)

# Initialize trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    callbacks=[TokenMonitorCallback()]
)

# Start training
trainer.train()
```
Following is the output
```
Epoch end - Total tokens processed: 0
Step 10, Tokens seen: 0
Epoch end - Total tokens processed: 0
TrainOutput(global_step=16, training_loss=5.371496677398682, metrics={'train_runtime': 56.2378, 'train_samples_per_second': 1.067, 'train_steps_per_second': 0.285, 'total_flos': 489931407360.0, 'train_loss': 5.371496677398682, 'epoch': 2.0})
```
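A possible explanation, offered as an assumption rather than a confirmed diagnosis: the Trainer only accumulates this counter when token counting is switched on in `TrainingArguments`, e.g. via the `include_num_input_tokens_seen` flag, so the script above might need:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./test-trainer",
    num_train_epochs=2,
    per_device_train_batch_size=4,
    include_num_input_tokens_seen=True,  # without this, state.num_input_tokens_seen seems to stay at 0
    report_to="none",
)
```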
### Expected behavior
This property should keep updating within the training loop with the number of input tokens seen at every step. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35291 |
TITLE
bugfix: torch.export failure caused by `_make_causal_mask`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Fix the `torch.export` failure caused by `AttentionMaskConverter._make_causal_mask`.
Recent changes in torch dynamo prevent mutations on tensors produced by aten::_to_copy. To address this, we can clone such a tensor before performing the in-place `masked_fill_` operation, but only when the code is being compiled by torch dynamo. (Relevant issue on PyTorch: https://github.com/pytorch/pytorch/issues/127571)
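Illustrative pattern only (a simplified sketch of the approach described above, not the exact `AttentionMaskConverter` code; the compile-check helper is an assumption about the available torch API):
```python
import torch

def masked_fill_compile_safe(mask: torch.Tensor, fill_cond: torch.Tensor) -> torch.Tensor:
    # Clone only while torch dynamo is tracing/compiling, so eager execution keeps the in-place behavior.
    if torch.compiler.is_compiling():  # assumed available; older torch exposes torch._dynamo.is_compiling()
        mask = mask.clone()
    mask.masked_fill_(fill_cond, 0)
    return mask
```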
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
PyTorch: @gante @Rocketknight1
| [
11
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"torch export"
] |
https://api.github.com/repos/huggingface/transformers/issues/33547 |
TITLE
Wrong ValueError in modeling_videomae.py?
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Hello,
In **line 928** of modeling_videomae.py, I was wondering whether the condition and the `ValueError` raised are _wrong_, since **line 883** already checks whether `num_channels != 3` and, if so, does not _unnormalize_. Meanwhile, the `else` block of line 928 is concerned with 'not normalizing' the pixels, not with 'unnormalizing' in the sense of reversing the preprocessing steps (**line 887**). Even though I am setting `norm_pix_loss = False` when using `num_channels = 1`, I still get a `ValueError: Can't unnormalize non-RGB images. Consider setting config.norm_pix_loss to False.`
Thank you in advance.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Not needed
### Expected behavior
N/A | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34808 |
TITLE
The usage of the "forced_decoder_ids" parameter
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
How to use the "forced_decoder_ids" parameter for decoder-only models? This parameter seems to be deprecated in the latest version.

### Motivation
This is an important function.
### Your contribution
This is an important function.
| [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35501 |
TITLE
Fix typo is_soundfile_availble
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Please fix the typo in transformers.utils
is_soundfile_availble should be is_soundfile_available
### Motivation
Should be self explanatory
### Your contribution
There already is a PR but it hasn't been merged for multiple months even though this fix takes 1 minute.
https://github.com/huggingface/transformers/pull/35030 | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35385 |
TITLE
Support modernBERT for encoder-decoder models
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
The docs state that the [EncoderDecoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel) can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder. Though [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) isn't supported:
```
File "/content/syntax_transformer/data/../models/encoderDecoder.py", line 40, in __init__
self.model = EncoderDecoderModel.from_encoder_decoder_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 538, in from_encoder_decoder_pretrained
decoder = AutoModelForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 567, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.modernbert.configuration_modernbert.ModernBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, GitConfig, GlmConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig.
```
### Motivation
ModernBert has a better performance and a longer context length.
### Your contribution
How would it be possible to support ModernBERT? It isn't that different from other BERT models. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33504 |
TITLE
Video Processor as a separate class
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 1
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Since we currently have more and more VLMs that support image and video, and videos are not always processed the same way as images, I want to add a `VideoProcessor` class that inherits from `ImageProcessingMixin`. That way we can have two separate classes for processing visuals, each with its own set of attributes and methods. We can also save different configs for both to avoid issues such as #33484. The `VideoProcessor` will mainly use the same transform methods as the slow image processors, by iterating over each frame and stacking the results. Some additional helper functions can be added, like `load_video` and `make_list_of_videos`. The main input name will be `videos` and the output key will be `pixel_values_videos`.
For `load_video` we can probably rely on `av`, but I find it super slow compared to other video decoders. I'll try to get a small comparison benchmark for that; unfortunately `decord` can't be used as it had problems with models on cuda.
In the long term we might consider adding video transforms where each video is transformed in one call, instead of each video frame, similar to fast image processing with `torchvision`.
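A rough sketch of what such a class could look like, based only on the description above (the class name, base class, and method names are part of the proposal, not an existing transformers API):
```python
from transformers import BatchFeature, ImageProcessingMixin

class VideoProcessor(ImageProcessingMixin):
    # Proposed conventions from the description above: main input name "videos",
    # output key "pixel_values_videos".
    model_input_names = ["pixel_values_videos"]

    def __call__(self, videos, return_tensors=None, **kwargs):
        # `videos` is a single video (sequence of frames) or a list of videos; each frame would be
        # transformed with the existing slow image transforms and the frames stacked per video.
        if not isinstance(videos, list):
            videos = [videos]
        pixel_values_videos = [self.preprocess_video(video, **kwargs) for video in videos]
        return BatchFeature(data={"pixel_values_videos": pixel_values_videos}, tensor_type=return_tensors)

    def preprocess_video(self, video, **kwargs):
        # Placeholder for the per-frame transforms described above.
        raise NotImplementedError
```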
To Do:
- [ ] Add the VideoProcessor class and integrate with llava-next-video which is one of the models with different processing for image and videos.
- [ ] After the changed are approved and merged, the following models will be easy to modify:
- [ ] Video-LLaVa
- [ ] Qwen2-VL
- [ ] LLaVA-OneVision
- [ ] Instructblip-Video might need deprecation as it currently accepts images as main arg and returns pixel_values . TBH, it is a video-only model so we can disregard changing it, same was as we won't touch VIVIT and other video-only models
### Motivation
Easier integration of multimodal LLMs
### Your contribution
@amyeroberts WDYT about this suggestion? Would love to hear your opinion 🤗 | [
76,
62,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Vision",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/34055 |
TITLE
little suggestion on pad_token check in LlamaForSequenceClassification class
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
When I use LlamaForSequenceClassification to train the official Llama 3.1 8B model on a multi-label classification task, the Llama model doesn't have a pad_token, so we have to set one ourselves.
Usually, we write
`tokenizer.pad_token = tokenizer.eos_token` or `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`
However, the code for check pad_token in the LlamaForSequenceClassification class is:
```
if self.config.pad_token_id is None and batch_size != 1:
    raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
```
It reads the pad token id from the model config and checks that, so I have to modify the config file instead of simply setting the pad_token in my code; otherwise the error is raised. It's quite inconvenient, and I think it could be improved.
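For reference, a common workaround (stated as a suggestion, not as the official recommendation; the `num_labels` value below is just an example) is to mirror the tokenizer's pad token onto the model config so this check passes:
```python
from transformers import AutoTokenizer, LlamaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = LlamaForSequenceClassification.from_pretrained("meta-llama/Meta-Llama-3.1-8B", num_labels=4)

tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id  # keep the model config in sync with the tokenizer
```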
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = LlamaForSequenceClassification.from_pretrained("meta-llama/Meta-Llama-3.1-8B", num_labels=num_labels)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```
Then call any function which involves the forward pass, like training or inference.
### Expected behavior
don't need to change the config file to avoid these exception | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34885 |
TITLE
GroundingDINO cannot work with MiniGPT4
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: 1
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@zucchini-nlp @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I use the minigpt4 model from the repository https://github.com/Vision-CAIR/MiniGPT-4, I find that Grounding DINO cannot be used together with it.
Specifically, when I import some necessary modules from the minigpt4 repository into my project (without doing anything else with the minigpt4 repo) and use the transformers Grounding DINO model, DINO crashes the program directly at the `model(**encoded_inputs)` call with exit code SIG(117), and no traceback or other information is provided.
Other models, such as flan-t5-base-VG-factual-sg, do not crash during their forward pass even when minigpt4 is imported.
After commenting out the four import lines related to minigpt4, there are no issues anymore.
```python
import torch
from PIL import Image
from transformers import (
GroundingDinoForObjectDetection,
GroundingDinoProcessor,
)
# imports modules for registration
from minigpt4.datasets.builders import * # noqa
from minigpt4.models import * # noqa
from minigpt4.processors import * # noqa
from minigpt4.tasks import * # noqa
image_path = "/root/llm-project/LVLM/eval/Extended_CHAIR/images/chair-500/000000006763.jpg"
image: Image.Image = Image.open(image_path)
model: GroundingDinoForObjectDetection = (
GroundingDinoForObjectDetection.from_pretrained(
"IDEA-Research/grounding-dino-base",
cache_dir="/root/llm-project/utils/models/hub",
torch_dtype="auto",
low_cpu_mem_usage=True,
)
.to("cuda")
.eval()
)
processor: GroundingDinoProcessor = GroundingDinoProcessor.from_pretrained(
"IDEA-Research/grounding-dino-base",
cache_dir="/root/llm-project/utils/models/hub",
)
text = "man.umbrella.top hat."
with torch.inference_mode():
encoded_inputs = processor(
images=image,
text=text,
max_length=200,
return_tensors="pt",
padding=True,
truncation=True,
).to("cuda")
outputs = model(**encoded_inputs) # Crash here
results = processor.post_process_grounded_object_detection(
outputs=outputs,
input_ids=encoded_inputs["input_ids"],
box_threshold=0.25,
text_threshold=0.25,
)
print(results)
```
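One way to get more information on such a hard crash (a debugging sketch, not a fix) is to enable the standard library's `faulthandler` before running the snippet above:
```python
import faulthandler

# Print the Python traceback if the interpreter is killed by a fatal signal (e.g. SIGSEGV
# from a native extension), which otherwise terminates the process with no traceback at all.
faulthandler.enable()
```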
### Expected behavior
Since this issue involves other repositories, could you help resolve it, or at least guide me on how to find the deeper cause? Combining multiple models is important for my project, but this crash produces no traceback, leaving me without a starting point. | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34466 |
TITLE
Flash attention build running forever on colab
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### BUG DESCRIPTION
Running a script on Google Colab to fine-tune Llama 3 8B with flash attention.
This issue is not directly related to transformers but to an extension library: flash-attn.
**During the installation of the last package, "flash-attn", I get the following line in the console, which runs forever:**
`Building wheels for collected packages: flash-attn`
**The issue was not present before October 15, 2024, when this installation worked fine.**
### System Info
Running a script on Google Colab to fine-tune Llama 3 8B with flash attention.
Setup of packages:
```
!pip install -U transformers
!pip install -U datasets
!pip install -U accelerate
!pip install -U peft
!pip install -U trl
!pip install -U bitsandbytes
!pip install -U wandb
!pip install -U flash-attn
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. go to google colab, and set A100 gpu
2. Setup the following code for downloading packages
```
!pip install -U transformers
!pip install -U datasets
!pip install -U accelerate
!pip install -U peft
!pip install -U trl
!pip install -U bitsandbytes
!pip install -U wandb
!pip install -U flash-attn
```
4. wait
The issue was not present before October 15, 2024, when this installation worked fine.
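One thing that may be worth trying (an assumption, based on flash-attn compiling its CUDA kernels from source when no prebuilt wheel matches the environment) is capping the number of parallel ninja jobs and skipping build isolation, which the flash-attn project documents for RAM-limited machines:
```
!MAX_JOBS=4 pip install flash-attn --no-build-isolation
```
With many parallel compile jobs on a RAM-limited Colab VM, the build can stall; capping `MAX_JOBS` sometimes lets it finish.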
### Expected behavior
Building the wheels should finish in 1-2 minutes at most; instead it never ends, even after waiting 30 minutes. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35896 |
TITLE
Qwen2FlashAttention sliding windows are applied to wrong layers
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
According to the definition of max_window_layers
> `max_window_layers` — The number of layers that use SWA (Sliding Window Attention). The bottom layers use SWA while the top use full attention.
When `max_window_layers == num_hidden_layers`, every layer should use SWA. But the current implementation only enables SWA when `self.layer_idx >= self.config.max_window_layers`, which makes all layers use full attention instead.
https://github.com/huggingface/transformers/blob/fc269f77da72d4c65b2e71e6d4896cd16c6f1e76/src/transformers/models/qwen2/modular_qwen2.py#L71C1-L75C11
Changing `self.layer_idx >= self.config.max_window_layers` to `self.layer_idx < self.config.max_window_layers` may solve the issue.
Please correct me if my understanding of max_window_layers is wrong.
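For illustration, here is a standalone sketch of the layer selection I would expect from the docs (a hypothetical helper, not the code that ships in transformers; `config` stands in for a Qwen2-style config object):
```python
def sliding_window_for_layer(config, layer_idx: int):
    """Return the sliding window size for a layer, or None for full attention."""
    if (
        getattr(config, "use_sliding_window", False)
        and getattr(config, "sliding_window", None) is not None
        and layer_idx < config.max_window_layers  # bottom layers use SWA
    ):
        return config.sliding_window
    return None
```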
### Who can help?
@ArthurZucker
### Reproduction
1. Change the `config.json` of a Qwen2 model to use sliding window
```
{
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 32768,
"max_window_layers": 36,
"model_type": "qwen2",
"num_attention_heads": 16,
"num_hidden_layers": 36,
"num_key_value_heads": 2,
"rms_norm_eps": 1e-06,
"rope_theta": 1000000.0,
"sliding_window": 500,
"tie_word_embeddings": true,
"torch_dtype": "bfloat16",
"transformers_version": "4.40.1",
"use_cache": true,
"use_mrope": false,
"use_sliding_window": true,
"vocab_size": 151936
}
```
2. Add `print(sliding_window, self.config.max_window_layers, self.layer_idx)` to the forward function of `Qwen2FlashAttention2`.
3. run generation with a Qwen2 model
```python
from transformers import AutoTokenizer, Qwen2ForCausalLM
import torch
# Load the model and tokenizer
model = Qwen2ForCausalLM.from_pretrained("/home/hanzhenhua/Qwen2.5-3B", attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("/home/hanzhenhua/Qwen2.5-3B")
# Move the model to GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Prepare the input prompt
prompt = '\n'.join(["Hey, are you conscious? Can you talk to me? " for _ in range(300)])
print(prompt)
# Tokenize the input
input_ids = tokenizer.encode(prompt, return_tensors="pt")
input_ids = input_ids.to(device)
# Generate text
# Generate output with a maximum of 50 tokens
output = model.generate(input_ids, max_new_tokens=50)
# Decode the output tokens to text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
### Expected behavior
The debug printer shows `sliding_window = None`, which means sliding_window is not taking effect in flash attention.
```None 36 0
None 36 1
None 36 2
None 36 3
None 36 4
None 36 5
None 36 6
None 36 7
None 36 8
None 36 9
None 36 10
None 36 11
None 36 12
None 36 13
None 36 14
None 36 15
None 36 16
None 36 17
None 36 18
None 36 19
None 36 20
None 36 21
None 36 22
None 36 23
None 36 24
None 36 25
None 36 26
None 36 27
None 36 28
None 36 29
None 36 30
None 36 31
None 36 32
None 36 33
None 36 34
None 36 35
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35265 |
TITLE
inconsistent execution time
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@Arther
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Basically, I have dataframes of 100 rows each, 10 files in total. I want to send them one by one to my 4 GPUs. I use Llama 3 and am measuring the execution (completion) time. Whether I use data parallelism or model parallelism for inference, I get roughly the same execution time. Let's say:
```
# using 1 GPU
one dataframe - > prompt template
response = model.generate(prompt)
execution time: 20 second.
```
```
# using 3 GPU
# with data-parallel
one dataframe - > prompt template
response = model.generate(prompt)
execution time: 20 second (GPU:0)
execution time: 21 second (GPU:0)
execution time: 19 second (GPU:0)
```
Model definition
```python
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
torch_dtype = torch.float16
attn_implementation = "eager"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch_dtype,
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=bnb_config,
device_map={"": torch.cuda.current_device()},
attn_implementation=attn_implementation
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=[
'up_proj', 'down_proj', 'gate_proj',
'k_proj', 'q_proj', 'v_proj', 'o_proj'
]
)
model = get_peft_model(model, peft_config)
for df_file in tqdm(xcel_list):
df = pd.read_excel(df_file)
messages = prepare_prompt_from_excel(df)
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer(
prompt,
return_tensors='pt',
padding=True,
truncation=True
).to("cuda")
start = time.time()
outputs = model.generate(
**inputs,
max_length=2048,
num_return_sequences=1
)
exe = time.time() - start
...
```
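If the goal is true data parallelism, each process should only see a slice of the files rather than the full list. A minimal sketch (assuming one process per GPU, launched e.g. with `torchrun`, and reusing `xcel_list` from the loop above):
```python
import os

rank = int(os.environ.get("RANK", 0))            # set by torchrun per process
world_size = int(os.environ.get("WORLD_SIZE", 1))
# Each process handles every world_size-th file, so the GPUs no longer duplicate work.
my_files = xcel_list[rank::world_size]
```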
### Expected behavior
If I use data parallelism on multiple GPUs, a replica of the model is placed on every GPU and the data is split across them. If I use model parallelism (`device_map="auto"`), the layers of the model are distributed across the GPUs. I was expecting single-GPU inference to take longer, and multi-GPU inference (either data- or model-parallel) to be faster, but the single-GPU and multi-GPU inference times are almost comparable. My other concern is that, when using data parallelism, I send the same dataframe / prompt template to all GPUs: does this single prompt template get split into chunks? I doubt it. Each GPU still sees the full data in this data-parallel setup, and that is probably why the execution times are similar, no? | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36059 |
TITLE
Code for VisionT5Model
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Right now you can't use T5 as the decoder block in VisionEncoderDecoderModel, so I wrote some code here that almost does that. I'm trying to get some help checking whether it covers everything I need and whether I can use it directly; I am planning to use it for an OCR code base.
``` python
import copy
import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from typing import Optional, Tuple, Union
from transformers import (
    PreTrainedModel,
    GenerationMixin,
    VisionEncoderDecoderConfig,
    AutoModel,
    T5Config,
    ViTModel,
)
from transformers.models.t5.modeling_t5 import T5Stack  # T5Stack is not exported at the top level
from transformers.modeling_outputs import Seq2SeqLMOutput
class VisionT5Model(PreTrainedModel, GenerationMixin):
"""
A vision-text model using a ViT-like encoder and a T5 decoder stack.
It mimics the design of VisionEncoderDecoderModel but replaces the decoder
with a T5 decoder. Useful for tasks like OCR, image captioning, etc.
"""
config_class = VisionEncoderDecoderConfig
base_model_prefix = "vision_t5"
main_input_name = "pixel_values"
def __init__(self, config: VisionEncoderDecoderConfig):
"""
Args:
config (VisionEncoderDecoderConfig):
Configuration for the vision-encoder–text-decoder model.
- config.encoder should be a vision config (e.g. ViTConfig)
- config.decoder should be a T5Config
"""
super().__init__(config)
# ----------------------
# 1) Load the Vision Encoder
# ----------------------
self.encoder = ViTModel(config.encoder)
# Make sure it does NOT have a "head" for classification etc.
if self.encoder.get_output_embeddings() is not None:
raise ValueError("The encoder should not have a LM head; please use a bare vision backbone.")
# ----------------------
# 2) Build the T5 decoder stack (no encoder part!)
# ----------------------
# We copy the T5 config from config.decoder
# Then ensure is_decoder=True, is_encoder_decoder=False, etc.
t5_decoder_config = T5Config.from_dict(config.decoder.to_dict())
t5_decoder_config.is_decoder = True
t5_decoder_config.is_encoder_decoder = False
t5_decoder_config.num_layers = config.decoder.num_layers
# If you want cross-attention in T5, it must have `add_cross_attention=True`.
# Usually T5's is_decoder implies that anyway, but just to be safe:
t5_decoder_config.add_cross_attention = True
self.decoder = T5Stack(t5_decoder_config)
# Optionally, if the hidden sizes differ, we need a projection:
if self.encoder.config.hidden_size != t5_decoder_config.d_model:
self.enc_to_dec_proj = nn.Linear(
self.encoder.config.hidden_size, t5_decoder_config.d_model, bias=False
)
else:
self.enc_to_dec_proj = None
# ----------------------
# 3) Final LM head (same as T5's)
# ----------------------
self.lm_head = nn.Linear(t5_decoder_config.d_model, t5_decoder_config.vocab_size, bias=False)
if t5_decoder_config.tie_word_embeddings:
self.lm_head.weight = self.decoder.embed_tokens.weight
self.model_dim = t5_decoder_config.d_model # keep track if we want the T5 scaling
# Initialize weights, etc.
self.post_init()
def get_encoder(self):
return self.encoder
def get_decoder(self):
return self.decoder
def get_input_embeddings(self):
"""By convention, the 'input embeddings' come from the decoder if needed."""
return self.decoder.embed_tokens
def set_input_embeddings(self, new_embeddings):
self.decoder.set_input_embeddings(new_embeddings)
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def forward(
self,
pixel_values: torch.FloatTensor,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
return_dict: Optional[bool] = None,
**decoder_kwargs
) -> Union[Seq2SeqLMOutput, Tuple[torch.FloatTensor]]:
"""
pixel_values: (batch, channels, height, width)
The images to encode (e.g. from ViTFeatureExtractor).
decoder_input_ids: (batch, tgt_seq_len)
Input tokens to the T5 decoder.
labels: (batch, tgt_seq_len)
If given, we compute LM loss by teacher-forcing and produce CrossEntropyLoss.
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
use_cache = use_cache if use_cache is not None else self.config.decoder.use_cache
# 1) Run the vision encoder if needed
if encoder_outputs is None:
encoder_outputs = self.encoder(pixel_values=pixel_values, return_dict=True)
# encoder_outputs.last_hidden_state shape => (batch, seq_len, hidden_size)
hidden_states = encoder_outputs.last_hidden_state
# Possibly project to match T5 dimension
if self.enc_to_dec_proj is not None:
hidden_states = self.enc_to_dec_proj(hidden_states)
# 2) Prepare decoder inputs
# If we have labels but no decoder_input_ids, shift-right internally
if labels is not None and decoder_input_ids is None:
# Standard T5 shift-right:
decoder_input_ids = self._shift_right(labels)
# T5 decoder forward
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=hidden_states,
encoder_attention_mask=None, # If you want to mask out padding in hidden_states, pass it here
past_key_values=past_key_values,
use_cache=use_cache,
return_dict=True,
**decoder_kwargs,
)
sequence_output = decoder_outputs[0] # (batch, tgt_len, d_model)
# 3) Final LM head
# T5 typically scales by d_model^-0.5 if tie_word_embeddings = True,
# but you can do that if needed.
if self.config.decoder.tie_word_embeddings:
sequence_output = sequence_output * (self.model_dim ** -0.5)
logits = self.lm_head(sequence_output)
loss = None
if labels is not None:
# Compute standard LM loss
loss_fct = CrossEntropyLoss(ignore_index=-100)
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
if not return_dict:
# Return (loss, logits, past, decoder_outputs, encoder_outputs)
out = (logits,) + decoder_outputs[1:] + (encoder_outputs,)
return ((loss,) + out) if loss is not None else out
return Seq2SeqLMOutput(
loss=loss,
logits=logits,
past_key_values=decoder_outputs.past_key_values,
decoder_hidden_states=decoder_outputs.hidden_states,
decoder_attentions=decoder_outputs.attentions,
cross_attentions=decoder_outputs.cross_attentions,
encoder_last_hidden_state=hidden_states,
encoder_hidden_states=encoder_outputs.hidden_states,
encoder_attentions=encoder_outputs.attentions,
)
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past_key_values=None,
encoder_outputs=None,
**kwargs,
):
"""
During generation, the `generate()` method calls this to assemble the inputs for each step.
"""
if past_key_values is not None:
# we only need to pass the last token of decoder_input_ids
decoder_input_ids = decoder_input_ids[:, -1:].clone()
return {
"pixel_values": None, # not needed if `encoder_outputs` is already computed
"decoder_input_ids": decoder_input_ids,
"past_key_values": past_key_values,
"encoder_outputs": encoder_outputs,
"use_cache": kwargs.get("use_cache"),
}
def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor) -> torch.Tensor:
return self._shift_right(labels)
def _reorder_cache(self, past_key_values, beam_idx):
# if decoder past is not included in output
# speedy decoding is disabled and no need to reorder
if past_key_values is None:
print("You might want to consider setting `use_cache=True` to speed up decoding")
return past_key_values
reordered_decoder_past = ()
for layer_past_states in past_key_values:
# get the correct batch idx from layer past batch dim
# batch dim of `past` is at 2nd position
reordered_layer_past_states = ()
for layer_past_state in layer_past_states:
# need to set correct `past` for each of the four key / value states
reordered_layer_past_states = reordered_layer_past_states + (
layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),
)
if reordered_layer_past_states[0].shape != layer_past_states[0].shape:
raise ValueError(
f"reordered_layer_past_states[0] shape {reordered_layer_past_states[0].shape} and layer_past_states[0] shape {layer_past_states[0].shape} mismatched"
)
if len(reordered_layer_past_states) != len(layer_past_states):
raise ValueError(
f"length of reordered_layer_past_states {len(reordered_layer_past_states)} and length of layer_past_states {len(layer_past_states)} mismatched"
)
reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
return reordered_decoder_past
def _shift_right(self, labels: torch.LongTensor) -> torch.LongTensor:
"""
Same shifting that T5 does: pad -> start token -> ... -> y[0..-2]
"""
# In T5, the decoder_start_token_id is often the same as pad_token_id
# But check or override as needed.
decoder_start_token_id = self.config.decoder.decoder_start_token_id
if decoder_start_token_id is None:
# default fallback
decoder_start_token_id = self.config.decoder.pad_token_id
pad_token_id = self.config.decoder.pad_token_id
# create shifted ids
shifted = labels.new_zeros(labels.shape)
shifted[..., 1:] = labels[..., :-1].clone()
shifted[..., 0] = decoder_start_token_id
# replace -100 with pad_token_id
shifted.masked_fill_(shifted == -100, pad_token_id)
return shifted
```
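For completeness, here is an (untested) sketch of how I imagine instantiating and exercising the class above; the shapes and vocab indices are just dummy placeholders:
```python
import torch
from transformers import VisionEncoderDecoderConfig, ViTConfig, T5Config

config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(ViTConfig(), T5Config())
model = VisionT5Model(config)

pixel_values = torch.randn(1, 3, 224, 224)                     # dummy image batch
labels = torch.randint(0, config.decoder.vocab_size, (1, 16))  # dummy target ids
outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss)
```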
### Motivation
For OCR Project
### Your contribution
T5 can be used as a decoder block for vision models | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35414 |
TITLE
`modular_model_converter` can not handle objects import via try - except
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.48.0.dev0 d8c1db2f568d4bcc254bc046036acf0d6bba8373
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# How to reproduce?
1. Clone the Transformers repository and check out the specified commit:
`git clone git@github.com:huggingface/transformers.git && cd transformers && git checkout d8c1db2f568d4bcc254bc046036acf0d6bba8373`
2. Create a new folder named `xxx_model` in `src/transformers/models/`
3. Inside this folder, create a new Python file called `modular_xxx.py` with the following content:
```
import torch
import torch.nn as nn
try:
import torch.nn.functional as F
except:
pass
from ..llama.modeling_llama import (
LlamaMLP,
)
class Model(nn.Module):
def forward(self, x, w):
return F.linear(x, w)
```
4. Run the following command to execute the model converter:
`python utils/modular_model_converter.py --files_to_parse src/transformers/models/xxx_model/modular_xxx.py`
This will generate the modeling file at: `src/transformers/models/xxx_model/modeling_xxx.py`.
### Expected behavior
# Expected vs Actual Contents in src/transformers/models/xxx_model/modeling_xxx.py
The **expected** contents in `src/transformers/models/xxx_model/modeling_xxx.py` is :
```
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/xxx_model/modular_xxx.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_xxx.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
import torch.nn as nn
try:
import torch.nn.functional as F
except:
pass
class Model(nn.Module):
def forward(self, x, w):
return F.linear(x, w)
```
However, the **actual** content generated in `src/transformers/models/xxx_model/modeling_xxx.py` is :
```
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/xxx_model/modular_xxx.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_xxx.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
import torch.nn as nn
class Model(nn.Module):
def forward(self, x, w):
return F.linear(x, w)
```
# Issue
The lines `try: import torch.nn.functional as F except: pass` are missing from the actual output, even though they exist in the original modular file.
| [
64,
45
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Modular"
] |
https://api.github.com/repos/huggingface/transformers/issues/35882 |
TITLE
Request to add Co-DETR
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
> A collaborative hybrid assignments training scheme, namely **Co-DETR**, learns more efficient and effective DETR-based detectors from versatile label assignment manners. This new training scheme can easily enhance the encoder’s learning ability in end-to-end detectors by training the multiple parallel auxiliary heads supervised by one-to-many label assignments such as ATSS and Faster RCNN. In addition, we conduct extra customized positive queries by extracting the positive coordinates from these auxiliary heads to improve the training efficiency of positive samples in the decoder. In inference, these auxiliary heads are discarded and thus our method introduces no additional parameters and computational cost to the original detector while requiring no hand-crafted non-maximum suppression (NMS).
Quote from their paper: https://arxiv.org/pdf/2211.12860
SotA on **Object Detection on COCO test-dev**: https://paperswithcode.com/sota/object-detection-on-coco
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Repository & Weight Links: https://github.com/Sense-X/Co-DETR
MMDet Implementation: https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR
Author: Zhuofan Zong https://github.com/TempleX98 | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/33404 |
TITLE
Bug: The elements of the batch contain different keys. Cannot batch them ...
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
DGX Station
Ubuntu 20.04
Transformers 4.44.2
torch 2.4.1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
So I ran into this bug which I did not experience in prior versions of transformers (Unfortunately I did not document which was the last one working). What I basically do in transcribe_audio() is to call different model architectures depending on their name:
```
from typing import List

def transcribe_audio(audiopath: List[str], model, repo_name: str, parameters: dict, lang: str) -> List[str]:
    tmp = model(audiopath,
                generate_kwargs={"language": lang}
                )
    res = [i["text"] for i in tmp]
    return res
```
The bug:
```
You have passed task=transcribe, but also have set `forced_decoder_ids` to [[1, None], [2, 50359]] which creates a conflict. `forced_decoder_ids` will be ignored in favor of task=transcribe.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Traceback (most recent call last):
File "/raid/.../Whisper/main.py", line 139, in <module>
transcript = transcribe_audio(filenames, m, repo, par, lang.value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/benchmark.py", line 247, in transcribe_audio
tmp = model(audiopath,
^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 284, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1238, in __call__
outputs = list(final_iterator)
^^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 269, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 673, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 43, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/raid/.../Whisper/venv2/lib/python3.12/site-packages/transformers/pipelines/base.py", line 175, in inner
raise ValueError(
ValueError: The elements of the batch contain different keys. Cannot batch them ({'input_features', 'is_last'} != {'num_frames', 'input_features', 'is_last'})
```
This error is only raised when I initialize a new model. I loop through different models (whisper-v2, whisper-v3, ...) and initialize a new model at the beginning of each loop. The first loop with inference runs through; then, in the second loop, this bug appears after a few inferences. Do you have an idea what this could be related to? I just overwrite my old model object with the new one, and I also empty any torch cache before the new init.
### Expected behavior
Just to work like before, looping through models and doing inference. There should not be any internal bug like this. | [
67,
64,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Usage",
"bug",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/34407 |
TITLE
The maximum value of input_ids must be smaller than the embedding layer's input dimension. (TFBartEncoder)
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.46.0 (tried all >= 4.39)
tokenizers 0.20.1 (tried all >= 0.19)
tensorflow 2.18.0
tf-keras 2.18.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code:
```
import gc

from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
from typing import Callable, List
def map_embeddings_to_words(encoding, vectors, reduction_function: Callable = tf.stack):
"""
Maps subword token embeddings from a Huggingface transformer model onto words (or predefined tokens).
:param encoding: Encoding from tokenizer's output.
:param vectors: Embedding from model's output.
:param reduction_function: Optional, TensorFlow function to reduce subword token embeddings to a single word embedding.
    :return: List[tf.Tensor]: one embedding per word.
"""
reduced_vectors = []
length = max(i for i in encoding.word_ids if i is not None) + 1
for i in range(length):
start, end = encoding.word_to_tokens(i)
w = reduction_function(vectors[start:end, :])
reduced_vectors.append(w)
reduced_vectors = list(map(tf.constant, reduced_vectors))
return reduced_vectors
def embedding(corpus: List[List[str]], tokenizer, transformer, batch_size: int = 128, reduction: Callable = tf.stack, input_transform: Callable = None, tensors_: str = "tf"):
"""
Function to embed tokenized corpus into TensorFlow tensors.
:param corpus: List of tokenized pseudo-sentences.
:param tokenizer: HuggingFace tokenizer.
:param transformer: HuggingFace transformer model with encoder module.
:param batch_size: Batch size.
:param reduction: Optional, function to reduce subword embeddings to a single word embedding vector.
:return: List[List[tf.Tensor]]
"""
final_embeddings = []
for i in range(0, len(corpus), batch_size):
batch = corpus[i:i+batch_size]
encoded_input = tokenizer(batch, is_split_into_words=True, padding=True, return_tensors=tensors_, add_special_tokens = False)
if hasattr(transformer, "encoder"):
output = transformer.encoder(**encoded_input)
else:
output = transformer(**encoded_input)
word_embeddings = [map_embeddings_to_words(encoding, vectors, reduction) for encoding, vectors in
zip(encoded_input.encodings, output.last_hidden_state)]
final_embeddings.extend(word_embeddings)
gc.collect()
return final_embeddings
if __name__ == "__main__":
model = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model)
transformer = TFAutoModel.from_pretrained(model)
TENSORS_ = "tf"
REDUCTION = tf.stack
triples_train = [("a", "p", "c"), ("x", "p", "y"), ("s", "p", "o")]
X_train = embedding(triples_train, tokenizer, transformer, reduction=REDUCTION, tensors_=TENSORS_)
```
Error:
```
Traceback (most recent call last):
File "/home/serusr01/lognets/scripts/entity_learning.py", line 85, in <module>
X_train = embedding(triples_train, tokenizer, transformer, reduction=REDUCTION, tensors_=TENSORS_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serusr01/lognets/scripts/utils.py", line 159, in embedding
output = transformer(**encoded_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/tf_keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/modeling_tf_utils.py", line 437, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/models/bart/modeling_tf_bart.py", line 1311, in call
outputs = self.model(
^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/modeling_tf_utils.py", line 437, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/models/bart/modeling_tf_bart.py", line 1196, in call
encoder_outputs = self.encoder(
^^^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/modeling_tf_utils.py", line 437, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/models/bart/modeling_tf_bart.py", line 830, in call
check_embeddings_within_bounds(input_ids, self.embed_tokens.input_dim)
File "/home/serusr01/lognets/lognets/lib/python3.12/site-packages/transformers/tf_utils.py", line 190, in check_embeddings_within_bounds
tf.debugging.assert_less(
tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer 'encoder' (type TFBartEncoder).
The maximum value of input_ids (50265) must be smaller than the embedding layer's input dimension (50265). The likely cause is some problem at tokenization time.
Condition x < y did not hold.
First 3 elements of x:
[ 3268 102 37314]
First 1 elements of y:
[50265]
Call arguments received by layer 'encoder' (type TFBartEncoder):
• input_ids=tf.Tensor(shape=(128, 16), dtype=int32)
• inputs_embeds=None
• attention_mask=tf.Tensor(shape=(128, 16), dtype=int32)
• head_mask=None
• output_attentions=False
• output_hidden_states=False
• return_dict=True
• training=False
```
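One quick sanity check (an assumption about the likely cause, not a confirmed diagnosis), reusing `tokenizer` and `transformer` from the snippet above: compare the tokenizer's effective vocabulary size with the model's embedding size, since added tokens can push ids past the embedding table.
```python
print(len(tokenizer), transformer.config.vocab_size)
# If len(tokenizer) > transformer.config.vocab_size, the embedding table is too small, e.g.:
# transformer.resize_token_embeddings(len(tokenizer))
```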
### Expected behavior
The code is expected to return a list of lists of 3 tf tensors. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35075 |
TITLE
When extending embeddings, multivariate distribution isn't correctly estimated even when the calculated sigma matrix is symmetric and positive definite
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.37.1
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
When resizing token embeddings for models like MobileBert, iBert etc, `resize_token_embeddings` calls an underlying `transformers.modeling_utils._init_added_embeddings_with_mean`. It should initialize new embedding weights using the old ones:
1. calculate the mean vector of old embedding vectors
2. calculate a sigma matrix using this vector - `vector * vector.T / vector_dim`
3. check if its positive-definite, i.e. can be used as a covariance matrix for a new distribution
- if so, sample from estimated distribution
- else just initialize the new embeddings from the mean vector of previous ones
I noticed the check in step `3` ALWAYS fails, i.e. no matrix is considered as positive definite.
The problem seems to be in [these lines](https://github.com/huggingface/transformers/blob/329f5dbf97a5cb2473914c88c05aa3dcb242e19a/src/transformers/modeling_utils.py#L2436C1-L2438C10)
```
eigenvalues = torch.linalg.eigvals(covariance)
is_covariance_psd = bool(
(covariance == covariance.T).all() and not torch.is_complex(eigenvalues) and (eigenvalues > 0).all()
)
```
since the eigenvalues calculated with `torch.linalg.eigvals` are returned as a complex dtype and `torch.is_complex` returns `True` for them. Hence the main logic, i.e. constructing a multivariate distribution from the previous embeddings and sampling from it, might never run (at least in my experiments).
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here's an isolated example testing the lines I mentioned above:
```
import torch
covariance = torch.Tensor([[5., 4.], [4., 5.]])  # symmetric positive definite, eigenvalues 9 and 1
eigenvalues = torch.linalg.eigvals(covariance)
is_covariance_psd = bool((covariance == covariance.T).all() and not torch.is_complex(eigenvalues) and (eigenvalues > 0).all())
print(is_covariance_psd)
```
This outputs `False` even though the matrix is symmetric with two positive real eigenvalues, 9 and 1, because `torch.linalg.eigvals` always returns a complex dtype.
### Expected behavior
The function should successfully generate a multivariate normal distribution whenever the calculated sigma is positive definite and symmetric.
I think the check might be replaced with something like:
```
from torch.distributions import constraints
is_psd = constraints.positive_definite.check(covariance).item()
```
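An alternative sketch (not necessarily the fix you would pick) is `torch.linalg.eigvalsh`, which assumes a symmetric input and returns real eigenvalues, avoiding the complex-dtype pitfall entirely:
```python
import torch

def is_symmetric_positive_definite(covariance: torch.Tensor) -> bool:
    # eigvalsh treats the input as symmetric/Hermitian and returns real eigenvalues.
    if not torch.allclose(covariance, covariance.T):
        return False
    return bool((torch.linalg.eigvalsh(covariance) > 0).all())
```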
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36048 |
TITLE
Saving nested configs crashes in Pixtral
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Issue from https://huggingface.co/mistral-community/pixtral-12b/discussions/24. When saving the config, `head_dim` is skipped because it equals the default value, but when the config is loaded back, `head_dim` is inferred as a different value.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoConfig
config = AutoConfig.from_pretrained("mistral-community/pixtral-12b")
config.save_pretrained("tmp")
config_second = config.from_pretrained("tmp")
# config != config_second, head-dim is missing and inferred incorrectly
```
### Expected behavior
Will work on it, issue here so I don't forget :)
EDIT: might also be fixed with the help of community (https://github.com/huggingface/transformers/pull/36077) 🤗 | [
64,
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"VLM"
] |
https://api.github.com/repos/huggingface/transformers/issues/34888 |
TITLE
xpu: test_eager_matches_sdpa_inference tests fail with pytorch XPU backend
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
With:
* https://github.com/huggingface/transformers/commit/54be2d7ae87e873482b984cc956e165ca4dc0ba3
* https://github.com/huggingface/accelerate/commit/e11d3ceff3a49378796cdff5b466586d877d5c60
* https://github.com/pytorch/pytorch/commit/e429a3b72e787ddcc26ee2ba177643c9177bab24
```
$ cat spec.py
import torch
DEVICE_NAME = 'xpu'
MANUAL_SEED_FN = torch.xpu.manual_seed
EMPTY_CACHE_FN = torch.xpu.empty_cache
DEVICE_COUNT_FN = torch.xpu.device_count
$ TRANSFORMERS_TEST_DEVICE_SPEC=spec.py python3 -m pytest --pspec tests/models -k test_eager_matches_sdpa_inference
<...>
FAILED tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py::
Here we also overwrite some of the tests of test_modeling_common.py, as AST does not use input_ids, inputs_embeds,
attention_mask and seq_length.
::test_eager_matches_sdpa_inference_0_float16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 4.739e-05,...
FAILED tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py::
Here we also overwrite some of the tests of test_modeling_common.py, as AST does not use input_ids, inputs_embeds,
attention_mask and seq_length.
::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 5.913e-04,...
FAILED tests/models/bart/test_modeling_bart.py::BartModelTest::test_eager_matches_sdpa_inference_0_float16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 7.510e-06,...
FAILED tests/models/bart/test_modeling_bart.py::BartModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 7.772e-05,...
FAILED tests/models/bart/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_eager_matches_sdpa_inference_0_float16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 2.402e-05,...
FAILED tests/models/bart/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 3.490e-04,...
FAILED tests/models/bert/test_modeling_bert.py::BertModelTest::test_eager_matches_sdpa_inference_0_float16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 5.555e-05,...
FAILED tests/models/bert/test_modeling_bert.py::BertModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, enable_kernels=False: mean relative difference: 3.567e-04,...
<...>
======================= 159 failed, 89 passed, 793 skipped, 75366 deselected, 319 warnings in 74.89s (0:01:14) =======================
```
CC: @amyeroberts @ydshieh | [
2,
64
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Tests",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35874 |
TITLE
ZeroShotClassificationArgumentHandler should be explicit it has a somewhat unsafe internal behaviour.
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Currently, `ZeroShotClassificationArgumentHandler.__call__` executes https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/zero_shot_classification.py#L41, that is, it calls Python's `.format()` on the provided hypothesis template to insert the label, while allowing the full extent of the `.format()` placeholder syntax, which is quite large.
For example, passing `hypothesis_template = "{:>9999999999}"` and any label will happily eat 100 GB of RAM, because the whole scope of Python formatting is allowed.
This is not made clear anywhere, but library users need to know they have to sanitize these inputs very carefully.
I think that at least the docstring of the class, and ideally the reference documentation for "hypothesis_template" at https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_classification, should be updated to mention this; it's quite important for users of the library (in particular for parameters that naturally tend to be user-facing in the end).
Alternatively, this call could accept `{}` only as a placeholder; it's hard to see a legitimate use case for exotic formatting of labels in the hypothesis template.
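For illustration, a stricter check could look like this sketch (a hypothetical helper, not the pipeline's current behaviour):
```python
import string

def validate_hypothesis_template(template: str) -> None:
    # Accept only bare "{}" placeholders; reject format specs such as "{:>9999999999}".
    for _literal, field_name, format_spec, conversion in string.Formatter().parse(template):
        if field_name is None:
            continue  # plain text segment, nothing to check
        if field_name != "" or format_spec or conversion:
            raise ValueError("hypothesis_template may only contain plain '{}' placeholders")
```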
Thanks :-)
### Motivation
I think it's good to help the internet be a safer place in general :-)
### Your contribution
It's unclear to me whether I can contribute to the documentation on huggingface.co.
I could contribute a fix to be stricter on allowed hypothesis_template in transformers though if you want to take this route (I'm pretty sure even an AI model could contribute the two lines needed though...) | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33283 |
TITLE
transformers 4.44.2 doesn't work with torch.compile and torch.export on T5 generate()
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.34
- Python version: 3.10.14
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0a0+git33ba952 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA PG509-210
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following code breaks:
```python
import torch
import transformers
from transformers import GenerationConfig
from transformers import AutoConfig
def generate_inputs_for_model(model_cls, model):
eval_context = torch.randint(0, model.config.vocab_size, (4, 2048)).to("cuda")
return {"input_ids": eval_context}
config = AutoConfig.from_pretrained("t5-small")
model_cls = getattr(transformers, "AutoModelForSeq2SeqLM")
model = model_cls.from_config(config).to("cuda")
example_inputs = generate_inputs_for_model(model_cls, model)
example_inputs = (example_inputs["input_ids"],)
generation_config = GenerationConfig(
max_new_tokens=256,
pad_token_id=0,
eos_token_id=None,
do_sample=False,
num_beams=1,
use_cache=True,
)
class GenerationWrapper(torch.nn.Module):
def __init__(self, model, generation_config):
super().__init__()
self.model = model
self.generation_config = generation_config
def forward(self, inputs):
return self.model.generate(inputs, self.generation_config)
model = GenerationWrapper(model, generation_config)
# torch.compile repro
model_opt = torch.compile(model)
output = model_opt(*example_inputs)
# torch.export repro
torch.export.export(model, args=example_inputs, strict=False)
```
With the following error:
```
ValueError: `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation.
```
If I manually add `decoder_start_token_id=0` to the GenerationConfig, then both compile and export work, although very slowly.
### Expected behavior
Expected generate to work like before without manually specifying `decoder_start_token_id` or `bos_token_id` in the `GenerationConfig`. | [
1,
64,
18,
59
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP",
"bug",
"Generation",
"Compilation"
] |
https://api.github.com/repos/huggingface/transformers/issues/35279 |
TITLE
KeyError: 'intern_vit_6b'
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Ubuntu 24.04.1
transformers 4.47.0
### Who can help?
I want to use the latest OpenGVLab/InternViT-300M-448px-V2_5 as the vision encoder of LLaVA, but an error occurs when running the following code. I think the reason is that Transformers does not support this vision encoder. I tried to modify the code, but it didn't work.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor, AutoTokenizer, AutoProcessor, AutoModelForCausalLM
from transformers import LlavaForConditionalGeneration, LlavaConfig

clip_model_name_or_path = "/home/wangyu/model/models--OpenGVLab--InternViT-300M-448px-V2_5/snapshots/8f86a5e87697180b439811ca69fabbfccd38d996"
qwen_model_name_or_path = "/home/wangyu/model/Qwen2.5-0.5B-Instruct"
clip_model = AutoModel.from_pretrained(clip_model_name_or_path, device_map="cuda:0", trust_remote_code=True)
llm_model = AutoModelForCausalLM.from_pretrained(qwen_model_name_or_path, device_map="cuda:0")
llm_tokenizer = AutoTokenizer.from_pretrained(qwen_model_name_or_path)
vision_config = clip_model.config
text_config = llm_model.config
configuration = LlavaConfig(vision_config, text_config)
model = LlavaForConditionalGeneration(configuration)
model.save_pretrained("slvm/model001")

from transformers import LlavaProcessor, LlavaForConditionalGeneration
import torch
import os
from typing import Union
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

model_name_or_path = "slvm/model001"  # confirm this path is correct
llava_processor = LlavaProcessor.from_pretrained(model_name_or_path)
model = LlavaForConditionalGeneration.from_pretrained(
    model_name_or_path,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
)

from PIL import Image

prompt_text = "<image>\nWhat are these?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt_text},
]
prompt = llava_processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

image_path = "000000039769.jpg"
image = Image.open(image_path)

inputs = llava_processor(text=prompt, images=image, return_tensors="pt")
for tk in inputs.keys():
    if inputs[tk].dtype == torch.float32:
        inputs[tk] = inputs[tk].to(dtype=torch.bfloat16)
    inputs[tk] = inputs[tk].to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=20)
gen_text = llava_processor.batch_decode(
    generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False
)[0]
print(gen_text)
```
error:
{
"name": "KeyError",
"message": "'intern_vit_6b'",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 9
6 from transformers.utils import logging
7 model_name_or_path = \"slvm/model001\" # 需要确认路径是否正确
----> 9 llava_processor = LlavaProcessor.from_pretrained(model_name_or_path)
10 model = LlavaForConditionalGeneration.from_pretrained(
11 model_name_or_path,
12 device_map=\"cuda:0\",
13 torch_dtype=torch.bfloat16,
14 )
16 from PIL import Image
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/processing_utils.py:974, in ProcessorMixin.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, **kwargs)
971 if token is not None:
972 kwargs[\"token\"] = token
--> 974 args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
975 processor_dict, kwargs = cls.get_processor_dict(pretrained_model_name_or_path, **kwargs)
977 return cls.from_args_and_dict(args, processor_dict, **kwargs)
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/processing_utils.py:1020, in ProcessorMixin._get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1017 else:
1018 attribute_class = getattr(transformers_module, class_name)
-> 1020 args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
1021 return args
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:878, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
876 config = AutoConfig.for_model(**config_dict)
877 else:
--> 878 config = AutoConfig.from_pretrained(
879 pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
880 )
881 config_tokenizer_class = config.tokenizer_class
882 if hasattr(config, \"auto_map\") and \"AutoTokenizer\" in config.auto_map:
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1045, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1039 except KeyError:
1040 raise ValueError(
1041 f\"The checkpoint you are trying to load has model type `{config_dict['model_type']}` \"
1042 \"but Transformers does not recognize this architecture. This could be because of an \"
1043 \"issue with the checkpoint, or because your version of Transformers is out of date.\"
1044 )
-> 1045 return config_class.from_dict(config_dict, **unused_kwargs)
1046 else:
1047 # Fallback: use pattern matching on the string.
1048 # We go from longer names to shorter names to catch roberta before bert (for instance)
1049 for pattern in sorted(CONFIG_MAPPING.keys(), key=len, reverse=True):
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/configuration_utils.py:734, in PretrainedConfig.from_dict(cls, config_dict, **kwargs)
731 # We remove it from kwargs so that it does not appear in `return_unused_kwargs`.
732 config_dict[\"attn_implementation\"] = kwargs.pop(\"attn_implementation\", None)
--> 734 config = cls(**config_dict)
736 if hasattr(config, \"pruned_heads\"):
737 config.pruned_heads = {int(key): value for key, value in config.pruned_heads.items()}
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:108, in LlavaConfig.__init__(self, vision_config, text_config, ignore_index, image_token_index, projector_hidden_act, vision_feature_select_strategy, vision_feature_layer, image_seq_length, **kwargs)
104 if isinstance(vision_config, dict):
105 vision_config[\"model_type\"] = (
106 vision_config[\"model_type\"] if \"model_type\" in vision_config else \"clip_vision_model\"
107 )
--> 108 vision_config = CONFIG_MAPPING[vision_config[\"model_type\"]](**vision_config)
109 elif vision_config is None:
110 vision_config = CONFIG_MAPPING[\"clip_vision_model\"](
111 intermediate_size=4096,
112 hidden_size=1024,
(...)
118 projection_dim=768,
119 )
File ~/miniconda3/envs/llm/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:740, in _LazyConfigMapping.__getitem__(self, key)
738 return self._extra_content[key]
739 if key not in self._mapping:
--> 740 raise KeyError(key)
741 value = self._mapping[key]
742 module_name = model_type_to_module_name(key)
KeyError: 'intern_vit_6b'"
}
### Expected behavior
I think I need to modify the source code to add InternViT, just like CLIP. I hope the maintainers can tell me where to modify it.
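For reference, here is a minimal sketch of what registering a custom vision config type could look like, so that `CONFIG_MAPPING["intern_vit_6b"]` resolves. The `InternVit6BConfig` class and its fields below are hypothetical placeholders, not an existing transformers class (a matching model class would also be needed, registered via `AutoModel.register`):
```python
from transformers import AutoConfig, PretrainedConfig


class InternVit6BConfig(PretrainedConfig):
    # Hypothetical placeholder config -- the field names and defaults are illustrative only.
    model_type = "intern_vit_6b"

    def __init__(self, hidden_size=3200, num_hidden_layers=48, **kwargs):
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        super().__init__(**kwargs)


# After registration, CONFIG_MAPPING["intern_vit_6b"] resolves, so LlavaConfig can
# build the vision config instead of raising KeyError.
AutoConfig.register("intern_vit_6b", InternVit6BConfig)
```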
I love transformers! | [
1,
64,
12
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP",
"bug",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/33670 |
TITLE
Add a language model probability pipeline
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
A common use case of language models is to estimate the probability (or log-probability, or perplexity) of a sequence of tokens. It could be convenient to add a pipeline that returns the log-probability / score of a sequence.
### Motivation
It's possible to calculate model predictions for the probability of a sequence, e.g.: https://huggingface.co/docs/transformers/en/perplexity
But pipelines automatically handle tokenization, batching, etc. so it would be more convenient to have a pipeline.
Alternatively, it might be possible to achieve this behavior by creating a "text-generation" pipeline and calling the pipeline with `max_new_tokens=0` as well as passing certain `generate_kwargs` to get the sequence LM scores, but how to do this is not immediately obvious.
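For context, here is a rough, untested sketch of the scoring logic such a pipeline's `_forward` could wrap (the model name is just an example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def sequence_log_prob(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc.input_ids
    with torch.no_grad():
        logits = model(**enc).logits  # (1, seq_len, vocab_size)
    # Log-probability of each token given the preceding tokens.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()


print(sequence_log_prob("This is a test"))
```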
### Your contribution
This should be quite easy to achieve by creating a new pipeline with `_forward` essentially containing the logic for calculating sequence probability at https://huggingface.co/docs/transformers/en/perplexity | [
51,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34527 |
TITLE
[Feature] Will there be any integration of using Flex-attention (and Paged attention)?
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Using [FlexAttention](https://pytorch.org/blog/flexattention/) (and [Paged Attention](https://github.com/pytorch/pytorch/pull/121845/files)) to speed up Transformers models and provide flexibility.
### Motivation
FlexAttention was proposed as a performant attention implementation leveraging torch.compile with easy APIs for adding support for complex attention variants such as Causal, [Relative Positional Embeddings](https://paperswithcode.com/method/relative-position-encodings), [Alibi](https://paperswithcode.com/method/alibi), [Sliding Window Attention](https://mistral.ai/news/announcing-mistral-7b/), [PrefixLM](https://twitter.com/andersonbcdefg/status/1800907703688339569), https://github.com/pytorch/torchtune/pull/875, [Tanh Soft-Capping](https://twitter.com/LysandreJik/status/1807779471891538199), [PagedAttention](https://arxiv.org/abs/2309.06180), etc.
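For context, a rough sketch of what the FlexAttention API looks like (assuming PyTorch ≥ 2.5 and a CUDA device; untested here):
```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 8, 1024, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16) for _ in range(3))


def causal(b, h, q_idx, kv_idx):
    # mask_mod: keep only key positions at or before the query position
    return q_idx >= kv_idx


block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda")


def relative_positional(score, b, h, q_idx, kv_idx):
    # score_mod: toy relative-position bias added to the attention score
    return score + (q_idx - kv_idx)


out = flex_attention(q, k, v, score_mod=relative_positional, block_mask=block_mask)
# flex_attention can also be wrapped with torch.compile to get fused kernels.
```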
### Your contribution
Not sure. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35444 |
TITLE
Allow static cache to be larger than sequence length / batch size for encoder-decoder models
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
In encoder-decoder models, when using an `EncoderDecoderCache` object backed by static caches:
1. the cross-attention cache size must equal the encoder sequence length.
2. batch size for both self-attention and cross-attention caches must be the same as the generating batch size.
### Motivation
I have been working on ExecuTorch export for encoder-decoder models. As part of that, I have been digging into the implementation of the encoder-decoder cache and the static cache.
How I would expect static caches to work: once the cache is initialized, generation should work as long as the batch size, encoder sequence length, and decoder sequence length are no larger than the corresponding cache dimensions.
Currently however:
1. The cross-attention cache must be exactly the same size as the encoder sequence length.
2. The batch size that the cache is initialized with must be exactly the batch size that the cache is run with.
### Your contribution
As I was digging through this, I updated the T5 attention and the static cache implementation in an attempt to handle both these cases.
#35445
That being said, I am just starting to learn transformers (both the hf library and in general), and have no real idea what I am doing.
#### Here is the code I have been using to generate the issue:
```python
import torch
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
)
from transformers.cache_utils import (
StaticCache,
EncoderDecoderCache,
)
model_name = "google-t5/t5-small"
dtype = torch.float16
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
torch_dtype=dtype,
)
encoder_cache = StaticCache(
model.config, max_cache_len=170, max_batch_size=4, dtype=dtype
)
decoder_cache = StaticCache(
model.config, max_cache_len=200, max_batch_size=4, dtype=dtype
)
cache = EncoderDecoderCache(decoder_cache, encoder_cache)
strings_1 = [
"When the night has come and the land is dark, and the moon is the only light we will see.",
"Abba is the best",
# "No lindy is the best",
# "No Elton john is the absolute best.",
]
input_ids = tokenizer(strings_1, return_tensors="pt", padding=True)
tokens = model.generate(**input_ids, past_key_values=cache)
text_translated = [tokenizer.decode(t, skip_special_tokens=False) for t in tokens]
print(text_translated)
```
| [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33745 |
TITLE
Add support for TimesFM
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
**TimesFM** (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Research Paper: https://arxiv.org/abs/2310.10688
- Authors: [Abhimanyu Das](https://arxiv.org/search/cs?searchtype=author&query=Das,+A), [Weihao Kong](https://arxiv.org/search/cs?searchtype=author&query=Kong,+W), [Rajat Sen](https://arxiv.org/search/cs?searchtype=author&query=Sen,+R), [Yichen Zhou](https://arxiv.org/search/cs?searchtype=author&query=Zhou,+Y)
- Implementation: [google-research/timesfm](https://github.com/google-research/timesfm)
- The linked repository contains implementations in `jax` as well as `pytorch`. To implement this in `huggingface`, the `pytorch`-specific code can be found at [src/timesfm/pytorch_patched_decoder.py](https://github.com/google-research/timesfm/blob/master/src/timesfm/pytorch_patched_decoder.py)
- Models Weights: [google/timesfm-1.0-200m-pytorch](https://huggingface.co/google/timesfm-1.0-200m-pytorch)
- Although weights are provided in the repository, some config files are missing and need to be completed to ensure the weights load smoothly. | [
77,
3
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Time Series"
] |
https://api.github.com/repos/huggingface/transformers/issues/34811 |
TITLE
`SeamlessM4TForTextToSpeech.generate` not working if `generation_config` is passed
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
For `SeamlessM4TForTextToSpeech` with checkpoint `"facebook/hf-seamless-m4t-medium"`,
> model.generate(**model_inputs, tgt_lang="eng", generation_config=model.generation_config)
will fail with an index error.
If the `generation_config` argument is not passed, it runs without error.
### Reproduction
```python
from transformers import AutoProcessor, SeamlessM4TForTextToSpeech
ckpt = "facebook/hf-seamless-m4t-medium"
processor = AutoProcessor.from_pretrained(ckpt)
model = SeamlessM4TForTextToSpeech.from_pretrained(ckpt)
model_inputs = processor(["This is a test"], return_tensors="pt")
# This works
outputs = model.generate(**model_inputs, tgt_lang="eng")
print(outputs)
# Index error
outputs2 = model.generate(**model_inputs, tgt_lang="eng", generation_config=model.generation_config)
print(outputs2)
```
### Expected behavior
Should not fail if `generation_config` is passed. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35270 |
TITLE
Strange behavior with attn_implementation="eager"
COMMENTS
18
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 2
BODY
### System Info
- `transformers` version: 4.47.0
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to analyze the attention pattern of the `LLaVA v1.5 7B` model, so I used `attn_implementation="eager"` when initializing the model to obtain the attention weights. However, this has led to several issues: first, the output IDs are incorrect, and second, errors may occur. I've noticed that this problem only appears with specific images and user prompts, while it does not occur in other cases, which is quite peculiar. Below is my code:
``` python
import numpy as np
import torch
from dotenv import load_dotenv
from PIL import Image
from transformers import (
LlavaForConditionalGeneration,
LlavaProcessor,
)
from transformers.generation.utils import GenerateDecoderOnlyOutput
np.set_printoptions(threshold=np.inf)
model_name = "llava-hf/llava-1.5-7b-hf"
model: LlavaForConditionalGeneration = LlavaForConditionalGeneration.from_pretrained(
model_name,
cache_dir="/root/llm/utils/models/hub",
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="cuda:0",
attn_implementation="eager",
)
processor: LlavaProcessor = LlavaProcessor.from_pretrained(
model_name,
cache_dir="/root/llm/utils/models/hub",
padding_side="left",
patch_size=model.config.vision_config.patch_size,
vision_feature_select_strategy=model.config.vision_feature_select_strategy,
)
images = [
Image.open("/root/llm/utils/eval/Object_HalBench/images/339761.jpg"),
Image.open("/root/llm/utils/eval/Object_HalBench/images/431256.jpg"),
]
users = [
"Provide a thorough description of the given image.",
"What is this photo about? Please answer in great detail.",
]
prompts: list[str] = []
for u in users:
conversation: list[dict[str]] = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": u},
],
},
]
prompt: str = processor.apply_chat_template(
conversation,
tokenize=False,
add_generation_prompt=True,
)
prompts.append(prompt)
with torch.inference_mode():
encoded_inputs: dict[str, torch.Tensor] = processor(
images=images,
text=prompts,
return_tensors="pt",
return_token_type_ids=False,
padding=True,
).to("cuda:0", torch.float16)
output: GenerateDecoderOnlyOutput = model.generate(
**encoded_inputs,
max_new_tokens=50,
num_beams=1,
do_sample=False,
temperature=0.7,
output_attentions=True,
use_cache=True,
return_legacy_cache=True,
return_dict_in_generate=True,
)
generated_ids: list[torch.LongTensor] = output.sequences # list of shape (batch_size, sequence_length)
print(generated_ids.cpu().numpy())
generated_ids = [o[len(i) :] for i, o in zip(encoded_inputs.input_ids, generated_ids)]
print()
decoded_outputs: list[str] = processor.batch_decode(
generated_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
print(decoded_outputs)
decoded_outputs = [d.rstrip("\n").strip(" ") for d in decoded_outputs]
print(decoded_outputs)
print(len(output.attentions))
```
Notice: the image I used is from the Object_HalBench benchmark.
The output is:
Some other warning:
```
/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:628: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.7` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
Expanding inputs for image tokens in LLaVa should be done in processing. Please add `patch_size` and `vision_feature_select_strategy` to the model's processing config or set directly with `processor.patch_size = {{patch_size}}` and processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. Using processors without these attributes in the config is deprecated and will throw an error in v4.50.
From v4.47 onwards, when a model cache is to be returned, `generate` will return a `Cache` instance instead by default (as opposed to the legacy tuple of tuples format). If you want to keep returning the legacy format, please set `return_legacy_cache=True`.
```
The generated_ids (I removed a large number of `<image>` tokens for readability):
```
[[32001 1 3148 1001 29901 29871 32000 32000 32000 32000 32000 32000
32000 32000 32000 32000 32000 32000 29871 13 1184 29894 680 263
17826 6139 310 278 2183 1967 29889 319 1799 9047 13566 29901
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0]
[ 1 3148 1001 29901 29871 32000 32000 32000 32000 32000 32000 32000
32000 32000 32000 32000 32000 29871 13 5618 338 445 15373 1048
29973 3529 1234 297 2107 9493 29889 319 1799 9047 13566 29901
450 1967 4332 1973 263 15007 3377 261 297 3158 29892 15859
263 8938 373 263 15007 29899 11911 287 364 1160 29889 450
15007 3377 261 338 297 278 7256 310 278 9088 29892 411
1009 15007 3377 7962 19540 963 29889 29871 13 13 8439 526
3196 916]]
```
Notice that `<image>`: 32000, `<pad>`: 32001
The output after `batch_decode` is:
```
['', 'The image captures a snowboarder in action, performing a trick on a snow-covered ramp. The snowboarder is in the middle of the scene, with their snowboard visible beneath them. \n\nThere are several other']
```
It's strange that token id 0 is generated.
Setting only `output_attentions=False` and `return_dict_in_generate=False`, without removing `attn_implementation="eager"`, doesn't change anything.
Notice that removing `attn_implementation="eager"` and not returning a dict overcomes the problem; the output then becomes correct:
```
[[32001 1 3148 1001 29901 29871 32000 32000 32000 32000 32000 32000
32000 32000 32000 32000 32000 32000 29871 13 1184 29894 680 263
17826 6139 310 278 2183 1967 29889 319 1799 9047 13566 29901
450 1967 5680 263 27683 8345 411 263 2919 7933 8024 15678
701 278 10090 29892 4969 263 301 1878 322 325 4626 424
25005 29889 450 8024 338 24046 2978 278 1510 261 4038 29892
4417 263 6023 310 5469 304 278 2913 29889 29871 13 13
797 278]
[ 1 3148 1001 29901 29871 32000 32000 32000 32000 32000 32000 32000
32000 32000 32000 32000 32000 29871 13 5618 338 445 15373 1048
29973 3529 1234 297 2107 9493 29889 319 1799 9047 13566 29901
450 1967 4332 1973 263 15007 3377 261 297 3158 29892 15859
263 8938 373 263 15007 29899 11911 287 364 1160 29889 450
15007 3377 261 338 297 278 7256 310 278 9088 29892 411
1009 15007 3377 7962 19540 963 29889 29871 13 13 8439 526
3196 916]]
['The image features a bathroom with a large green plant growing up the wall, creating a lush and vibrant atmosphere. The plant is situated near the shower area, adding a touch of nature to the space. \n\nIn the',
'The image captures a snowboarder in action, performing a trick on a snow-covered ramp. The snowboarder is in the middle of the scene, with their snowboard visible beneath them. \n\nThere are several other']
```
Besides this, an error may occur with `attn_implementation="eager"` in some other cases (different image inputs):
```
Loading checkpoint shards: 100%|████| 3/3 [00:03<00:00, 1.12s/it]
../aten/src/ATen/native/cuda/TensorCompare.cu:110: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `probability tensor contains either `inf`, `nan` or element < 0` failed.
Traceback (most recent call last):
File "/root/llm/LVLM/test2.py", line 38, in <module>
print(generator.gen(images, users,do_sample=True,
File "/root/llm/LVLM/model/generator/llava.py", line 184, in gen
out = gen_hf(
File "/root/llm/LVLM/model/generator/utils.py", line 279, in gen_hf
output, encoded_inputs = _gen_hf(
File "/root/llm/LVLM/model/generator/utils.py", line 229, in _gen_hf
output: GenerateDecoderOnlyOutput = model.generate(
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/generation/utils.py", line 2252, in generate
result = self._sample(
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/generation/utils.py", line 3297, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Expected behavior
fix it | [
64,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/33441 |
TITLE
Support Pixtral
COMMENTS
3
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Hi,
It would be great to have support for Pixtral!
https://huggingface.co/mistral-community/pixtral-12b-240910
thanks!
### Motivation
Pixtral is a new model released by Mistral AI
### Your contribution
N/A | [
77,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
1,
0,
0,
0,
0
] | [
"New model",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33469 |
TITLE
OperatorNotAllowedInGraphError when using TFDebertaV2Model after recent Keras update
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
System Info
transformers version: 4.44.2
Platform: Google Colab | Apple M1 Max 14.6.1
Python version: 3.10.12
Tensorflow version (GPU?): 2.15.0 (True)
Keras version: 2.15.0
tf_keras: 2.15.1
Huggingface_hub version: 0.23.4
Safetensors version: 0.4.4
Accelerate version: not installed
Accelerate config: not found
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: Yes
Who can help?
@ArthurZucker, @Rocketknight1
I’m encountering an OperatorNotAllowedInGraphError when using TFDebertaV2Model from the Transformers library in graph mode. The code works fine in eager mode. This issue might be related to a recent update in the Keras version, as the same code previously worked on Google Colab.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
tf.config.run_functions_eagerly(True)
encoder = TFAutoModel.from_pretrained('microsoft/deberta-v3-base', cache_dir='./cache')
src_tokens = tf_keras.layers.Input((None,), dtype="int64", name="src_tokens")#, batch_size=1)
src_tokens_mask = tf_keras.layers.Input((None,), dtype="int64", name="src_tokens_mask")#, batch_size=1)
encoder_output = encoder(input_ids=src_tokens, attention_mask=src_tokens_mask)
```
```error
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 1199, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 1201, in call *
outputs = self.deberta(
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler **
raise e.with_traceback(filtered_tb) from None
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 437, in run_call_with_unpacked_inputs
return func(self, **unpacked_inputs)
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 1053, in call
encoder_outputs = self.encoder(
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 423, in call
layer_outputs = layer_module(
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 258, in call
attention_outputs = self.attention(
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 179, in call
self_outputs = self.self(
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 682, in call
rel_att = self.disentangled_att_bias(query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
File "/Users/bogdan.didenko/wsc/gec-tf2/venv_310/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_tf_deberta_v2.py", line 776, in disentangled_att_bias
if shape_list(key_layer)[-2] != shape_list(query_layer)[-2]:
OperatorNotAllowedInGraphError: Exception encountered when calling TFDebertaV2DisentangledSelfAttention.call().
Using a symbolic `tf.Tensor` as a Python `bool` is not allowed. You can attempt the following resolutions to the problem: If you are running in Graph mode, use Eager execution mode or decorate this function with @tf.function. If you are using AutoGraph, you can try decorating this function with @tf.function. If that does not work, then you may be using an unsupported feature or your source code may not be visible to AutoGraph. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
```
### Expected behavior
the code should work | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35237 |
TITLE
[i18n-Chinese] Translating perf_infer_gpu_multi.md to Chinese
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
I noticed that no one has translated docs/source/en/perf_infer_gpu_multi.md before. Could I translate this to Chinese?
| [
1
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/35509 |
TITLE
When gradient checkpointing is enabled, flash_attn_kwargs cannot be passed into the decoder_layer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.47.1
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/241c04d36867259cdf11dbb4e9d9a60f9cb65ebc/src/transformers/models/llama/modeling_llama.py#L896C1-L931C54
```python
for decoder_layer in self.layers[: self.config.num_hidden_layers]:
if output_hidden_states:
all_hidden_states += (hidden_states,)
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
decoder_layer.__call__,
hidden_states,
causal_mask,
position_ids,
past_key_values,
output_attentions,
use_cache,
cache_position,
position_embeddings,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=output_attentions,
use_cache=use_cache,
cache_position=cache_position,
position_embeddings=position_embeddings,
**flash_attn_kwargs,
)
```
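For illustration only (a toy layer, not the actual transformers code), one way to let such kwargs survive a checkpointing wrapper is to bind them with `functools.partial` before handing the callable to the checkpoint function:
```python
import torch
from functools import partial
from torch.utils.checkpoint import checkpoint


def toy_layer(hidden_states, softmax_scale=1.0):
    # stand-in for a decoder layer that consumes an extra flash-attention kwarg
    return hidden_states * softmax_scale


hidden_states = torch.randn(2, 4, 8, requires_grad=True)
flash_attn_kwargs = {"softmax_scale": 0.5}

# Binding the keyword arguments first guarantees they reach the layer regardless of
# how the checkpointing wrapper forwards its (positional) arguments.
out = checkpoint(partial(toy_layer, **flash_attn_kwargs), hidden_states, use_reentrant=False)
out.sum().backward()
```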
### Expected behavior
x | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34017 |
TITLE
RuntimeError: Internal: could not parse ModelProto from /Data_disk/meta_llama/meta_llama3.2/Llama3.2-1B-Instruct/tokenizer.model
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Traceback (most recent call last):
File "/Data_disk/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 479, in <module>
main()
File "/Data_disk/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 457, in main
write_tokenizer(
File "/Data_disk/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 367, in write_tokenizer
tokenizer = tokenizer_class(input_tokenizer_path)
File "/home/transformers/src/transformers/models/llama/tokenization_llama_fast.py", line 157, in __init__
super().__init__(
File "/home/transformers/src/transformers/tokenization_utils_fast.py", line 132, in __init__
slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
File "/home/transformers/src/transformers/models/llama/tokenization_llama.py", line 171, in __init__
self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
File "/home/transformers/src/transformers/models/llama/tokenization_llama.py", line 198, in get_spm_processor
tokenizer.Load(self.vocab_file)
File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 961, in Load
return self.LoadFromFile(model_file)
File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from /Data_disk/meta_llama/meta_llama3.2/Llama3.2-1B-Instruct/tokenizer.model
### Who can help?
@ArthurZucker @itazap
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
python3 /Data_disk/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /Data_disk/meta_llama/meta_llama3.2/Llama3.2-1B-Instruct \
    --model_size 1B \
    --output_dir /Data_disk/meta_llama/meta_llama3.2/out
```
### Expected behavior
get safetensors | [
47,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35545 |
TITLE
ModernBERT export to onnx error
COMMENTS
4
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I trained a classification model based on ModernBERT and tried to export it to ONNX with the following script.
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
def export():
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base", model_max_length=4096)
model = AutoModelForSequenceClassification.from_pretrained(
"./checkpoints",
num_labels=3,
# reference_compile=False,
)
model.eval()
samples = ['examples']
tokenized = tokenizer(samples,
return_tensors='pt',
max_length=tokenizer.model_max_length,
padding='max_length',
truncation=True)
input_ids = tokenized['input_ids'].to('cuda')
attention_mask = tokenized['attention_mask'].to('cuda')
model = model.to('cuda')
with torch.no_grad():
torch.onnx.export(
model,
(input_ids, attention_mask),
'./model.onnx',
input_names=["input_ids", "attention_mask"],
output_names=["logits"],
)
if __name__ == '__main__':
export()
```
I got errors. This may be related to https://github.com/pytorch/pytorch/issues/104748.
```
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in ModernBertForSequenceClassification is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py:711: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
max_seqlen_in_batch = int(seqlens_in_batch.max().item())
Traceback (most recent call last):
File "/modernBERT/export_onnx.py", line 39, in <module>
export()
File "/modernBERT/export_onnx.py", line 28, in export
torch.onnx.export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph
outs = ONNXTracedModule(
^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 139, in forward
graph, out = torch._C._create_graph_by_tracing(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 130, in wrapper
outs.append(self.inner(*trace_inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1160, in forward
outputs = self.model(
^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 895, in forward
hidden_states = self.embeddings(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 210, in forward
self.compiled_embeddings(input_ids)
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 444, in _fn
raise RuntimeError(
RuntimeError: Detected that you are using FX to torch.jit.trace a dynamo-optimized function. This is not supported at the moment.
```
https://huggingface.co/answerdotai/ModernBERT-base/discussions/10
When I read this post I modified part of the code as follows.
```
model = AutoModelForSequenceClassification.from_pretrained(
"./checkpoints",
num_labels=3,
reference_compile=False,
)
```
I got another error.
```
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in ModernBertForSequenceClassification is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py:711: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
max_seqlen_in_batch = int(seqlens_in_batch.max().item())
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:166: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert sin.shape == cos.shape
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:168: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert rotary_dim <= headdim, "rotary_dim must be <= headdim"
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:169: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert headdim <= 256, "Only support headdim <= 256"
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:170: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert seqlen_ro >= seqlen, "seqlen_ro must be >= seqlen"
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:185: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert seqlen_offsets + seqlen <= seqlen_ro
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:188: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if rotary_dim < headdim and not inplace:
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:193: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if rotary_dim <= 32
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:194: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
else (64 if rotary_dim <= 64 else (128 if rotary_dim <= 128 else 256))
/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:197: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
BLOCK_M = 4 if interleaved else (8 if rotary_dim <= 128 else 4)
Traceback (most recent call last):
File "/modernBERT/export_onnx.py", line 39, in <module>
export()
File "/modernBERT/export_onnx.py", line 28, in export
torch.onnx.export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph
outs = ONNXTracedModule(
^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 139, in forward
graph, out = torch._C._create_graph_by_tracing(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 130, in wrapper
outs.append(self.inner(*trace_inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1160, in forward
outputs = self.model(
^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 913, in forward
layer_outputs = encoder_layer(
^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 529, in forward
attn_outputs = self.attn(
^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 487, in forward
attn_outputs = MODERNBERT_ATTENTION_FUNCTION[self.config._attn_implementation](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 349, in flash_attention_forward
qkv = rotary_emb(qkv, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 178, in forward
qkv = apply_rotary_unpadded(
^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 136, in apply_rotary_unpadded
return ApplyRotaryEmbUnpad.apply(qkv, cos, sin, cu_seqlens, max_seqlen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 75, in forward
apply_rotary(
File "/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary
rotary_kernel[grid](
File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/runtime/jit.py", line 345, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/runtime/jit.py", line 662, in run
kernel = self.compile(
^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile
module = src.make_ir(options, codegen_fns, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
triton.compiler.errors.CompilationError: at 32:22:
# Meta-parameters
BLOCK_K: tl.constexpr,
IS_SEQLEN_OFFSETS_TENSOR: tl.constexpr,
IS_VARLEN: tl.constexpr,
INTERLEAVED: tl.constexpr,
CONJUGATE: tl.constexpr,
BLOCK_M: tl.constexpr,
):
pid_m = tl.program_id(axis=0)
pid_batch = tl.program_id(axis=1)
pid_head = tl.program_id(axis=2)
rotary_dim_half = rotary_dim // 2
^
IncompatibleTypeErrorImpl('invalid operands of type pointer<int64> and triton.language.int32')
```
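For what it's worth, an untested workaround sketch (adapted from my own loading code above) is to force an export-friendly attention backend so tracing does not hit the FA2/triton rotary kernels — assuming ModernBERT's `sdpa` path is available:
```python
from transformers import AutoModelForSequenceClassification

# Untested sketch: disable the compiled embeddings path and avoid flash-attention/triton
# kernels during torch.onnx.export tracing.
model = AutoModelForSequenceClassification.from_pretrained(
    "./checkpoints",
    num_labels=3,
    reference_compile=False,
    attn_implementation="sdpa",
)
```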
### Expected behavior
export to model.onnx | [
64,
46
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"ONNX"
] |
https://api.github.com/repos/huggingface/transformers/issues/33765 |
TITLE
[i18n-<languageCode>] Translating docs to <languageName>
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
--> | [
1
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/34577 |
TITLE
Mismatched keyword argument names of llama make GA fix invalid
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- Transformers 4.47.0.dev0 (latest commit 33868a057c02f0368ba63bd1edb746be38fe3d90)
### Who can help?
@ArthurZucker @muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/pull/33932 may break the logic for the trainer's `model_accepts_loss_kwargs`. The llama model would not receive a `num_items_in_batch` argument, making the fix of https://github.com/huggingface/transformers/pull/34283 invalid again
https://github.com/huggingface/transformers/blob/33868a057c02f0368ba63bd1edb746be38fe3d90/src/transformers/trainer.py#L605
Moreover, the names of the keyword arguments differ between llama and other models; we would expect the same keyword arguments across models.
https://github.com/huggingface/transformers/blob/33868a057c02f0368ba63bd1edb746be38fe3d90/src/transformers/models/llama/modeling_llama.py#L1146-L1161
https://github.com/huggingface/transformers/blob/33868a057c02f0368ba63bd1edb746be38fe3d90/src/transformers/models/gemma/modeling_gemma.py#L1015-L1030
### Expected behavior
The models' forward functions should have a consistent keyword argument list. | [
66,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33302 |
TITLE
Out-of-Index Error when training by `Qwen2VLFlashAttention2`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090 D
### Who can help?
@ArthurZucker @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, I'm finetuning the newly-released `Qwen2VLForConditionalGeneration` model by LoRA. I'm building the model by
```python
Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", attn_implementation="flash_attention_2", torch_dtype=torch.float16
)
```
I found `attn_implementation="flash_attention_2"` activates `Qwen2VLFlashAttention2`, which throws an out-of-index error during training. When I switch to `attn_implementation="sdpa"`, the error does not come up and training goes smoothly.
After some time of debugging, I located the problem in [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L638), where `rotary_seq_len` does not properly reflect the length of the input sequence but rather the real length minus 1. I modified this line to `rotary_seq_len = cache_position[-1] + 1` in my local copy of transformers, and training with `flash_attention_2` then goes smoothly.
My input batch to the model is as follows:
```
batch
input_ids: Tensor (B, seq_len)
attention_mask: Tensor (B, seq_len)
labels: Tensor (B, seq_len)
pixel_values: Tensor (B, res_h, res_w) # res_h and res_w are the shape of image after processor()
image_grid_thw: Tensor (B, 3)
```
I believe my input batch to the model has the correct shape, so I'm wondering whether my tiny workaround is the right solution to the problem. I would really appreciate it if you could suggest a better fix.
### Expected behavior
See the Reproduction section. Thanks for your patience with my issue. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36144 |
TITLE
Add the support for deepseek architecture .gguf
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
The current version does not support GGUF files using the deepseek architecture; it would be great if the deepseek architecture were added. [[supported-model-architectures]](https://huggingface.co/docs/transformers/en/gguf#supported-model-architectures)
### Motivation
Some frameworks built on transformers (e.g. vLLM) raise an error when loading a .gguf file of a deepseek model or a quantized deepseek model.
* [[Usage]: Does DeepSeek-R1 1.58-bit Dynamic Quant work on VLLM? · Issue #12573 · vllm-project/vllm](https://github.com/vllm-project/vllm/issues/12573)
* [unsloth/DeepSeek-R1-GGUF · Running the model with vLLM does not actually work](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/discussions/12)
### Your contribution
Is there any guidance to help users add relevant support? | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34056 |
TITLE
Boolean as tool input
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
It would be great if `boolean` was authorized as input to a `Tool`
### Motivation
I would like to use my own tools with the transformers CodeAgent, via the `tool` method.
I have a proper function `func` with typing and doc-strings as required. One of the inputs of the function is a `bool`.
When I try to run `tool(func)` I get: `Exception: Input 'perte_de_salaire': type 'boolean' is not an authorized value, should be one of ['string', 'integer', 'number', 'image', 'audio', 'any']`.
The exception is rather clear, but why would a type as basic as `boolean` not be allowed, especially since `any` is authorized? This is clearly a limitation of the library.
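For reference, a minimal reproduction looks roughly like this (the import path and the example function are illustrative, not the author's actual code):
```python
from transformers import tool  # agents tool decorator

@tool
def check_compensation(salary: float, perte_de_salaire: bool) -> str:
    """Checks whether a compensation applies.

    Args:
        salary: monthly salary in euros.
        perte_de_salaire: whether there was a loss of salary.
    """
    return "compensation" if perte_de_salaire else "no compensation"

# Raises: Exception: Input 'perte_de_salaire': type 'boolean' is not an authorized value, ...
```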
### Your contribution
It seems like a few lines of code to change in tools.py (https://github.com/huggingface/transformers/blob/main/src/transformers/agents/tools.py) | [
76,
78
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0
] | [
"Feature request",
"Agents"
] |
https://api.github.com/repos/huggingface/transformers/issues/34476 |
TITLE
Albert is ExecuTorch compatible
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Albert is ExecuTorch compatible.
Unit Test:
`RUN_SLOW=1 pytest tests/models/albert/test_modeling_albert.py -k test_export -v`
```
tests/models/albert/test_modeling_albert.py::AlbertModelIntegrationTest::test_export PASSED [100%]
```
E2E test in ExecuTorch:
Patch https://github.com/pytorch/executorch/pull/6509
`python -m extension.export_util.export_hf_model -hfm="albert/albert-base-v2" -lm masked_lm`
```
Saved exported program to ./albert.pte
```
`./cmake-out/backends/xnnpack/xnn_executor_runner --model_path albert.pte`
```
I 00:00:00.051666 executorch:executor_runner.cpp:82] Model file albert.pte is loaded.
I 00:00:00.051701 executorch:executor_runner.cpp:91] Using method forward
I 00:00:00.051704 executorch:executor_runner.cpp:138] Setting up planned buffer 0, size 12005376.
I 00:00:00.101731 executorch:executor_runner.cpp:161] Method loaded.
I 00:00:00.101775 executorch:executor_runner.cpp:171] Inputs prepared.
I 00:00:00.251130 executorch:executor_runner.cpp:180] Model executed successfully.
I 00:00:00.251148 executorch:executor_runner.cpp:184] 1 outputs:
Output 0: tensor(sizes=[1, 64, 30000], [
7.12213, 16.4255, -9.69697, 0.315882, 7.49277, 8.37929, 8.01692, 12.2838, 8.11998, 12.4227,
7.5468, -5.25646, -5.68964, 11.3917, 8.85877, 8.94538, 5.69543, 7.87437, 10.1869, 6.47921,
5.09051, 8.5898, 7.79427, 1.2211, 3.30417, 3.22097, 1.58806, 9.30696, 1.07529, 4.84525,
2.17895, 8.81211, -1.02848, -3.64258, 6.78737, 4.30354, 1.65078, 3.47092, 11.7028, 7.89638,
5.70505, -1.05684, 8.3248, 12.2657, 4.26686, 10.2256, -1.99968, 2.86684, 1.18797, 16.2016,
1.63196, 5.46712, 2.33064, 7.08936, 0.676241, 6.57334, 1.04902, 0.281277, 12.6735, -1.04131,
4.93435, -5.3161, 10.982, 2.07643, -1.98044, 1.17825, -6.78902, -0.594365, 9.06238, 11.7988,
6.41249, 2.30598, 2.37509, 8.14539, 0.708781, 0.270195, -0.437858, -3.87035, -3.94704, 12.5791,
0.291936, 5.41188, -2.38334, -4.61858, 2.57807, -0.0342076, -2.09207, 3.3832, 4.2705, -5.35976,
6.55041, -5.35834, 0.0824419, 10.0817, -11.5175, 7.71341, 14.2482, -2.19647, 0.258341, 13.5795,
...,
-26.6734, -15.8391, -9.05885, -22.9564, -14.1135, -14.3582, -1.38681, -22.967, -6.46937, -5.23052,
-15.8735, -0.781886, 1.96928, -0.801466, -13.4606, -9.3534, -7.63344, -18.6456, -14.0491, -10.0933,
-10.3132, -11.3254, -12.3537, -4.23457, -9.51285, -19.6473, -14.6648, -5.87785, -2.96578, -14.0239,
-0.557438, -5.21334, -5.5204, 1.0429, -8.47772, -12.8267, -8.01721, -15.3659, -15.7359, -14.8388,
-11.0749, -14.5002, -22.6418, -7.16905, -5.90876, -12.3513, -9.51316, -21.9345, -18.6938, -6.07597,
-11.0177, 0.404317, -6.31417, -8.48093, -7.75292, -7.26334, -14.5192, -9.14845, -10.1494, -5.35306,
-2.2068, -12.4971, -20.1255, -3.67846, -8.99902, -11.6741, -13.5727, -0.0831118, -12.0526, -7.48546,
-18.1656, -12.0559, -6.95208, 1.14825, -11.53, -13.2759, -9.91268, -8.80736, -3.15759, -4.27456,
-6.43947, -7.06724, -8.69398, -13.4397, -4.94796, -9.45768, -11.02, -11.3739, -10.9547, -18.7554,
-25.8251, -12.1951, -6.00279, -9.81018, -5.64514, -20.6445, -12.1152, -7.1209, -13.5729, -8.33296,
])
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #33836
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@qubvel
| [
73,
31
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow",
"ExecuTorch"
] |
https://api.github.com/repos/huggingface/transformers/issues/33837 |
TITLE
Stable Diffusion is ExecuTorch compatible
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Enable Stable Diffusion to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow
### Motivation
See details in #32253
### Your contribution
Stable Diffusion model enablement | [
76,
31
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"ExecuTorch"
] |
https://api.github.com/repos/huggingface/transformers/issues/35335 |
TITLE
Default arguments in `DebertaConfig` disable relative attention, contrary to the docs and `deberta-base`
COMMENTS
8
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.47.0
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The documentation for `DebertaConfig` says that
> Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) architecture.
Yet, the **most important part** of DeBERTa, namely the relative attention, is disabled by default in the model and in the config:
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L191
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/configuration_deberta.py#L71-L75
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/configuration_deberta.py#L120
Even when users request a given amount of `max_relative_positions`, relative attention stays disabled as long as that option is set to False.
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L201-L210
And indeed:
```python
from transformers import DebertaConfig
config = DebertaConfig()
print(config.relative_attention)
```
This prints False, and when you instantiate a new DeBERTa model, e.g. like
```python
from transformers import DebertaConfig, DebertaForMaskedLM
print(DebertaForMaskedLM._from_config(DebertaConfig()))
print(DebertaForMaskedLM._from_config(DebertaConfig(max_relative_positions=512)))
```
...there are **no relative positional embeddings** in the model, only absolute positional embeddings. This model will also not do any disentangled attention.
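In the meantime, relative (disentangled) attention has to be requested explicitly; a minimal sketch of a config that actually matches `deberta-base` in this respect:
```python
from transformers import DebertaConfig, DebertaForMaskedLM

config = DebertaConfig(
    relative_attention=True,        # not the default today, contrary to the docs
    max_relative_positions=512,
    pos_att_type=["c2p", "p2c"],    # disentangled attention types used by microsoft/deberta-base
)
model = DebertaForMaskedLM._from_config(config)
```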
### Expected behavior
Conform to the documentation by setting `relative_attention=True` in the `DebertaConfig` by default.
I would also add a warning when relative attention is False, so that users know very clearly that *despite* using a DeBERTa model, they are not getting the core feature offered by DeBERTa, namely the relative attention. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35723 |
TITLE
Break point in Transformers V4.48.0 and python 3.9
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
### Who can help?
@gante @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Install Python 3.9
Install Transformers >= v4.48.0
Install fastchat and accelerate
I was trying to apply the Vicuna-13B delta weights to existing local Llama-13B weights; the advised command was:
```
python3 -m fastchat.model.apply_delta \
--base-model-path /path/to/llama-13b \
--target-model-path /path/to/output/vicuna-13b \
--delta-path lmsys/vicuna-13b-delta-v1.1
```
doing this raised the following error:
```
RuntimeError: Failed to import transformers.generation.streamers because of the following error (look up to see its traceback):
unsupported operand type(s) for |: 'type' and 'NoneType'
```
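For context, PEP 604 unions written as `X | None` only evaluate at runtime on Python 3.10+; on 3.9 the bare expression reproduces the same operand error (a minimal illustration, independent of fastchat):
```python
# Python 3.9
>>> dict | None
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
```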
### Expected behavior
Work in Python 3.9 | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33467 |
TITLE
Support context parallel training with ring-flash-attention
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 3
heart: 0
rocket: 2
eyes: 0
BODY
### Feature request
Hi, I'm the author of [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention).
I wonder if you are interested in integrating context parallelism with [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention), so that users can train LLMs on long data more efficiently.
### Motivation
With OpenAI o1 released, it will probably become common to train models on really long CoT data. It would be nice if most models in the transformers library could support long-context training efficiently with some form of context parallelism, i.e. letting the context length scale linearly with the number of GPUs.
The 3 existing context parallel methods are DeepSpeed Ulysses, ring attention, and the one proposed in the [llama3 tech report](https://arxiv.org/abs/2407.21783). DeepSpeed Ulysses is limited by the number of KV heads (the maximum context length is `num_head_kv * seq_length_per_gpu`), which makes it a little unfriendly to GQA models. So it would be great if the transformers library could support one or both of the other 2 context parallel methods.
Both ring attention and the llama3 strategy are supported with flash attention in [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention), whose correctness has been validated by [jzhang38/EasyContext](https://github.com/jzhang38/EasyContext). The library has basically the same API as flash attention and hides the required communication from its user, making it an easy substitution at any original flash attention API call site.
Therefore, I believe it would be easy to support context parallelism with [zhuzilin/ring-flash-attention](https://github.com/zhuzilin/ring-flash-attention). For example, we could add a separate branch in `modeling_flash_attention_utils._flash_attention_forward`.
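A rough sketch of what such a branch might look like (the `ring_flash_attn_func` import follows the linked repo's README; the flag and wiring here are assumptions, not an agreed design):
```python
# hypothetical branch inside modeling_flash_attention_utils._flash_attention_forward
if context_parallel_enabled:  # assumed flag, e.g. set from the training config
    from ring_flash_attn import ring_flash_attn_func

    # same tensor layout as flash_attn_func; process-group setup is omitted here
    attn_output = ring_flash_attn_func(
        query_states, key_states, value_states, dropout_p=dropout, causal=is_causal
    )
else:
    attn_output = flash_attn_func(
        query_states, key_states, value_states, dropout, causal=is_causal
    )
```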
### Your contribution
I'd love to help if you have interests :) | [
76,
0,
68
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Good Difficult Issue",
"Flash Attention"
] |
https://api.github.com/repos/huggingface/transformers/issues/35124 |
TITLE
Add common test for `torch.export` and fix some vision models
COMMENTS
25
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 1
eyes: 0
BODY
# What does this PR do?
Add a common **slow** test to check if a model can be exported with no issues using `torch.export.export`
1. Add an optional test, to enable it please set `test_torch_exportable = True` flag for model-specific test.
2. Enable test for vision and video models
3. Fix most of the vision models
The main fixes include:
- Use a compile-compatible LRU cache for models.
- Avoid modifying model parameters in the forward pass (e.g. self.param = self.param + x).
- Avoid modifying leaf in-place tensors created in the forward pass.
- Avoid creating tensors with `requires_grad=True` in the forward pass.
Testing is not exhaustive; there might be code paths that can't be exported. I did additional testing with specific checkpoints, and in most cases we are safe. The only two situations I found where the tests pass but exporting the checkpoint does not are:
- beit (fixed)
- zoedepth (not fixed)
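For reference, the property being tested boils down to something like this (a sketch against an arbitrary public checkpoint; the actual test uses the tiny test configs):
```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").eval()
pixel_values = torch.randn(1, 3, 224, 224)

# should not raise for the models marked ✅/🔵 below
exported_program = torch.export.export(model, (pixel_values,))
```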
## Results
✅ - can be exported with `torch.export.export`
🔵 - export fixed in this PR
❌ - can't be exported
### Vision models
- 🔵 beit
- 🔵 bit
- 🔵 conditional_detr
- ✅ convnext
- ✅ convnextv2
- ✅ cvt
- ✅ dab_detr
- 🔵 deformable_detr
- ✅ deit
- ✅ depth_anything
- ✅ depth_pro
- 🔵 detr
- ✅ dinat
- ✅ dinov2
- ✅ dinov2_with_registers
- ✅ dit
- ✅ dpt
- ✅ efficientnet
- 🔵 focalnet
- ✅ glpn
- ✅ hiera
- ✅ ijepa
- 🔵 imagegpt
- ❌ levit (low usage, won't fix)
- ✅ mask2former
- 🔵 maskformer
- ✅ mobilenet_v1
- ✅ mobilenet_v2
- ✅ mobilevit
- ✅ mobilevitv2
- ✅ poolformer
- ✅ pvt
- ✅ pvt_v2
- ✅ regnet
- ✅ resnet
- ✅ rt_detr
- 🔵 rt_detr_v2
- ✅ segformer
- 🔵 seggpt
- ❌ superpoint (data-dependent [expression](https://github.com/huggingface/transformers/blob/main/src/transformers/models/superpoint/modeling_superpoint.py#L320))
- ✅ swiftformer
- ✅ swin
- ✅ swinv2
- 🔵 swin2sr
- ✅ table_transformer
- ✅ textnet
- ✅ upernet
- ✅ vit
- ✅ vitdet
- ✅ vit_mae
- ✅ vitmatte
- ✅ vit_msn
- ✅ vitpose
- ✅ vitpose_backbone
- ✅ yolos
- ❌ zoedepth (data-dependent expression; test config passes but checkpoint does not)
### Video models
- ✅ timesformer
- ✅ vivit
- ✅ videomae
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
62,
73,
11
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision",
"run-slow",
"torch export"
] |
https://api.github.com/repos/huggingface/transformers/issues/34481 |
TITLE
Expand AcceleratorConfig to accommodate other features such as NCCL timeout etc
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Expand `AcceleratorConfig` and the corresponding transformers Trainer args to let users access the full feature set of accelerate through the config arguments supported by `Accelerator()`. The args are materialized for use here - https://github.com/huggingface/transformers/blob/a769ed45e17c44fd17b85c025863c4e4f2f73634/src/transformers/trainer.py#L5000
### Motivation
When using `HF/transformers` or `HF/trl SFTTrainer` with accelerate under the hood, it's unfortunate that only a limited set of arguments is exposed in `AcceleratorConfig`, leaving no control over other features of the `Accelerator`, such as modifying the NCCL timeout.
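For context, this is the kind of knob plain accelerate already exposes but the Trainer does not surface (a sketch using accelerate's public `kwargs_handlers` API):
```python
from datetime import timedelta
from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

# possible with bare accelerate today, but not reachable through TrainingArguments/AcceleratorConfig
accelerator = Accelerator(
    kwargs_handlers=[InitProcessGroupKwargs(timeout=timedelta(minutes=90))]
)
```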
### Your contribution
I will be glad to raise a PR to expand AcceleratorConfig to enable the wide array of arguments supported by `Accelerator`. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35618 |
TITLE
Help Understanding Beam Search Scores in Hugging Face (LLaMA + LoRA)
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Hello Hugging Face community,
I’m working with a LLaMA-based model that has a LoRA (Low-Rank Adapter) applied, and I’m using beam search in Transformers. I’m trying to debug how the final beam scores are computed, because the step-by-step log probabilities I print out look far more negative than the final “sequence score” reported by Hugging Face.
Below is a sample of my debug output for 4 beams, each showing:
Generated Sequence (token IDs, excluding the prompt/input).
Generated Text (decoded).
Step-by-Step Analysis: Each newly generated token’s log probability.
HF Cumulative Sequence Score (final beam score from generation_output.sequences_scores).
Debug Info (lengths, how many log-prob steps were used vs. available).
=== HuggingFace Beam Analysis (Generated Tokens Only) ===
Input sequence length: 148
--- Beam 1 ---
Generated Sequence (IDs): [32, 3202, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001]
Generated Text: AUP
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-28.383789
Step 3: Token='' (ID=128001), LogProb=-32.667973
Final Scores:
HF Cumulative Sequence Score: -0.247081
--- Beam 2 ---
Generated Sequence (IDs): [51154, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001]
Generated Text: Others
Step-by-Step Analysis:
Step 1: Token='Others' (ID=51154), LogProb=-0.647490
Step 2: Token='' (ID=128001), LogProb=-29.399292
Final Scores:
HF Cumulative Sequence Score: -0.323745
--- Beam 3 ---
Generated Sequence (IDs): [32, 3202, 320, 6546, 1428, 11, 10984, 49541, 13, 15388, 3298, 8, 128001]
Generated Text: AUP (CSAM, Encourg. Illegal Act)
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-20.869020
Step 3: Token=' (' (ID=320), LogProb=-9.416358
Step 4: Token='CS' (ID=6546), LogProb=-19.269587
Step 5: Token='AM' (ID=1428), LogProb=-23.486216
Step 6: Token=',' (ID=11), LogProb=-10.883574
Step 7: Token=' Enc' (ID=10984), LogProb=-0.144973
Step 8: Token='ourg' (ID=49541), LogProb=-0.001301
Step 9: Token='.' (ID=13), LogProb=-0.001659
Step 10: Token=' Illegal' (ID=15388), LogProb=-20.425816
Step 11: Token=' Act' (ID=3298), LogProb=-14.907486
Step 12: Token=')' (ID=8), LogProb=-0.150186
Step 13: Token='' (ID=128001), LogProb=-17.213655
Final Scores:
HF Cumulative Sequence Score: -1.447294
--- Beam 4 ---
Generated Sequence (IDs): [32, 3202, 320, 6546, 1428, 11, 10984, 49541, 13, 15388, 3298, 6266, 128001]
Generated Text: AUP (CSAM, Encourg. Illegal Act.)
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-28.162111
Step 3: Token=' (' (ID=320), LogProb=-10.757921
Step 4: Token='CS' (ID=6546), LogProb=-6.859391
Step 5: Token='AM' (ID=1428), LogProb=-20.384962
Step 6: Token=',' (ID=11), LogProb=-15.148496
Step 7: Token=' Enc' (ID=10984), LogProb=-0.298849
Step 8: Token='ourg' (ID=49541), LogProb=-18.535187
Step 9: Token='.' (ID=13), LogProb=-0.006747
Step 10: Token=' Illegal' (ID=15388), LogProb=-14.434349
Step 11: Token=' Act' (ID=3298), LogProb=-12.582914
Step 12: Token='.)' (ID=6266), LogProb=-12.790556
Step 13: Token='' (ID=128001), LogProb=-20.104782
Final Scores:
HF Cumulative Sequence Score: -1.464120
The Question
--------------
How does Hugging Face’s beam search compute the final scores (e.g., −0.247081, −0.323745, −1.447294, −1.464120) given the very negative individual log probabilities?
For example, for the first beam, I expected a cumulative probability of (-0.741240 - 28.38378 - 32.667973) / 3 = -20.597667 since no length_penalty is being applied. However, the final sequences_scores from HF differ significantly from any straightforward summation of the listed token log-probs, even when accounting for a length_penalty.
Can someone help clarify how these scores are calculated?
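For what it's worth, the supported way to get per-token scores that line up with each returned beam is `compute_transition_scores`, which accounts for beam reordering via `beam_indices` instead of assuming that row `beam_idx` of `scores` always corresponds to the same final beam (a sketch based on the generation call below):
```python
import numpy as np

transition_scores = model.compute_transition_scores(
    generation_output.sequences,
    generation_output.scores,
    beam_indices=generation_output.beam_indices,
    normalize_logits=False,
)
# as in the generation docs: length-normalising the summed transition scores
# recovers generation_output.sequences_scores
output_length = np.sum(transition_scores.cpu().numpy() < 0, axis=1)
length_penalty = model.generation_config.length_penalty
reconstructed = transition_scores.cpu().sum(axis=1) / (output_length ** length_penalty)
print(np.allclose(generation_output.sequences_scores.cpu(), reconstructed))
```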
### Who can help?
@gante @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
GENERATION CODE :
------------------------------------------------------------------------------------------------------------------------
model_name = "./Llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(
model_name,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map='auto',
)
adaptor_path = './model_spec/checkpoints/checkpoint-200'
model = PeftModel.from_pretrained(
model,
adaptor_path,
torch_dtype=torch.float16,
)
model.eval()
message = "Lady Sold Children's Clothes That She Don't Send!"
input_raw = "Message: {message}"
input = input_raw.format(message=message)
instruction = "Does this customer-reported message indicate an AUP violation from the following categories? \n[A, B, C]\nIf yes, respond 'AUP'; if not, respond 'Others'."
prompt_template = f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
prompt = prompt_template.format(instruction=instruction, input=input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to('cuda')
generation_config = GenerationConfig(
temperature=0,
top_p=1,
top_k=-1,
num_beams=4, # Number of beams for beam search
num_return_sequences=4, # Return all beams
)
generate_params = {
"input_ids": input_ids,
"generation_config": generation_config,
"return_dict_in_generate": True,
"output_scores": True,
"max_new_tokens": 128,
}
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=128
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s,skip_special_tokens=True)
result = output.split('assistant')[1].strip()
```
DECODE CODE :
------------------------------------------------------------------------------------------------------------------------
```
import torch
import torch.nn.functional as F
def analyze_beams(
    generation_output,
    tokenizer,
    input_ids,
    end_of_text_id=128001,
    length_penalty=1.0,
    ignore_after_first_eos=False
):
    """
    Analyzes final beams from a Hugging Face generation output.
    1) Excludes the original input tokens, only focusing on "newly generated" tokens.
    2) Prints step-by-step tokens (ID & text) + log-probs.
    3) Applies optional length penalty for the final "calculated score."
    4) Optionally stops counting tokens after first <eos> if 'ignore_after_first_eos=True'.
    :param generation_output: Object with attributes:
        - sequences: final beam sequences (tensor shape [num_beams, total_seq_len])
        - sequences_scores: final HF beam scores
        - scores: list of per-step logits ([num_steps], each shape [num_beams, vocab_size])
    :param tokenizer: A Hugging Face tokenizer to decode tokens into text.
    :param input_ids: The original input_ids (so we can know how many tokens to skip).
    :param end_of_text_id: The <eos> or <end_of_text> token ID (default=128001).
    :param length_penalty: Exponent for length normalization.
    :param ignore_after_first_eos: If True, we ignore any tokens after the first <eos>.
    """
    # 1) Determine how many input tokens to skip
    input_length = len(input_ids[0])  # e.g. shape [batch_size, seq_len]
    print("\n=== HuggingFace Beam Analysis (Generated Tokens Only) ===")
    print(f"Input sequence length: {input_length}")
    # 2) Convert generation_output.scores into shape [num_beams, steps, vocab_size]
    logits = torch.stack(generation_output.scores, dim=1)  # shape [num_beams, steps, vocab_size]
    log_probs = F.log_softmax(logits, dim=-1)  # shape [num_beams, steps, vocab_size]
    beam_sequences = generation_output.sequences
    beam_scores = generation_output.sequences_scores
    num_beams = beam_sequences.shape[0]
    steps_available = log_probs.shape[1]
    vocab_size = log_probs.shape[2]
    # 3) Analyze each beam
    for beam_idx in range(num_beams):
        print(f"\n--- Beam {beam_idx + 1} ---")
        # Slice out only the newly generated portion (excluding input)
        full_sequence = beam_sequences[beam_idx]
        generated_sequence = full_sequence[input_length:]  # This is your "generated" part
        # Decode text
        generated_text = tokenizer.decode(generated_sequence, skip_special_tokens=True)
        print(f"Generated Sequence (IDs): {generated_sequence.tolist()}")
        print(f"Generated Text: {generated_text}")
        print("\nStep-by-Step Analysis:")
        beam_score_sum = 0.0
        used_step_count = 0
        # We'll iterate over each newly generated token
        for step_idx, token_id in enumerate(generated_sequence):
            if step_idx >= steps_available:
                # We've run out of log_probs steps
                break
            # Retrieve distribution for this beam at this step
            # shape [vocab_size]
            token_log_probs = log_probs[beam_idx, step_idx]
            # The log-prob for the chosen token_id
            token_logp = token_log_probs[token_id].item()
            # Accumulate beam score
            beam_score_sum += token_logp
            used_step_count += 1
            # Print step info
            token_text = tokenizer.decode([token_id], skip_special_tokens=True)
            print(
                f"Step {step_idx + 1}: "
                f"Token='{token_text}' (ID={token_id}), LogProb={token_logp:.6f}"
            )
            # If ignoring repeated <eos>, we break after the first <eos> token
            if ignore_after_first_eos and token_id == end_of_text_id:
                break
        # 4) Apply length penalty
        # If all tokens are used, used_step_count is the length; otherwise we truncated early
        final_len = used_step_count if used_step_count > 0 else 1
        calculated_score = beam_score_sum / (final_len ** length_penalty)
        # 5) Print results
        print("\nFinal Scores:")
        # Show Hugging Face's final beam score
        hf_score = beam_scores[beam_idx].item()
        print(f" HF Cumulative Sequence Score: {hf_score:.6f}")
        print(f" Calculated Score: {calculated_score:.6f}")
        print("\nDebug Info:")
        print(f" Full sequence length: {len(full_sequence)} (including input)")
        print(f" Generated sequence length: {len(generated_sequence)}")
        print(f" Steps of log_probs used: {used_step_count}")
        print(f" Steps of log_probs avail: {steps_available}")
        print(f" Vocab size: {vocab_size}")
```
### Expected behavior
Expected a cumulative probability of (-0.741240 - 28.38378 - 32.667973) / 3 = -20.597667 since no length_penalty is being applied. | [
64,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Generation"
] |
https://api.github.com/repos/huggingface/transformers/issues/34933 |
TITLE
Add support for causal language modeling for `DistilBertModel`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
HuggingFace Transformers currently supports causal language modeling (CLM) fine-tuning for BERT using`BertLMHeadModel` [as shown in the docs](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertLMHeadModel). My request is simply to extend this support to `DistilBertModel`.
### Motivation
I want to use a DistilBERT model to initialize an `EncoderDecoderModel`, but I am getting an error message that says it does not support CLM.
```python
from transformers import EncoderDecoderModel
EncoderDecoderModel.from_encoder_decoder_pretrained(
encoder_pretrained_model_name_or_path="distilbert/distilbert-base-multilingual-cased",
decoder_pretrained_model_name_or_path="distilbert/distilbert-base-multilingual-cased",
)
```
Here is the error message:
```
ValueError: Unrecognized configuration class <class 'transformers.models.distilbert.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GemmaConfig, Gemma2Config, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
```
### Your contribution
I'm happy to contribute. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35068 |
TITLE
bitsandbytes: simplify 8bit dequantization
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Simplifies the dequantization for bitsandbytes int8 weights. There is a similar PR open in [huggingface/peft#2245](https://github.com/huggingface/peft/pull/2245/files).
In the upcoming bitsandbytes release we will have an added API for int8 dequantization. For backwards compatibility, a simplified (but functionally equivalent) dequantization operation is performed.
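Conceptually, row-wise LLM.int8() dequantization is just a rescale of the int8 matrix by its per-row absmax statistics; a sketch of the idea (not the PR's exact code):
```python
import torch

def dequantize_int8_rowwise(CB: torch.Tensor, SCB: torch.Tensor, dtype=torch.float16) -> torch.Tensor:
    """CB: int8 weight matrix, SCB: per-output-row absmax scales. Illustrative only."""
    return (CB.to(torch.float32) * SCB.view(-1, 1) / 127).to(dtype)
```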
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@SunMarc @BenjaminBossan
| [
40
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Quantization"
] |
https://api.github.com/repos/huggingface/transformers/issues/33353 |
TITLE
Add visit webpage tool
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Adds a tool to visit webpages (once they're found by the DuckDuckGoSearchTool). | [
78
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
"Agents"
] |
https://api.github.com/repos/huggingface/transformers/issues/33807 |
TITLE
Saving model in safetensors format through Trainer fails for Gemma 2 due to shared tensors
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 1
rocket: 0
eyes: 3
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.10.220-209.869.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.14
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A10G
### Who can help?
@muellerz @SunMarc
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am finetuning `google/gemma-2-2b` and these are the arguments and trainer call:
```
text_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b", token=token, attn_implementation='eager')
training_args = TrainingArguments(
output_dir=args.log_dir,
num_train_epochs=args.epochs,
per_device_train_batch_size=args.train_batch_size,
per_device_eval_batch_size=args.eval_batch_size,
warmup_steps=args.warmup_steps,
learning_rate=args.learning_rate,
evaluation_strategy="no",
logging_dir=args.log_dir,
logging_steps=50,
save_strategy="steps",
save_steps=2000,
report_to="mlflow",
run_name=args.run_name,
)
trainer = Trainer(
model=text_model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics,
)
```
I am getting the following error when trainer tries to save the model:
```
RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'text_model.model.embed_tokens.weight', 'text_model.lm_head.weight'}].
A potential way to correctly save your model is to use `save_model`.
```
I have currently disabled saving as safetensors through the training arguments:
`save_safetensors=False,`
### Expected behavior
Should save in safetensors without raising an error. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36092 |
TITLE
Adding tokens and resizing doesn't work
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Python 3.11 colab
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hello. I'm trying to add custom tokens into `Qwen2.5-VL-7B-Instruct` model like this:
```
from transformers import AutoProcessor, AutoModelForImageTextToText, AutoTokenizer
import torch
import os
base_model_name = "/content/Qwen2.5-VL-7B-Instruct"
base_model = AutoModelForImageTextToText.from_pretrained(
base_model_name,
low_cpu_mem_usage=True,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
additional_tokens = [
"<memory>", "</memory>", "<shujinko>", "</shujinko>",
"<kanojo>", "</kanojo>", "<dialog>", "<yuki>", "</yuki>",
"<yuna>", "</yuna>", "<hito>", "</hito>", "<qt>", "</qt>",
"<action>", "</action>", "<data>", "</data>", "<unk>"
]
tokenizer.add_tokens(additional_tokens)
print(f"New vocab size: {len(tokenizer)}")
base_model.resize_token_embeddings(len(tokenizer))
if base_model.config.tie_word_embeddings:
    with torch.no_grad():
        output_embeddings = base_model.get_output_embeddings()
        if output_embeddings is not None:
            # Copy weights for new tokens from input embeddings to output embeddings
            output_embeddings.weight[-len(additional_tokens):] = base_model.get_input_embeddings().weight[-len(additional_tokens):]
new_model_dir = "new_model"
os.makedirs(new_model_dir, exist_ok=True)
base_model.save_pretrained(new_model_dir, safe_serialization=False) # Disable safe serialization for large models
tokenizer.save_pretrained(new_model_dir)
print("New model created and saved.")
```
Error:
```
Loading checkpoint shards: 100%
5/5 [00:03<00:00, 2.22it/s]
WARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.
New vocab size: 151684
/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py:2842: UserWarning: Attempting to save a model with offloaded modules. Ensure that unallocated cpu memory exceeds the `shard_size` (5GB default)
warnings.warn(
Saving checkpoint shards: 75%
3/4 [00:44<00:14, 14.85s/it]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-20-6a2d7dfe0bf0>](https://localhost:8080/#) in <cell line: 0>()
39 new_model_dir = "new_model"
40 os.makedirs(new_model_dir, exist_ok=True)
---> 41 base_model.save_pretrained(new_model_dir, safe_serialization=False) # Disable safe serialization for large models
42 tokenizer.save_pretrained(new_model_dir)
43 print("New model created and saved.")
5 frames
[/usr/local/lib/python3.11/dist-packages/accelerate/utils/modeling.py](https://localhost:8080/#) in set_module_tensor_to_device(module, tensor_name, device, value, dtype, fp16_statistics, tied_params_map)
285 # In other cases, we want to make sure we're not loading checkpoints that do not match the config.
286 if old_value.shape != value.shape and param_cls.__name__ != "Params4bit":
--> 287 raise ValueError(
288 f'Trying to set a tensor of shape {value.shape} in "{tensor_name}" (which has shape {old_value.shape}), this looks incorrect.'
289 )
ValueError: Trying to set a tensor of shape torch.Size([152064, 3584]) in "weight" (which has shape torch.Size([151684, 3584])), this looks incorrect.
```
How to fix this?
### Expected behavior
Should work | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33567 |
TITLE
Mamba 2 Multi-GPU errors out on generation with parallel beam search
COMMENTS
1
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### Observed issue
Found out when running multi-gpu slow tests in https://github.com/huggingface/transformers/pull/33560 .
Line 479 exactly of the mamba2 modeling file
https://github.com/huggingface/transformers/blob/8efc06ee1863bd6e34e8adb7b10901da87c66818/src/transformers/models/mamba2/modeling_mamba2.py#L472-L480
Will raise the following for the test `tests/models/mamba2/test_modeling_mamba2.py::Mamba2ModelTest::test_model_parallel_beam_search`
```bash
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
```
### Mitigation
Move the smallest tensors of that operation to the device of the largest. Will do that soon, but if anyone wants to jump on the issue happy to review as well. | [
38,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Distributed Training / Models",
"Generation"
] |
https://api.github.com/repos/huggingface/transformers/issues/35978 |
TITLE
HPD-Transformer: A Hybrid Parsing-Density Transformer for Efficient Structured & Probabilistic Reasoning
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
**Overview**
HPD‑Transformer is a hybrid AI model combining structured parsing (syntax/semantic analysis) and probabilistic density estimation (uncertainty-aware reasoning) within a single, energy-efficient framework. Developed under the brand name **OpenSeek**, HPD‑Transformer outperforms several general-purpose LLMs (e.g., ChatGPT‑4, Qwen 2.5 Max, DeepSeek) on specialized tasks while reducing computational costs by up to 60–70%.
### Key Features
- **Hybrid Architecture**: Integrates parsing and density modules.
- **Sparse Mixture of Experts (MoE)**: Domain‑specific experts reduce compute cost.
- **Energy Efficiency**: Uses quantization, pruning, and Performer attention for ~60% lower FLOPs.
- **Multi‑Modal & Multilingual**: Handles text, tables, and 50+ languages.
- **Real‑Time UI**: Interactive visualization for parsing, uncertainty estimates, and more.
### Methodology Highlights
1. **Hybrid Parsing-Density**:
- Parsing Module: Lightweight transformer blocks (Performer) for syntactic/semantic analysis.
- Density Module: Monte Carlo dropout & Sparse Gaussian Processes for uncertainty modeling.
2. **Sparse MoE**:
- 32 experts (small feed-forward networks), each specialized in a domain (medical, legal, finance, etc.).
- Top-2 routing activates only the most relevant experts per token.
3. **Training**:
- **Knowledge Distillation** from teacher models (ChatGPT‑4, Qwen 2.5 Max, etc.).
- **RLHF**: Reinforcement Learning from Human Feedback for correctness and clarity.
- **Curriculum Learning**: General pretraining → domain-specific → task-specific.
- **Online Meta-Learning**: Real-time adaptation without full retraining.
4. **Efficiency**:
- 8-bit Quantization, structured pruning, and mixed-precision training.
- Performer (FAVOR+) attention for O(n) complexity.
5. **Evaluation & Benchmarks**:
- Targets >80% accuracy on MMLU, surpassing ChatGPT‑4 (~78%).
- Achieves lower inference cost ($0.001/query) vs. ChatGPT‑4’s ($0.005/query).
6. **Use Cases**:
- High-stakes fields (healthcare, legal, finance) needing interpretable outputs.
- Edge deployments where compute/energy are limited.
7. **Limitations**:
- Context window limited to ~8k tokens (less than some mega-LLMs).
- May require additional domain experts for niche tasks.
**Reference Implementation**
We provide a reference PyTorch implementation (see code snippets below) that includes:
- Shared Embedding Layer
- Parsing Module (Performer-based)
- Density Module (Bayesian Neural Network + MC dropout)
- Sparse Mixture of Experts (Top-2 gating)
- Simple training loop for demonstration
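The referenced code snippets are not included in this record; purely for illustration, top-2 expert routing of the kind described above typically looks like this (not the authors' implementation):
```python
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    """Illustrative top-2 gated mixture-of-experts layer."""

    def __init__(self, d_model: int, num_experts: int = 32, d_ff: int = 1024):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, d_model)
        gate_scores = self.gate(x)                        # (num_tokens, num_experts)
        top_vals, top_idx = gate_scores.topk(2, dim=-1)   # each token is routed to 2 experts
        weights = torch.softmax(top_vals, dim=-1)
        out = torch.zeros_like(x)
        for expert_id, expert in enumerate(self.experts):
            for slot in range(2):
                mask = top_idx[:, slot] == expert_id
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```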
**UI/Deployment**
- FastAPI backend with Docker support for cloud or on-prem deployment.
- Optional Streamlit/React UI to visualize dependency parsing and uncertainty in real-time.
- Supports edge deployments via ONNX or TensorFlow Lite.
**License**
- Core modules are open-sourced under Apache 2.0.
- Extended enterprise features available for commercial use.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
[HPD.docx](https://github.com/user-attachments/files/18615085/HPD.docx) | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35565 |
TITLE
Add cosmos from Nvidia
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### Model description
https://www.nvidia.com/en-us/ai/cosmos/ on model is autoregressive, all open source
We might have a PR coming directly from NVIDIA!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | [
77,
0
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Good Difficult Issue"
] |
https://api.github.com/repos/huggingface/transformers/issues/35084 |
TITLE
size mismatch for lm_head.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]).
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers version: 4.46.3
Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17
Python version: 3.8.20
Huggingface_hub version: 0.26.2
Safetensors version: 0.4.5
Accelerate version: 1.0.1
Accelerate config: not found
PyTorch version (GPU?): 2.4.1+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
Using GPU in script?:
GPU type: NVIDIA RTX A4000
### Who can help?
@ArthurZucker
Hi, I am using T5 1.1 base to train on my seq2seq task; the source vocab size is 59744 and the target vocab size is only 32. I know that by default the LM head size and decoder embedding size are equal to the vocab size (59k), so I change the decoder embedding and the LM head as follows when training, and no error appears during training:
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
model.resize_token_embeddings(new_vocab_size)
target_vocab_size = 32
model.decoder.embed_tokens = torch.nn.Embedding(target_vocab_size, model.config.d_model)
model.lm_head = torch.nn.Linear(model.config.d_model, target_vocab_size, bias=False)
torch.nn.init.normal_(model.lm_head.weight, mean=0.0, std=model.config.initializer_factor)
But when I try to load the checkpoint for prediction as follows:
config = T5Config.from_pretrained("./resultstest/checkpoint-100")
# model = T5ForConditionalGeneration(config)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path, config=config)
it fails like this:
size mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]).
So how can I fix this so that the decoder embedding and LM head size is only 32 to fit my target? I need to calculate the probability of each token using a softmax over 32 candidates rather than 59k.
I would really appreciate it if someone could help me fix this.
Best.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
this is my training code:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import torch
from transformers import AutoModelForSeq2SeqLM, Trainer, TrainingArguments, AutoTokenizer, T5Config, T5ForConditionalGeneration
from datasets import Dataset
import pandas as pd
from sklearn.model_selection import train_test_split
from tokenizer import SourceFuzzyTokenizer as CustomTokenizer
vocab_file = "./datasets/tokenizer/source_vocab.json"
data_file_fuzzy = "./datasets/test_data/testfuzzy.txt"
data_file_gt = "./datasets/test_data/testgt.txt"
max_length = 50
batch_size = 50
num_epochs = 100
learning_rate = 5e-4
new_vocab_size = 59744
output_dir = "./resultstest"
with open(data_file_fuzzy, "r") as fuzzy_file, open(data_file_gt, "r") as gt_file:
fuzzy_seqs = fuzzy_file.read().splitlines()
gt_seqs = gt_file.read().splitlines()
assert len(fuzzy_seqs) == len(gt_seqs), "fuzzy_seqs.txt and gt_seqs.txt do not MATCH!"
data = {"input": fuzzy_seqs, "target": gt_seqs}
df = pd.DataFrame(data)
train_df, eval_df = train_test_split(df, test_size=0.1, random_state=42)
train_dataset = Dataset.from_pandas(train_df)
eval_dataset = Dataset.from_pandas(eval_df)
tokenizer = CustomTokenizer(vocab_file)
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
model.resize_token_embeddings(new_vocab_size)
target_vocab_size = 32
model.decoder.embed_tokens = torch.nn.Embedding(target_vocab_size, model.config.d_model)
model.lm_head = torch.nn.Linear(model.config.d_model, target_vocab_size, bias=False)
torch.nn.init.normal_(model.lm_head.weight, mean=0.0, std=model.config.initializer_factor)
model.config.vocab_size = 59744
model.config.decoder_vocab_size = target_vocab_size
model.lm_head.weight.data.normal_(mean=0.0, std=model.config.initializer_factor)
def preprocess_data(examples):
inputs = [tokenizer.tokenize(seq) for seq in examples["input"]]
targets = [tokenizer.tokenize(seq) for seq in examples["target"]]
input_ids = [tokenizer.convert_tokens_to_ids(tokens)[:max_length] for tokens in inputs]
target_ids = [tokenizer.convert_tokens_to_ids(tokens)[:max_length] for tokens in targets]
pad_id = tokenizer.vocab.get("[PAD]", 0)
input_ids = [seq + [pad_id] * (max_length - len(seq)) for seq in input_ids]
target_ids = [seq + [pad_id] * (max_length - len(seq)) for seq in target_ids]
attention_mask = [[1 if token != pad_id else 0 for token in seq] for seq in input_ids]
return {
"input_ids": input_ids,"attention_mask": attention_mask,
"labels": target_ids,
}
train_dataset = train_dataset.map(preprocess_data, batched=True)
eval_dataset = eval_dataset.map(preprocess_data, batched=True)
training_args = TrainingArguments(
output_dir=output_dir,
eval_strategy="steps",
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=num_epochs,
weight_decay=0.01,
save_strategy="steps",
save_total_limit=2,
logging_dir="./logs",
logging_steps=10,
evaluation_strategy="epoch",
save_steps = 50,
# load_best_model_at_end=True,
dataloader_num_workers = 1,
fp16=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
```
### Expected behavior
I expect the LM head and decoder embedding layer to have my target vocabulary size of 32, rather than computing logits of size 59744. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34266 |
TITLE
Fixes for Modular Converter on Windows
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
This PR fixes some issues with Modular Converter on Windows.
1. Relative path regex. `rf"(src{os.sep}transformers{os.sep}.*|examples{os.sep}.*)"` results in `(src\transformers\.*|examples\.*)`; due to `\t` and `\.` this regex fails. We instead replace `\` in the result of `os.path.abspath` and use `(src/transformers/.*|examples/.*)` as the regex.
~~2. Relative path in auto generated message. On Windows this generates as `This file was automatically generated from src\transformers\models\florence2\modular_florence2.py.` This could potentially cause issues elsewhere, and it's better to be standardized, so when `os.sep` == `\` we replace `os.sep` with `/`.~~
3. `open` encoding. On Windows the default encoding is `cp1252` which doesn't work with `🚨`, so we use `encoding=utf-8` instead for all `open`. This particular fix is in effect in many other places in the codebase.
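A minimal illustration of fixes 1 and 3 (not the literal diff, just the idea; the file name in the example is hypothetical):
```python
import os
import re

# Fix 1: normalize the absolute path to forward slashes first, then match a
# platform-independent pattern instead of interpolating os.sep into the regex.
RELATIVE_PATH_RE = re.compile(r"(src/transformers/.*|examples/.*)")

def relative_source_path(file_path):
    normalized = os.path.abspath(file_path).replace(os.sep, "/")
    match = RELATIVE_PATH_RE.search(normalized)
    return match.group(1) if match else None

print(relative_source_path("src/transformers/models/llama/modeling_llama.py"))

# Fix 3: every open() gets an explicit encoding so characters like 🚨 survive
# on Windows, whose default codec is cp1252.
with open("example_output.py", "w", encoding="utf-8") as f:
    f.write("# 🚨 this would break with the cp1252 default\n")
```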
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| [
25,
45
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"HACKTOBERFEST-ACCEPTED",
"Modular"
] |
https://api.github.com/repos/huggingface/transformers/issues/35689 |
TITLE
use_liger_kernel requires much more GPU memory during evaluation than training
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
I found that enabling use_liger_kernel=True does reduce GPU memory during training. However, during evaluation it requires much more GPU memory than training, even though per_device_eval_batch_size is smaller than per_device_train_batch_size and the sequence lengths are similar.

Architecture/Model:
AutoModelForSequenceClassification - Qwen/Qwen2.5-1.5B (it happens on all qwen2.5 models, including 0.5b to 32b ones);
Specific Setting:
--per_device_train_batch_size 4 --gradient_accumulation_steps 4 --per_device_eval_batch_size 1 --bf16 --max_length 4096 --gradient_checkpointing True --group_by_length True --use_liger_kernel True --attn_implementation flash_attention_2
Not strictly necessary, but things you might want to know:
I use DeepSpeed ZeRO (1/2/3), yet I found this issue also exists when running with DDP.
People who could help:
@muellerzr @SunMarc
### Who can help?
@muellerzr @SunMarc
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce:
Simply follow this trl [reward modeling example](https://github.com/huggingface/trl/blob/main/examples/scripts/reward_modeling.py).
### Expected behavior
I expect enabling use_liger_kernel=True does not occupy much more GPU memory for evaluation than training. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33400 |
TITLE
Encounter error when loading checkpoint generated by latest accelerate>=0.34.0
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.43.4
- Platform: Linux-4.9.151-015.ali3000.alios7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.34.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Train a model (e.g. Qwen1.5-0.5B) using FSDP with accelerate>=0.34.0. Once you have the checkpoint, load the model using AutoModelForCausalLM.from_pretrained. You will then get the following error.

### Expected behavior
When I downgraded the accelerate version to 0.33.0, the issue was resolved. Further investigation revealed that it was these two lines of code in the `get_state_dict` function of the `Accelerator` in `accelerate.py` that caused the problem.

Transformers may need to adapt to this change in accelerate. | [
23,
64,
17,
80
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
"Core: Modeling",
"bug",
"PyTorch FSDP",
"Accelerate"
] |
https://api.github.com/repos/huggingface/transformers/issues/33490 |
TITLE
Pixtral error: The following `model_kwargs` are not used by the model
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Transformers version https://github.com/huggingface/transformers/commit/8bd2b1e8c23234cd607ca8d63f53c1edfea27462
### Who can help?
@ArthurZucker @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running this code
```
from transformers import LlavaForConditionalGeneration, AutoProcessor
from PIL import Image
model_id = "hf-internal-testing/pixtral-12b"
model = LlavaForConditionalGeneration.from_pretrained(model_id, low_cpu_mem_usage=True, load_in_8bit=True)
processor = AutoProcessor.from_pretrained(model_id)
IMG_URLS = [
"https://picsum.photos/id/237/400/300",
"https://picsum.photos/id/231/200/300",
"https://picsum.photos/id/27/500/500",
"https://picsum.photos/id/17/150/600",
]
PROMPT = "<s>[INST]Describe the images.\n[IMG][IMG][IMG][IMG][/INST]"
inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=500)
ouptut = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
EXPECTED_GENERATION = """
Describe the images.
Sure, let's break down each image description:
1. **Image 1:**
- **Description:** A black dog with a glossy coat is sitting on a wooden floor. The dog has a focused expression and is looking directly at the camera.
- **Details:** The wooden floor has a rustic appearance with visible wood grain patterns. The dog's eyes are a striking color, possibly brown or amber, which contrasts with its black fur.
2. **Image 2:**
- **Description:** A scenic view of a mountainous landscape with a winding road cutting through it. The road is surrounded by lush green vegetation and leads to a distant valley.
- **Details:** The mountains are rugged with steep slopes, and the sky is clear, indicating good weather. The winding road adds a sense of depth and perspective to the image.
3. **Image 3:**
- **Description:** A beach scene with waves crashing against the shore. There are several people in the water and on the beach, enjoying the waves and the sunset.
- **Details:** The waves are powerful, creating a dynamic and lively atmosphere. The sky is painted with hues of orange and pink from the setting sun, adding a warm glow to the scene.
4. **Image 4:**
- **Description:** A garden path leading to a large tree with a bench underneath it. The path is bordered by well-maintained grass and flowers.
- **Details:** The path is made of small stones or gravel, and the tree provides a shaded area with the bench invitingly placed beneath it. The surrounding area is lush and green, suggesting a well-kept garden.
Each image captures a different scene, from a close-up of a dog to expansive natural landscapes, showcasing various elements of nature and human interaction with it.
"""
```
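For reference, the only workaround I have found so far (a band-aid, not a fix) is to drop the key the processor adds before calling `generate`:
```python
# Workaround: remove the kwarg that LlavaForConditionalGeneration does not accept.
inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to("cuda")
inputs.pop("token_type_ids", None)  # assumes this is the only offending key
generate_ids = model.generate(**inputs, max_new_tokens=500)
```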
### Expected behavior
To not crash with
```
File "/mnt/688LD58782D4FA20/XComp/CogVLM2/basic_demo/hf.py", line 17, in <module>
generate_ids = model.generate(**inputs, max_new_tokens=500)
File "/mnt/688LD58782D4FA20/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/mnt/688LD58782D4FA20/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1811, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/mnt/688LD58782D4FA20/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1215, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34462 |
TITLE
tokenizer's `apply_chat_template` doesn't preserve trailing newlines
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.0
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.12.7
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0+cu124 (True)
### Who can help?
@Rocketknight1 @itazap @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The current behaviour of `apply_chat_template` is that it removes the last trailing newline of the chat template, even when the template ends with more than one newline.
```python3
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.chat_template = """<|im_start|>user
hello<|im_end|>
<|im_start|>assistant
"""
print(repr(tokenizer.chat_template))
# '<|im_start|>user\nhello<|im_end|>\n<|im_start|>assistant\n'
print(repr(tokenizer.apply_chat_template(None, tokenize=False)))
# '<|im_start|>user\nhello<|im_end|>\n<|im_start|>assistant'
tokenizer.chat_template = """<|im_start|>user
hello<|im_end|>
<|im_start|>assistant


"""
print(repr(tokenizer.chat_template))
# '<|im_start|>user\nhello<|im_end|>\n<|im_start|>assistant\n\n\n'
print(repr(tokenizer.apply_chat_template(None, tokenize=False)))
# '<|im_start|>user\nhello<|im_end|>\n<|im_start|>assistant\n\n'
```
Not sure if this is the intended behaviour but I didn't find any discussion/document that mentions this. If this was indeed intended, I would like to learn why.
From my experience, there are models that are quite sensitive to this last newline token. Besides, most users would be unaware of this behaviour, which might cause unexpected outcomes.
I have traced the code and found that this is due to jinja2's default configuration. It can be fixed by simply setting `keep_trailing_newline=True` in the code below. Once you confirm that this is a bug, I can make a PR if needed.
https://github.com/huggingface/transformers/blob/1d063793318b20654ebb850f48f43e0a247ab7bb/src/transformers/utils/chat_template_utils.py#L422-L424
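Concretely, the change I have in mind is just this (a sketch; the other constructor arguments would stay exactly as they are today):
```python
from jinja2.sandbox import ImmutableSandboxedEnvironment

jinja_env = ImmutableSandboxedEnvironment(
    trim_blocks=True,
    lstrip_blocks=True,
    keep_trailing_newline=True,  # the only addition: preserve the template's final newline(s)
)
```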
### Expected behavior
- `apply_chat_template` should preserve the trailing newline(s) of the `chat_template`
- If there is any modification to the template then it should be documented. | [
64,
52
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Chat Template"
] |
https://api.github.com/repos/huggingface/transformers/issues/34985 |
TITLE
Bug for logger.warning_once
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
https://github.com/huggingface/transformers/blob/5523e38b553ff6c46b04d2376870fcd842feeecc/src/transformers/generation/configuration_utils.py#L765
The `UserWarning` passed as an extra positional argument at this line cannot be handled by `logging`; here is a simple test:
```
>>> from transformers.generation.configuration_utils import logger
>>> logger.warning_once('error', UserWarning)
--- Logging error ---
Traceback (most recent call last):
File "/home/test/miniconda3/envs/test/lib/python3.11/logging/__init__.py", line 1110, in emit
msg = self.format(record)
^^^^^^^^^^^^^^^^^^^
File "/home/test/miniconda3/envs/test/lib/python3.11/logging/__init__.py", line 953, in format
return fmt.format(record)
^^^^^^^^^^^^^^^^^^
File "/home/testminiconda3/envs/test/lib/python3.11/logging/__init__.py", line 687, in format
record.message = record.getMessage()
^^^^^^^^^^^^^^^^^^^
File "/home/test/miniconda3/envs/test/lib/python3.11/logging/__init__.py", line 377, in getMessage
msg = msg % self.args
~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
File "<stdin>", line 1, in <module>
File "/home/test/miniconda3/envs/test/lib/python3.11/site-packages/transformers/utils/logging.py", line 328, in warning_once
self.warning(*args, **kwargs)
Message: 'error'
Arguments: (<class 'UserWarning'>,)
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35536 |
TITLE
Mask2FormerImageProcessor support overlapping features
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers version: 4.48.0.dev0
Python version: 3.13.1
OS: Linux (AWS CodeSpace) `Linux default 5.10.228-219.884.amzn2.x86_64 #1 SMP Wed Oct 23 17:17:00 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux`
Virtual environment: Conda
Output of `pip list`
```
Package Version
------------------------ -----------
aiohappyeyeballs 2.4.4
aiohttp 3.11.11
aiosignal 1.3.2
asttokens 3.0.0
attrs 24.3.0
Brotli 1.1.0
certifi 2024.12.14
cffi 1.17.1
charset-normalizer 3.4.1
colorama 0.4.6
comm 0.2.2
datasets 3.2.0
debugpy 1.8.11
decorator 5.1.1
dill 0.3.8
exceptiongroup 1.2.2
executing 2.1.0
filelock 3.16.1
frozenlist 1.5.0
fsspec 2024.9.0
h2 4.1.0
hpack 4.0.0
huggingface_hub 0.26.5
hyperframe 6.0.1
idna 3.10
importlib_metadata 8.5.0
ipykernel 6.29.5
ipython 8.31.0
jedi 0.19.2
Jinja2 3.1.5
jupyter_client 8.6.3
jupyter_core 5.7.2
MarkupSafe 3.0.2
matplotlib-inline 0.1.7
mpmath 1.3.0
multidict 6.1.0
multiprocess 0.70.16
nest_asyncio 1.6.0
networkx 3.4.2
numpy 2.2.1
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
packaging 24.2
pandas 2.2.3
parso 0.8.4
pexpect 4.9.0
pickleshare 0.7.5
pillow 11.1.0
pip 24.3.1
platformdirs 4.3.6
prompt_toolkit 3.0.48
propcache 0.2.1
psutil 6.1.1
ptyprocess 0.7.0
pure_eval 0.2.3
pyarrow 18.1.0
pycparser 2.22
Pygments 2.18.0
PySocks 1.7.1
python-dateutil 2.9.0.post0
pytz 2024.1
PyYAML 6.0.2
pyzmq 26.2.0
regex 2024.11.6
requests 2.32.3
safetensors 0.5.0
setuptools 75.7.0
six 1.17.0
stack_data 0.6.3
sympy 1.13.1
tokenizers 0.21.0
torch 2.5.1
tornado 6.4.2
tqdm 4.67.1
traitlets 5.14.3
transformers 4.48.0.dev0
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.3.0
wcwidth 0.2.13
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
zstandard 0.23.0
```
### Who can help?
@amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code below gives an error "ValueError: Unable to infer channel dimension format". Different permutations of ChannelDimension and location of `num_features` give the same or similar errors.
```
import numpy as np
from transformers.image_utils import ChannelDimension
from transformers import Mask2FormerImageProcessor # Assumes torchvision is installed
processor = Mask2FormerImageProcessor(do_rescale=False, do_resize=False, do_normalize=False)
num_classes = 2
num_features = 5
height, width = (16, 16)
images = [np.zeros((height, width, 3))]
segmentation_maps = [np.random.randint(0, num_classes, (height, width, num_features))]
batch = processor(images,
segmentation_maps=segmentation_maps,
return_tensors="pt",
input_data_format=ChannelDimension.LAST)
```
See https://stackoverflow.com/questions/79331752/does-the-huggingface-mask2formerimageprocessor-support-overlapping-features.
### Expected behavior
Processor supports overlapping masks without error. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35158 |
TITLE
Add ModernBERT to Transformers
COMMENTS
14
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
This PR will add ModernBERT to Transformers. | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35425 |
TITLE
DeepSeek V3 Support
COMMENTS
9
REACTIONS
+1: 6
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
#### Transformer model
DeepSeek V3 is a Transformer model that utilizes Mixture of Experts (similar to Qwen2 MoE) and Multi-head Latent Attention (MLA).

#### Multi-token Prediction
The model is able to predict multiple tokens sequentially at each step through the MTP modules. The first token is generated by the causal LM which feeds the output token into what I would describe as a "Transformer head" to generate additional tokens for the current step. DeepSeek notes in their release that *"MTP support is currently under active development within the community, and we welcome your contributions and feedback."* (i.e. code for this is not released).

### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Transformers Code: https://huggingface.co/deepseek-ai/DeepSeek-V3
GitHub Code (minimal implementation): https://github.com/deepseek-ai/DeepSeek-V3/tree/main/inference
Paper: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/34029 |
TITLE
Image processing for mllama is broken for Wx1 (i.e. height == 1) image sizes
COMMENTS
8
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
When an image size of 1x1 or Wx1 is passed, the normalize() method crashes with the following error:
```
File "/usr/local/lib/python3.12/dist-packages/transformers/models/mllama/image_processing_mllama.py", line 711, in preprocess
ERROR 10-07 06:31:28 engine.py:157] image = self.normalize(
ERROR 10-07 06:31:28 engine.py:157] ^^^^^^^^^^^^^^^
ERROR 10-07 06:31:28 engine.py:157] File "/usr/local/lib/python3.12/dist-packages/transformers/image_processing_utils.py", line 111, in normalize
ERROR 10-07 06:31:28 engine.py:157] return normalize(
ERROR 10-07 06:31:28 engine.py:157] ^^^^^^^^^^
ERROR 10-07 06:31:28 engine.py:157] File "/usr/local/lib/python3.12/dist-packages/transformers/image_transforms.py", line 392, in normalize
ERROR 10-07 06:31:28 engine.py:157] raise ValueError(f"mean must have {num_channels} elements if it is an iterable, got {len(mean)}")
ERROR 10-07 06:31:28 engine.py:157] ValueError: mean must have 1 elements if it is an iterable, got 3
```
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoImageProcessor
from PIL import Image
if __name__ == "__main__":
image_processor = AutoImageProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
data = Image.new("RGB", (1, 1))
data = image_processor.preprocess(data, return_tensors="pt").data
print(data)
```
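A possible workaround on the caller side (unverified; I have not checked whether the mllama processor threads this argument through every transform) is to state the channel layout explicitly so it never has to be inferred from a degenerate 1x1 image:
```python
from transformers.image_utils import ChannelDimension

# PIL "RGB" images convert to (height, width, 3) arrays, i.e. channels last.
data = image_processor.preprocess(
    data,
    return_tensors="pt",
    input_data_format=ChannelDimension.LAST,
).data
```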
### Expected behavior
It shouldn't crash | [
64,
62,
12,
65
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision",
"Multimodal",
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/34456 |
TITLE
Manually setting `device_map` causes a RuntimeError.
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
python version: 3.11.10
transformers version: 4.46.0
torch version: 2.4.0+cu118
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "my_workspace/llama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_path, padding_side="left")
device_map = {}
device_map['model.embed_tokens'] = 0
for layer_idx in range(20):
device_map[f'model.layers.{layer_idx}'] = 0
for layer_idx in range(20, 32):
device_map[f'model.layers.{layer_idx}'] = 1
device_map['lm_head.weight'] = 1
device_map['model.norm.weight'] = 1
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
device_map=device_map
)
print(model(**tokenizer("111",return_tensors="pt").to(0)).logits.shape)
```
With `transformers==4.46.0`, the code above results in:
> `RuntimeError`: Expected all tensors to be on the same device, but found at least two devices, CPU and CUDA:0! (when checking argument for mat2 in method wrapper_CUDA_bmm)
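In case it is related: the map above uses parameter-level keys (`lm_head.weight`, `model.norm.weight`), and I believe recent releases also have a model-level `rotary_emb` module that is not listed, so some weights may silently stay on CPU. A variant I would expect to be safer (an assumption on my part, not verified) uses module names for everything:
```python
# Module-level device map (hypothetical fix, not confirmed):
device_map = {"model.embed_tokens": 0}
for layer_idx in range(20):
    device_map[f"model.layers.{layer_idx}"] = 0
for layer_idx in range(20, 32):
    device_map[f"model.layers.{layer_idx}"] = 1
device_map["model.norm"] = 1        # module name, not "model.norm.weight"
device_map["model.rotary_emb"] = 1  # assumed: present as a top-level module in recent versions
device_map["lm_head"] = 1           # module name, not "lm_head.weight"
```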
### Expected behavior
This issue does not occur with lower versions of transformers. I tried `transformers==4.40.0`, and the code successfully outputs:
> torch.Size([1, 1, 128256])
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35003 |
TITLE
Replace all torch.FloatTensor by torch.Tensor
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
I think the `torch.FloatTensor` type hints should be replaced by `torch.Tensor` now. torch.FloatTensor is quite annoying as it triggers warnings most of the time and forces users to manually cast types to avoid them.
It seems that FloatTensor is not really used anymore in Torch; torch.FloatTensor and torch.cuda.FloatTensor are still available to ensure backward compatibility.
### Motivation
Fix typing warnings
### Your contribution
Replace every occurrence of torch.FloatTensor with torch.Tensor, including docstrings. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35027 |
TITLE
pip install "transformers[sentencepiece]" doesn't work!
COMMENTS
13
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
MacOS Sequoia 15.0.1, python 3.13.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I created a Python 3.13 virtual environment using conda on MacOS Sequoia 15.0.1.
2. I activated the environment with `conda activate my_python_venv`.
3. Given the instructions here, https://huggingface.co/learn/nlp-course/chapter0/1, I ran the command `pip install "transformers[sentencepiece]"` and this is the error I get.
```
Building wheels for collected packages: sentencepiece
Building wheel for sentencepiece (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [99 lines of output]
/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/dist.py:261: UserWarning: Unknown distribution option: 'test_suite'
warnings.warn(msg)
running bdist_wheel
running build
running build_py
creating build/lib.macosx-10.15-x86_64-cpython-313/sentencepiece
copying src/sentencepiece/__init__.py -> build/lib.macosx-10.15-x86_64-cpython-313/sentencepiece
copying src/sentencepiece/_version.py -> build/lib.macosx-10.15-x86_64-cpython-313/sentencepiece
copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.macosx-10.15-x86_64-cpython-313/sentencepiece
copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.macosx-10.15-x86_64-cpython-313/sentencepiece
running build_ext
Package sentencepiece was not found in the pkg-config search path.
Perhaps you should add the directory containing `sentencepiece.pc'
to the PKG_CONFIG_PATH environment variable
No package 'sentencepiece' found
./build_bundled.sh: line 21: cmake: command not found
./build_bundled.sh: line 22: nproc: command not found
./build_bundled.sh: line 22: cmake: command not found
Traceback (most recent call last):
File "<string>", line 2, in <module>
exec(compile('''
~~~~^^^^^^^^^^^^
# This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<31 lines>...
exec(compile(setup_py_code, filename, "exec"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
''' % ('/private/var/folders/pf/m0zg4n_x68b83v_2xzrpj1380000gn/T/pip-install-_7dg5qos/sentencepiece_2f9832006d114e8cb0a1723f23c3b92d/setup.py',), "<pip-setuptools-caller>", "exec"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/pf/m0zg4n_x68b83v_2xzrpj1380000gn/T/pip-install-_7dg5qos/sentencepiece_2f9832006d114e8cb0a1723f23c3b92d/setup.py", line 169, in <module>
setup(
~~~~~^
name='sentencepiece',
^^^^^^^^^^^^^^^^^^^^^
...<29 lines>...
test_suite='sentencepiece_test.suite',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/core.py", line 183, in setup
return run_commands(dist)
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
dist.run_commands()
~~~~~~~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
self.run_command(cmd)
~~~~~~~~~~~~~~~~^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/command/bdist_wheel.py", line 398, in run
self.run_command("build")
~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/dist.py", line 950, in run_command
super().run_command(command)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
cmd_obj.run()
~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/command/build_ext.py", line 98, in run
_build_ext.run(self)
~~~~~~~~~~~~~~^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/command/build_ext.py", line 476, in build_extensions
self._build_extensions_serial()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/site-packages/setuptools/_distutils/command/build_ext.py", line 502, in _build_extensions_serial
self.build_extension(ext)
~~~~~~~~~~~~~~~~~~~~^^^^^
File "/private/var/folders/pf/m0zg4n_x68b83v_2xzrpj1380000gn/T/pip-install-_7dg5qos/sentencepiece_2f9832006d114e8cb0a1723f23c3b92d/setup.py", line 87, in build_extension
subprocess.check_call(['./build_bundled.sh', __version__])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bill/anaconda3/envs/pyhugface/lib/python3.13/subprocess.py", line 419, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['./build_bundled.sh', '0.2.0']' returned non-zero exit status 127.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for sentencepiece
Running setup.py clean for sentencepiece
Failed to build sentencepiece
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (sentencepiece)
```
### Expected behavior
To be installed w/o issues. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35037 |
TITLE
LlamaTokenizer being recognized as a bool
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
When initializing LlamaTokenizer from the Transformers library, the tokenizer is being recognized as a bool. This issue persists across different environments and Python versions.
Steps to Reproduce:
Install the required libraries:
pip install transformers torch sentencepiece
Use the following script to initialize the tokenizer:
from transformers.models.llama import LlamaTokenizer
model_path = "C:/Users/spger/.llama/checkpoints/Llama3.1-70B"
try:
tokenizer = LlamaTokenizer.from_pretrained(model_path, use_fast=True, legacy=False)
print("Tokenizer initialized successfully.")
print("Tokenizer type:", type(tokenizer))
except Exception as e:
print("Error initializing tokenizer:", e)
Observed Output:
The tokenizer type is <class 'bool'> instead of the expected tokenizer class.
System Info:
transformers version: 4.46.3
Platform: Windows-10-10.0.26100-SP0
Python version: 3.11.9
Huggingface_hub version: 0.26.3
Safetensors version: 0.4.5
Accelerate version: not installed
Accelerate config: not found
PyTorch version (GPU?): 2.5.1+cpu (False)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Additional Details:
Other tokenizers like AutoTokenizer for GPT-2 and BERT initialize correctly.
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers.models.llama import LlamaTokenizer
model_path = "C:/Users/spger/.llama/checkpoints/Llama3.1-70B"
try:
tokenizer = LlamaTokenizer.from_pretrained(model_path, use_fast=True, legacy=False)
print("Tokenizer initialized successfully.")
print("Tokenizer type:", type(tokenizer))
except Exception as e:
print("Error initializing tokenizer:", e)
Steps to reproduce the behavior:
1. Install the required libraries:
```bash
pip install transformers torch sentencepiece
2. Run the provided script.
3. Observe that the tokenizer is of type bool.
### Expected behavior
I expect the `LlamaTokenizer` to be correctly initialized and recognized as a `LlamaTokenizer` object instead of a `bool`.
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34643 |
TITLE
Passing nn.Parameter values within the model architecture as deep copies.
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
python==3.9
transformers==4.41.2
linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm performing some parameter sharing operations. For example, I define a new attribute in the model's self_attn layer and assign it the value of a parameter (nn.Parameter) from one of the weights in self_attn, like this:
`self.test = self.o_proj.weight`
However, I noticed that when the model is loaded, this assignment effectively becomes a deep copy: `self.test is self.o_proj.weight` returns False, whereas it should be a shallow copy (the same object). But when assigning an object like nn.Linear instead of an nn.Parameter, it stays a shallow copy.
Interestingly, if the same assignment is performed after loading the model:
`model.model.layers[1].self_attn.test = model.model.layers[1].self_attn.k_proj.weight`
then it is a shallow copy.
Additionally, if a custom model architecture is defined:
```
import torch
import torch.nn as nn
class MyModule(nn.Linear):
def __init__(self, in_features, r):
nn.Linear.__init__(self, in_features, in_features)
meta_device = torch.device('meta')
self.weight = nn.parameter.Parameter(torch.randn(in_features, in_features, device = meta_device))
self.lora_A = nn.parameter.Parameter(self.weight.new_zeros((r, in_features), device = meta_device))
self.test = self.lora_A
print(self.test is self.lora_A)
def forward(self, x):
return x
model = MyModule(in_features=10, r=5)
```
At this point, the output is True.
May I ask why it's not possible to assign an nn.Parameter with the same memory address to a model attribute within the model? Is this a bug or a feature?
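For reference, this is the kind of check I am running (assuming the modified attention layer from above and an already-loaded model); it distinguishes Python identity from shared storage:
```python
# Identity vs. shared-storage check on the custom attribute.
attn = model.model.layers[1].self_attn
print(attn.test is attn.o_proj.weight)                        # False right after from_pretrained
print(attn.test.data_ptr() == attn.o_proj.weight.data_ptr())  # whether they share memory

# Re-assigning after loading restores identity (a shallow copy), as noted above:
attn.test = attn.o_proj.weight
print(attn.test is attn.o_proj.weight)                        # True
```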
### Expected behavior
Please address my question by explaining the reason for this phenomenon and clarifying whether it is a bug or a feature. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34744 |
TITLE
tokenizer.json modified after tokenizer.save_pretrained of OLMO models
COMMENTS
10
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.0
- Platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+rocm6.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: AMD Instinct MI250X/MI250
### Who can help?
@ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I load and then save the tokenizer of the OLMo models, the tokenizer.json files differ, particularly in the `merges` key.

The code to reproduce that is :
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")
tokenizer.save_pretrained("saved_tokenizer")
```
### Expected behavior
The original `tokenizer.json` and the saved `tokenizer.json` should be the same. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34990 |
TITLE
BLIP2 Model Fails with Version 4.46.3 (Shape Mismatch Error)
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 2080 Ti
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import requests
from PIL import Image
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor
# Load the model and processor for BLIP2 from HuggingFace
model_id = "Salesforce/blip2-opt-2.7b"
torch_dtype = torch.float16
load_in_8bit = False
model = Blip2ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch_dtype, load_in_8bit=load_in_8bit, device_map="cuda"
)
model = torch.compile(model)
model.eval()
processor = Blip2Processor.from_pretrained(model_id)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
inputs = processor(raw_image, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
generated_texts = [generated_text.strip() for generated_text in generated_texts]
print (generated_texts)
```
Error message:
```python
RuntimeError Traceback (most recent call last)
Cell In[2], line 26
23 raw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")
25 inputs = processor(raw_image, return_tensors=\"pt\").to(device)
---> 26 generated_ids = model.generate(**inputs)
27 generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
28 generated_texts = [generated_text.strip() for generated_text in generated_texts]
File ~/.cache/pypoetry/virtualenvs/aana-vIr3-B0u-py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File ~/.cache/pypoetry/virtualenvs/aana-vIr3-B0u-py3.10/lib/python3.10/site-packages/transformers/models/blip_2/modeling_blip_2.py:2316, in Blip2ForConditionalGeneration.generate(self, pixel_values, input_ids, attention_mask, interpolate_pos_encoding, **generate_kwargs)
2314 if getattr(self.config, \"image_token_index\", None) is not None:
2315 special_image_mask = (input_ids == self.config.image_token_index).unsqueeze(-1).expand_as(inputs_embeds)
-> 2316 inputs_embeds[special_image_mask] = language_model_inputs.flatten()
2317 else:
2318 logger.warning_once(
2319 \"Expanding inputs for image tokens in BLIP-2 should be done in processing. \"
2320 \"Please follow instruction here (https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042) to update your BLIP-2 model. \"
2321 \"Using processors without these attributes in the config is deprecated and will throw an error in v4.47.\"
2322 )
RuntimeError: shape mismatch: value tensor of shape [81920] cannot be broadcast to indexing result of shape [0]"
```
### Expected behavior
The BLIP2 model does not work with `transformers==4.46.3`
The model should not fail. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33776 |
TITLE
[IDEFICS2] Fix past_seen_tokens unset bug
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Generation in Idefics2 was broken by #33541, as `past_seen_tokens` wouldn't be set if `past_key_values` was a `Cache` instance
Fixes #33763 #33752
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/35876 |
TITLE
Support Shared Cache
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 1
eyes: 0
BODY
### Feature request
A new cache class that supports sharing the same or part of the KV cache between different layers to improve cache efficiency.
### Motivation
Many studies have shown that attention weights of different attention layers are often similar, and `KV cache sharing` causes only a small quality degradation while improving throughput by **2~3x tokens/sec**.
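To make the idea concrete, here is a rough sketch built on the existing `DynamicCache.update(key_states, value_states, layer_idx, cache_kwargs)` interface (illustrative only, not a working proposal):
```python
from transformers.cache_utils import DynamicCache

class LayerSharedCache(DynamicCache):
    """Layers in the same group read and write a single KV slot.

    `layer_to_owner` maps each layer index to the layer whose cache it reuses,
    e.g. {0: 0, 1: 0, 2: 2, 3: 2} shares the cache pairwise.
    """

    def __init__(self, layer_to_owner):
        super().__init__()
        self.layer_to_owner = layer_to_owner

    def update(self, key_states, value_states, layer_idx, cache_kwargs=None):
        owner = self.layer_to_owner.get(layer_idx, layer_idx)
        if owner == layer_idx:
            # Owner layers store their KV exactly like a plain DynamicCache.
            return super().update(key_states, value_states, layer_idx, cache_kwargs)
        # Sharing layers reuse the owner's KV (already updated earlier in the
        # same forward pass) and store nothing of their own.
        return self.key_cache[owner], self.value_cache[owner]
```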
### Your contribution
I would try to submit a PR. | [
76,
33
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Cache"
] |
https://api.github.com/repos/huggingface/transformers/issues/35494 |
TITLE
Loss.. should be specified as either training loss or validation loss
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Doesn't apply.. using runpod
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As you fine-tune a new model, it shows a loss curve and numbers as it runs. Is that training loss or validation loss? It should be labelled.
If it is training loss, how can I view the validation loss? I set 'val size', so it should exist somewhere.
### Expected behavior
Should be labeled more clearly | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35609 |
TITLE
Trainer sets `state.best_model_checkpoint` even when it doesn't save there; leads to training crash
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.16
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@muellerz
@SunMarc
@seanswyi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`pytest tests/test_model_card.py::test_model_card` from `setfit` (link: https://github.com/huggingface/setfit/blob/main/tests/test_model_card.py#L15)
Apologies for not having a convenient ready-to-go `transformers`-only script. I'm afraid I don't have time for that right now.
In essence, the flow is as follows:
1. I start the trainer, with lots of evaluations (e.g. `eval_steps=1`, `eval_strategy="steps"`)
2. When evaluating, the new `_determine_best_metric` is called: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3070-L3075
3. With `args.metric_for_best_model` set, we only set the `best_metric` in the first evaluation: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3182
4. On the 2nd eval, we start comparing against the first. If the model is better, we now also set `best_model_checkpoint`: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3184-L3192 *but* we're not sure if we're even going to be saving at this step! If `args.save_strategy != SaveStrategy.BEST:`, then it's very possible that we're not saving.
5. The eventual crash occurs when "deleting old checkpoints", because there is no file at `best_model_checkpoint`: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L2680-L2685
### Expected behavior
We should not be setting `best_model_checkpoint` unless we're confident that 1) `state.should_save` is True or 2) `args.save_strategy == "best"`. Then we'll avoid this crash.
- Tom Aarsen | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35443 |
TITLE
Compatibility Issue with Python 3.13
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Python 3.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
pip install transformers
### Expected behavior
Issue: Compatibility Issue with Python 3.13
Description:
When attempting to install safetensors as a dependency for transformers on Python 3.13, the installation fails during the metadata generation step. The issue seems to be related to the maturin build tool and its handling of the pyproject.toml. This problem does not occur on Python 3.12.
Steps to Reproduce:
1. Install Python 3.13.1 (official build).
2. Run the command:
```bash
pip install safetensors
```
3. Observe the following error:
```plaintext
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
💥 maturin failed
Caused by: `project.version` field is required in pyproject.toml unless it is present in the `project.dynamic` list
Error running maturin: Command '['maturin', 'pep517', 'write-dist-info', '--metadata-directory', ...]' returned non-zero exit status 1.
```
Environment:
Python Version: 3.13.1
OS: Windows 10 (64-bit)
Pip Version: 23.x
Rust Version: [Provide output of rustc --version]
Temporary Solution:
Switching to Python 3.12 resolves the issue. Dependencies (safetensors and transformers) install successfully on Python 3.12.
Expected Behavior:
The package should install successfully on Python 3.13.
Additional Context:
This issue might be due to the pyproject.toml configuration or an incompatibility in the maturin build tool when used with Python 3.13.
Suggestions:
Investigate the pyproject.toml for compatibility with Python 3.13.
Ensure dependencies like maturin and Rust are compatible with the latest Python version. | [
27,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33579 |
TITLE
NER workflow improvement
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
1. [run_ner.py in examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py) requires data made of pre-tokenized words, like the [conll2003 dataset](https://huggingface.co/datasets/eriktks/conll2003), where texts have already been split into words. However, the [TokenClassificationPipeline](https://huggingface.co/docs/transformers/v4.44.0/en/main_classes/pipelines#transformers.TokenClassificationPipeline) does no such pre-tokenization and simply aggregates pre-entities by checking [is_subword](https://github.com/huggingface/transformers/blob/984bc11b0882ff1e5b34ba717ea357e069ceced9/src/transformers/pipelines/token_classification.py#L396) with a whitespace separator.
This may more or less work for English words, but it fails when it is inconsistent with the pre-tokenization used in the training data (which produces multiple `B-` tags across the sub-tokens of one pre-token).
**My solution:** In the `TokenClassificationPipeline`, use the pre-tokenizer to mark `is_subword` for all non-first sub-tokens to aggregate them in `gather_pre_entities`.
2. `run_ner.py` shall support chunks during training. For example, XLM-Roberta-Base supports 512 as the maximum sequence length, and we might suffer with memory concern to limit the sequence length during training for large models. In this case, it might be better to split long training data into chunks (with certain overlapping) to fully utilize the information instead of truncation.
3. For `label_all_tokens` argument in `run_ner.py`, it means we currently have 2 options for sub-token labelling: `first` and `all`. It might be better to provide a `last` option to better support for Decoder-only models. See https://github.com/huggingface/transformers/issues/15389#issuecomment-1387305312
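As a minimal sketch of point 1 (illustrative only — `mark_subwords` is a hypothetical helper, not an existing transformers function), the pipeline could derive `is_subword` from the fast tokenizer's word alignment instead of a whitespace heuristic:
```python
from transformers import AutoTokenizer


def mark_subwords(text, tokenizer):
    # A token is a subword whenever it continues the same word as the previous
    # token according to the tokenizer's own word alignment (word_ids), rather
    # than according to whitespace in the raw text.
    encoding = tokenizer(text)
    tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])
    is_subword = []
    previous_word_id = None
    for word_id in encoding.word_ids():
        is_subword.append(word_id is not None and word_id == previous_word_id)
        previous_word_id = word_id
    return tokens, is_subword


tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
tokens, is_subword = mark_subwords("Pre-tokenization matters for NER", tokenizer)
print(list(zip(tokens, is_subword)))
```
In `gather_pre_entities`, these flags would replace the whitespace-based check, so a `B-` prediction on a first sub-token is not repeated for its continuations.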
### Motivation
Included in Feature request Section.
### Your contribution
I currently implement some parts of them in some private confidential codes during work. I would help if repo maintainers think it worth doing. | [
51,
48,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"Ex: Named Entity Recognition",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/36136 |
TITLE
Bump transformers from 4.38.0 to 4.48.0 in /examples/research_projects/vqgan-clip
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [transformers](https://github.com/huggingface/transformers) from 4.38.0 to 4.48.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.48.0: ModernBERT, Aria, TimmWrapper, ColPali, Falcon3, Bamba, VitPose, DinoV2 w/ Registers, Emu3, Cohere v2, TextNet, DiffLlama, PixtralLarge, Moonshine</h2>
<h2>New models</h2>
<h3>ModernBERT</h3>
<p>The ModernBert model was proposed in <a href="https://arxiv.org/abs/2412.13663">Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference</a> by Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard and Iacopo Poli.</p>
<p>It is a refresh of the traditional encoder architecture, as used in previous models such as <a href="https://huggingface.co/docs/transformers/en/model_doc/bert">BERT</a> and <a href="https://huggingface.co/docs/transformers/en/model_doc/roberta">RoBERTa</a>.</p>
<p>It builds on BERT and implements many modern architectural improvements which have been developed since its original release, such as:</p>
<ul>
<li><a href="https://huggingface.co/blog/designing-positional-encoding">Rotary Positional Embeddings</a> to support sequences of up to 8192 tokens.</li>
<li><a href="https://arxiv.org/abs/2208.08124">Unpadding</a> to ensure no compute is wasted on padding tokens, speeding up processing time for batches with mixed-length sequences.</li>
<li><a href="https://arxiv.org/abs/2002.05202">GeGLU</a> Replacing the original MLP layers with GeGLU layers, shown to improve performance.</li>
<li><a href="https://arxiv.org/abs/2004.05150v2">Alternating Attention</a> where most attention layers employ a sliding window of 128 tokens, with Global Attention only used every 3 layers.</li>
<li><a href="https://github.com/Dao-AILab/flash-attention">Flash Attention</a> to speed up processing.</li>
<li>A model designed following recent <a href="https://arxiv.org/abs/2401.14489">The Case for Co-Designing Model Architectures with Hardware</a>, ensuring maximum efficiency across inference GPUs.</li>
<li>Modern training data scales (2 trillion tokens) and mixtures (including code and math data)</li>
</ul>
<p><img src="https://github.com/user-attachments/assets/4256c0b1-9b40-4d71-ac42-fc94827d5e9d" alt="image" /></p>
<ul>
<li>Add ModernBERT to Transformers by <a href="https://github.com/warner-benjamin"><code>@warner-benjamin</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/35158">#35158</a></li>
</ul>
<h3>Aria</h3>
<p>The Aria model was proposed in <a href="https://huggingface.co/papers/2410.05993">Aria: An Open Multimodal Native Mixture-of-Experts Model</a> by Li et al. from the Rhymes.AI team.</p>
<p>Aria is an open multimodal-native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. It has a Mixture-of-Experts architecture, with respectively 3.9B and 3.5B activated parameters per visual token and text token.</p>
<ul>
<li>Add Aria by <a href="https://github.com/aymeric-roucher"><code>@aymeric-roucher</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/34157">#34157</a>
<img src="https://github.com/user-attachments/assets/ef41fcc9-2c5f-4a75-ab1a-438f73d3d7e2" alt="image" /></li>
</ul>
<h3>TimmWrapper</h3>
<p>We add a <code>TimmWrapper</code> set of classes such that timm models can be loaded in as transformer models into the library.</p>
<p>Here's a general usage example:</p>
<pre lang="py"><code>import torch
from urllib.request import urlopen
from PIL import Image
from transformers import AutoConfig, AutoModelForImageClassification, AutoImageProcessor
<p>checkpoint = "timm/resnet50.a1_in1k"
img = Image.open(urlopen(
'<a href="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png">https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png</a>'
))</p>
<p>image_processor = AutoImageProcessor.from_pretrained(checkpoint)
</code></pre></p>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/6bc0fbcfa7acb6ac4937e7456a76c2f7975fefec"><code>6bc0fbc</code></a> [WIP] Emu3: add model (<a href="https://redirect.github.com/huggingface/transformers/issues/33770">#33770</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/59e28c30fa3a91213f569bccef73f082afa8c656"><code>59e28c3</code></a> Fix flex_attention in training mode (<a href="https://redirect.github.com/huggingface/transformers/issues/35605">#35605</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/7cf6230e25078742b21907ae49d1542747606457"><code>7cf6230</code></a> push a fix for now</li>
<li><a href="https://github.com/huggingface/transformers/commit/d6f446ffa79811d35484d445bc5c7932e8a536d6"><code>d6f446f</code></a> when filtering we can't use the convert script as we removed them</li>
<li><a href="https://github.com/huggingface/transformers/commit/8ce1e9578af6151e4192d59c345e2ad86ee789d4"><code>8ce1e95</code></a> [test-all]</li>
<li><a href="https://github.com/huggingface/transformers/commit/af2d7caff393cf8881396b73d92d0595b6a3b2ae"><code>af2d7ca</code></a> Add Moonshine (<a href="https://redirect.github.com/huggingface/transformers/issues/34784">#34784</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/42b8e7916b6b6dff5cb77252286db1aa07b7b41e"><code>42b8e79</code></a> ModernBert: reuse GemmaRotaryEmbedding via modular + Integration tests (<a href="https://redirect.github.com/huggingface/transformers/issues/35459">#35459</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/e39c9f7a78fa2960a7045e8fc5a2d96b5d7eebf1"><code>e39c9f7</code></a> v4.48-release</li>
<li><a href="https://github.com/huggingface/transformers/commit/8de7b1ba8d126a6fc9f9bcc3173a71b46f0c3601"><code>8de7b1b</code></a> Add flex_attn to diffllama (<a href="https://redirect.github.com/huggingface/transformers/issues/35601">#35601</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/1e3ddcb2d0380d0d909a44edc217dff68956ec5e"><code>1e3ddcb</code></a> ModernBERT bug fixes (<a href="https://redirect.github.com/huggingface/transformers/issues/35404">#35404</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.38.0...v4.48.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/34178 |
TITLE
problem with rag
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.44.2
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.5 (cpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tested the RAG function with the same dataset 'my_knowledge.csv'; everything seemed to work, but the generated response is not understandable.
Example: Q: What does Moses' rod turn into ?
A: Powderoo Two stockp upgr Noct vic soy perBleublished *** Offic molecular memElf nu Ore
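For context, here is a minimal sketch of the usual custom-knowledge RAG wiring this report appears to exercise — the checkpoint name is the stock `facebook/rag-sequence-nq` and the paths are placeholders I am assuming, not taken from the report:
```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# passages_path / index_path point to the dataset and FAISS index built from my_knowledge.csv.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq",
    index_name="custom",
    passages_path="path/to/my_knowledge_dataset",           # placeholder
    index_path="path/to/my_knowledge_dataset_index.faiss",  # placeholder
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("What does Moses' rod turn into?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```
One common cause of gibberish like the answer above is a mismatch between the generator checkpoint and the tokenizer or index used at inference time, so it is worth double-checking that all three come from the same checkpoint family.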
### Expected behavior
The behavior that I expected is for the RAG model to return coherent responses. | [
58,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"rag",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33770 |
TITLE
Emu3: add model
COMMENTS
8
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
As per title. The code can generate text in single-batch scenarios, but the generated text doesn't match the input image yet. For batched generation, it seems the original implementation doesn't support it either, mostly because image features from the processor are returned with different shapes (a smart resize that preserves as much of the original image size as possible). We can try padding similar to llava-next, but I am not sure it will just work; I'll contact the authors.
TODO:
- [x] Batched generation
- [x] Upload chat template and change the image-placeholder token from `extra-0` to smth like `<image>`
- [x] Match the orig implementation on logit level
- [x] Tests, many more tests
- [x] Check out image generation and see how we can enable interleaved image+text generation as in Chameleon. Maybe not natively with transformers, but we can provide scripts with external libraries for structured generation -> not possible because text-generation and image-generation are two different checkpoints with different weights
```python
from PIL import Image
import torch
import requests
from transformers import (
Emu3Config,
Emu3ForConditionalGeneration,
Emu3ImageProcessor,
Emu3Processor,
)
output_dir = "/raid/raushan/emu3"
processor = Emu3Processor.from_pretrained(output_dir)
model = Emu3ForConditionalGeneration.from_pretrained(output_dir, torch_dtype="bfloat16", device_map="auto")
processor.tokenizer.padding_side = "left"
text = "You are a helpful assistant. USER: <|extra_0|>Please describe the image. ASSISTANT:"
image = Image.open("/raid/raushan/image.png")
image2 = Image.open(requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw)
inputs = processor(
text=[text, text],
images=[image2, image],
return_tensors="pt",
padding=True,
)
inputs = inputs.to(device="cuda:0", dtype=torch.bfloat16)
out = model.generate(**inputs, max_new_tokens=100)
text_out = processor.batch_decode(out, skip_special_tokens=True)
print(text_out)
```
And for image generation:
```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
import torch
import requests
from transformers import (
Emu3Config,
Emu3ForConditionalGeneration,
Emu3ImageProcessor,
Emu3Processor,
)
output_dir = "/raid/raushan/emu3-gen"
processor = Emu3Processor.from_pretrained(output_dir)
model = Emu3ForConditionalGeneration.from_pretrained(output_dir, torch_dtype="bfloat16", device_map="auto", ) # attn_implementation="flash_attention_2",
inputs = processor(
text=["a portrait of young girl. masterpiece, film grained, best quality.", "a dog running under the rain"],
padding=True,
return_tensors="pt",
return_for_image_generation=True,
)
inputs = inputs.to(device="cuda:0", dtype=torch.bfloat16)
image_sizes = inputs.pop("image_sizes")
HEIGHT, WIDTH = image_sizes[0]
VISUAL_TOKENS = model.model.vocabulary_mapping.image_tokens
def prefix_allowed_tokens_fn(batch_id, input_ids):
    height, width = HEIGHT, WIDTH
    visual_tokens = VISUAL_TOKENS
    image_token_id = processor.tokenizer.encode("<|image token|>", return_tensors="pt")[0].to(model.device)  # torch.tensor([processor.tokenizer.image_token_id], device=model.device)
    eoi_token_id = processor.tokenizer.encode("<|image end|>", return_tensors="pt")[0]  # torch.tensor([processor.tokenizer.eoi_token_id], device=model.device)
    eos_token_id = processor.tokenizer.encode("<|extra_204|>", return_tensors="pt")[0]  # torch.tensor([processor.tokenizer.eos_token_id], device=model.device)
    pad_token_id = processor.tokenizer.encode("<|endoftext|>", return_tensors="pt")[0]  # torch.tensor([processor.tokenizer.pad_token_id], device=model.device)
    eol_token_id = processor.tokenizer.encode("<|extra_200|>", return_tensors="pt")[0]
    eof_token_id = processor.tokenizer.encode("<|extra_201|>", return_tensors="pt")[0]
    position = torch.nonzero(input_ids == image_token_id, as_tuple=True)[0][0]
    offset = input_ids.shape[0] - position
    if offset % (width + 1) == 0:
        return (eol_token_id, )
    elif offset == (width + 1) * height + 1:
        return (eof_token_id, )
    elif offset == (width + 1) * height + 2:
        return (eoi_token_id, )
    elif offset == (width + 1) * height + 3:
        return (eos_token_id, )
    elif offset > (width + 1) * height + 3:
        return (pad_token_id, )
    else:
        return visual_tokens
out = model.generate(
**inputs,
max_new_tokens=50_000,
prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
do_sample=True,
top_k=2048,
return_dict_in_generate=True,
)
print(out.sequences.shape, inputs.input_ids.shape)
image = model.model.decode_image_tokens(out.sequences[:, inputs.input_ids.shape[1]: ], height=HEIGHT, width=WIDTH)
images = processor.postprocess(list(image.float()), return_tensors="PIL.Image.Image") # internally we convert to np but it's not supported in bf16 precision
for i, image in enumerate(images['pixel_values']):
    image.save(f"result_{i}.png")
```
| [
77,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/35134 |
TITLE
[i18n-<languageCode>] Translating Benchmarks.md to Chinese
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Hello, I will translate Benchmarks.md to Chinese.
doc url: https://github.com/huggingface/transformers/blob/main/docs/source/en/benchmarks.md
| [
1
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/34019 |
TITLE
Add Loss Functions for QFormer Training in BLIP-2 Model (ITC, ITM, and ITG)
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### Feature request
I propose adding a loss calculation for QFormer training in the BLIP-2 model. Implementing this feature would allow fine-tuning the QFormer and language models for image-text retrieval and captioning tasks, which is crucial for practical applications.
### Motivation
I want to train the BLIP-2 model using the transformers library. In particular, loss functions for Image-Text Contrastive (ITC), Image-Text Matching (ITM), and Image-grounded Text Generation (ITG) are not included, which requires users to implement the losses manually.
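For illustration, here is a minimal sketch of what an ITC-style objective looks like on pooled image (query) and text embeddings — a simplification under my own assumptions (one pooled vector per modality, fixed temperature), not the BLIP-2 reference implementation:
```python
import torch
import torch.nn.functional as F


def itc_loss(image_embeds, text_embeds, temperature=0.07):
    # Normalize so the dot product is a cosine similarity; both inputs are (batch, dim).
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = image_embeds @ text_embeds.t() / temperature
    # Matching image-text pairs sit on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```
ITM would add a binary matched/unmatched head on fused features, and ITG a standard causal language-modeling loss on the caption tokens.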
### Your contribution
I would like to contribute to this open-source project by implementing the loss functions. | [
76,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/33680 |
TITLE
save_pretrained is changing the name of module when saving
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.14
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
class XxxSparseLayer(nn.Module):
    def __init__(self, config: XxxConfig):
        super().__init__()
        self.num_experts = config.num_experts
        self.top_k = config.expert_top_k
        self.attention = XxxAttention(config)
        self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.router = nn.Linear(config.hidden_size, self.num_experts)
        self.experts = nn.ModuleList([XxxFeedForward(config) for _ in range(self.num_experts)])
        self._register_load_state_dict_pre_hook(self.copy_ffn_params)

    def forward(
        self,
        hidden_states: Tensor,
        attention_mask: torch.FloatTensor | None = None,
        head_mask: torch.FloatTensor | None = None,
        output_attentions: bool = False,
    ) -> Tuple[Tensor, ...]:
        self_attention_outputs = self.attention(
            hidden_states,
            attention_mask,
            head_mask,
            output_attentions=output_attentions,
        )
        attention_output = self_attention_outputs[0]
        layer_output = self.layer_norm(attention_output)
        router_logits = self.router(layer_output[:, 0, :])
        router_probs = router_logits.softmax(dim=-1)
        router_weights, router_idx = router_probs.topk(self.top_k, dim=-1)
        router_weights /= router_weights.sum(dim=-1, keepdim=True)
        expert_outputs = torch.stack([self.experts[i](layer_output) for i in range(self.num_experts)], dim=1)
        solicited_outputs = expert_outputs[torch.arange(router_idx.size(0)).unsqueeze(1), router_idx]
        weighted_outputs = (solicited_outputs * router_weights.unsqueeze(-1).unsqueeze(-1)).sum(1)
        layer_output = weighted_outputs + layer_output
        return (layer_output,) + self_attention_outputs[1:]

    def copy_ffn_params(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs):
        # Copy the dense FFN weights of a pretrained checkpoint into every expert.
        ffn_prefix = prefix + "ffn."
        ffn_states = {k: v for k, v in state_dict.items() if k.startswith(ffn_prefix)}
        ffn_states = {k: v for k, v in ffn_states.items() if "layer_norm" not in k}
        for k, v in ffn_states.items():
            for i in range(self.num_experts):
                state_dict[k.replace("ffn.", f"experts.{i}.")] = v.clone()
            del state_dict[k]


class XxxFeedForward(nn.Module):
    def __init__(self, config: XxxConfig):
        super().__init__()
        self.in_proj = nn.Linear(config.hidden_size, config.intermediate_size)
        if isinstance(config.hidden_act, str):
            self.act = ACT2FN[config.hidden_act]
        else:
            self.act = config.hidden_act
        self.out_proj = nn.Linear(config.intermediate_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout)

    def forward(self, hidden_states: Tensor) -> Tensor:
        hidden_states = self.in_proj(hidden_states)
        hidden_states = self.act(hidden_states)
        hidden_states = self.out_proj(hidden_states)
        hidden_states = self.dropout(hidden_states)
        return hidden_states
```
```
>>> model.save_pretrained('xxx')
>>> model.state_dict() == torch.load('xxx/pytorch_model.bin')
False
```
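A hypothetical way to make the renaming visible (not part of the original report; it assumes `model` is the instance saved in the snippet above) is to diff the key sets:
```python
import torch

# Compare in-memory parameter names with what save_pretrained actually wrote.
saved = torch.load("xxx/pytorch_model.bin", map_location="cpu")
live = model.state_dict()
print("only in saved file:", sorted(set(saved) - set(live))[:10])
print("only in live model:", sorted(set(live) - set(saved))[:10])
```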
### Expected behavior
I have the above layer definition.
Since it's an MoE module, all experts share one `layer_norm`, so the layer norm of the FFN lives in the layer, not in the FFN module.
But when using `save_pretrained`, transformers moves the weights to `ffn` automatically, causing the subsequent load to fail. | [
23,
64,
9
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Modeling",
"bug",
"Mixture of Experts"
] |