url (string, lengths 66–66) | text (string, lengths 141–41.9k) | num_labels (sequence, lengths 1–8) | arr_labels (sequence, lengths 82–82) | labels (sequence, lengths 1–8) |
---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/35437 |
TITLE
Missing weights are not properly initialized when using model.from_pretrained()
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.1
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig

class Config(PretrainedConfig):
    def __init__(self, use_new=False, **kwargs):
        self.use_new = use_new
        super().__init__(**kwargs)

class Model(PreTrainedModel):
    config_class = Config

    def __init__(self, config: Config):
        super().__init__(config)
        self.use_new = config.use_new
        self.proj = nn.Linear(10, 10, bias=False)
        if self.use_new:
            self.new_proj = nn.Linear(20, 20, bias=False)
        self.post_init()

    def post_init(self):
        nn.init.constant_(self.proj.weight, 0)
        if self.use_new:
            nn.init.constant_(self.new_proj.weight, 0)

if __name__ == "__main__":
    # 1. Pretrain a base model
    config = Config(use_new=False)
    original_model = Model(config)
    print(original_model.proj.weight.data.max())  # 0
    # 2. Save the pretrained weights
    original_model.save_pretrained("./original_model/")
    # 3. Load the pretrained weights, and finetune the model with a newly added layer
    new_model1 = Model.from_pretrained("./original_model/", use_new=True)
    print(new_model1.proj.weight.data.max())  # 0
    print(new_model1.new_proj.weight.data.max())  # nan - BUG: This is unexpected!
    # 4. A trick to work around this problem: pass _fast_init=False into from_pretrained()
    new_model2 = Model.from_pretrained("./original_model/", use_new=True, _fast_init=False)
    print(new_model2.proj.weight.data.max())  # 0
    print(new_model2.new_proj.weight.data.max())  # 0
```
### Expected behavior
**The missing weights during `from_pretrained()` are not initialized according to `self.post_init()`.**
In this case, I want to fine-tune a pretrained model and add some new parameters (`self.new_proj.weight`), which is a very common scenario.
The missing weights (`self.new_proj.weight`) are expected to be initialized to 0, but their values are never set during `from_pretrained()` and end up uninitialized.
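For reference, a minimal sketch (not from the original report) of how the same initialization could be routed through the standard `_init_weights` hook, which `from_pretrained()` is expected to call for weights missing from the checkpoint:
```
# Hedged sketch: route the custom initialization through _init_weights instead
# of overriding post_init(); from_pretrained() applies _init_weights to
# submodules whose weights are missing from the checkpoint.
import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig

class SketchConfig(PretrainedConfig):
    def __init__(self, use_new=False, **kwargs):
        self.use_new = use_new
        super().__init__(**kwargs)

class SketchModel(PreTrainedModel):
    config_class = SketchConfig

    def __init__(self, config: SketchConfig):
        super().__init__(config)
        self.proj = nn.Linear(10, 10, bias=False)
        if config.use_new:
            self.new_proj = nn.Linear(20, 20, bias=False)
        self.post_init()  # PreTrainedModel.post_init() dispatches to _init_weights

    def _init_weights(self, module):
        # Called per submodule at construction time and, with fast init, for
        # checkpoint-missing weights during from_pretrained().
        if isinstance(module, nn.Linear):
            nn.init.constant_(module.weight, 0)
```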
A workaround is to pass `_fast_init=False` to `from_pretrained()`, but I noticed that this feature is deprecated. Therefore, there should be a more appropriate solution to this problem. | [
23,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Modeling",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35309 |
TITLE
Torch is installed but still getting this error "RuntimeError: At least one of TensorFlow 2.0 or PyTorch should be installed. "
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0
- Platform: Linux-5.11.0-1021-azure-x86_64-with-glibc2.31
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (False)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I created a new environment in conda & activated it with python=3.10
2. Installed jupyter, transformers, tensorflow, torch
3. Tried running this code in jupyter
```
from transformers import pipeline
sentiment_pipeline = pipeline("sentiment-analysis")
```
4. Got the error "RuntimeError: At least one of TensorFlow 2.0 or PyTorch should be installed."
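Since the system info above shows both torch and TensorFlow present, one plausible (unconfirmed) cause is that the Jupyter kernel runs from a different environment than the conda env where they were installed. A hedged diagnostic sketch to run inside the notebook:
```
# Diagnostic sketch (assumption: the kernel may not use the new conda env).
import sys
print(sys.executable)  # should point into the new conda environment

import importlib.util
for pkg in ("torch", "tensorflow", "transformers"):
    spec = importlib.util.find_spec(pkg)
    print(pkg, "->", spec.origin if spec else "NOT FOUND")

from transformers.utils import is_tf_available, is_torch_available
print("torch visible to transformers:", is_torch_available())
print("tf visible to transformers:", is_tf_available())
```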
### Expected behavior
Hi @sanchit-gandhi @Rocketknight1 & HF Team,
I am not able to run transformer pipelines due to the error mentioned in the reproduction steps.
Please advise, thank you! | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35166 |
TITLE
Mimi model gives different outputs when using batch encode vs single encode
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.6.20-aufs-1-x86_64-with-glibc2.36
- Python version: 3.11.2
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A10
### Who can help?
@ylacombe @eustlb
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torchaudio
from transformers import MimiModel, AutoFeatureExtractor
model = MimiModel.from_pretrained("kyutai/mimi", num_quantizers=8)
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
model.cuda()
# load some audio file (placeholder path; mono audio expected at feature_extractor.sampling_rate)
audio, sr = torchaudio.load("audio.wav")
inputs = feature_extractor(raw_audio=[audio.squeeze(0).numpy(), audio.squeeze(0).numpy()], sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
inputs = {key:value.cuda() for key,value in inputs.items()}
out_batch = model.encode(**inputs).audio_codes
inputs = feature_extractor(raw_audio=audio.squeeze(0).numpy(), sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
inputs = {key:value.cuda() for key,value in inputs.items()}
out = model.encode(**inputs).audio_codes
(out_batch[0] == out[0]).all() # prints tensor(False, device='cuda:0')
```
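As a follow-up check (not part of the original report), one can quantify how far the batched codes drift from the single-sample codes instead of only testing exact equality:
```
# Hedged follow-up sketch: measure the mismatch rather than strict equality.
mismatch = (out_batch[0] != out[0]).float().mean()
print(f"fraction of differing codebook entries: {mismatch.item():.4f}")
```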
### Expected behavior
The outputs `out_batch[0]` and `out[0]` should be the same. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34722 |
TITLE
CI fails on few test_training_gradient_checkpointing tests for LLAMA
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
With:
* https://github.com/huggingface/transformers/commit/33eef992503689ba1af98090e26d3e98865b2a9b
* https://github.com/huggingface/accelerate/commit/c0552c9012a9bae7f125e1df89cf9ee0b0d250fd
* https://github.com/pytorch/pytorch/commit/1a8752bc7df93bd32c4b99523ec5890c24da0664
On:
* Nvidia A10
* Intel Max Series
The following 3 tests are failing:
* `tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_training_gradient_checkpointing`
* `tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_training_gradient_checkpointing_use_reentrant`
* `tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_training_gradient_checkpointing_use_reentrant_false`
See the log of the failure below. This is a regression introduced by this commit: https://github.com/huggingface/transformers/commit/19d58d31f19049e8280ccb62a5b098d89909bf5a, PR:
* https://github.com/huggingface/transformers/pull/33703
```
$ git bisect log
git bisect start
# bad: [33eef992503689ba1af98090e26d3e98865b2a9b] Agents: Small fixes in streaming to gradio + add tests (#34549)
git bisect bad 33eef992503689ba1af98090e26d3e98865b2a9b
# good: [984bc11b0882ff1e5b34ba717ea357e069ceced9] Revert "fixes to properly shard FSDP across cpu and meta for cpu_effcient_loading for prequantized 4bit (#32276)" (#32477)
git bisect good 984bc11b0882ff1e5b34ba717ea357e069ceced9
# good: [80b90e7b2f7466ffb1d9036986e0699880d34284] Add codestral mamba2 (#32080)
git bisect good 80b90e7b2f7466ffb1d9036986e0699880d34284
# bad: [1ec7a70fef4158ab1ed660cba5126c8cde08c7e8] fix trainer tr_loss add error (#33651)
git bisect bad 1ec7a70fef4158ab1ed660cba5126c8cde08c7e8
# good: [7ed9789e210d8eca797fc21b9c783b1ce718ecb5] Fix: `num_logits_to_keep` in composite models (#33168)
git bisect good 7ed9789e210d8eca797fc21b9c783b1ce718ecb5
# good: [bcf8946f0acb578c534b1d33d534450d1fc88507] Fix number of patch check for different vision feature select strategy (#32494)
git bisect good bcf8946f0acb578c534b1d33d534450d1fc88507
# good: [dc8b6eaeeeb59dd3089b478cc09b577f2c62a297] Fix contrastive search to correctly handle input with padding (#33507)
git bisect good dc8b6eaeeeb59dd3089b478cc09b577f2c62a297
# good: [fa0bb0fe762c757203565a940c6e59a8d27537c4] Fix ByteLevel alphabet missing when Sequence pretokenizer is used (#33556)
git bisect good fa0bb0fe762c757203565a940c6e59a8d27537c4
# bad: [19d58d31f19049e8280ccb62a5b098d89909bf5a] Add MLLama (#33703)
git bisect bad 19d58d31f19049e8280ccb62a5b098d89909bf5a
# good: [7e638ef2b8650aaa3e3a8e575bb63af262a43d95] fix code quality after merge
git bisect good 7e638ef2b8650aaa3e3a8e575bb63af262a43d95
# good: [61e98cb957862d679c4a338319a386da197b8073] Add SDPA support for M2M100 (#33309)
git bisect good 61e98cb957862d679c4a338319a386da197b8073
# good: [ade9e0fe41a414c6a24a03a79c15798db609a6c9] Corrected max number for bf16 in transformer/docs (#33658)
git bisect good ade9e0fe41a414c6a24a03a79c15798db609a6c9
# good: [94f18cf23c128055a984ffbe9c57df133c1f6cc7] Add OmDet-Turbo (#31843)
git bisect good 94f18cf23c128055a984ffbe9c57df133c1f6cc7
# first bad commit: [19d58d31f19049e8280ccb62a5b098d89909bf5a] Add MLLama (#33703)
```
Log for one of the failures (others are similar):
```
$ python3 -m pytest --pspec tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_training_gradient_checkpointing
============================================================================================ test session starts ============================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0
rootdir: /home/dvrogozh/git/huggingface/transformers
configfile: pyproject.toml
plugins: hypothesis-6.111.1, subtests-0.13.1, rich-0.1.1, dash-2.17.1, xdist-3.6.1, pspec-0.0.4, timeout-2.3.1
collected 1 item
tests/models/llama/test_modeling_llama.py
Llama Model Test
» training gradient checkpointing
[100%]
================================================================================================== ERRORS ===================================================================================================
_________________________________________________________________ ERROR at teardown of LlamaModelTest.test_training_gradient_checkpointing __________________________________________________________________
self = <tests.models.llama.test_modeling_llama.LlamaModelTest testMethod=test_training_gradient_checkpointing>
def test_training_gradient_checkpointing(self):
# Scenario - 1 default behaviour
> self.check_training_gradient_checkpointing()
tests/test_modeling_common.py:899:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tests.models.llama.test_modeling_llama.LlamaModelTest testMethod=test_training_gradient_checkpointing>, gradient_checkpointing_kwargs = None
def check_training_gradient_checkpointing(self, gradient_checkpointing_kwargs=None):
if not self.model_tester.is_training:
self.skipTest(reason="ModelTester is not configured to run training tests")
for model_class in self.all_model_classes:
with self.subTest(model_class.__name__):
if (
model_class.__name__
in [
*get_values(MODEL_MAPPING_NAMES),
*get_values(MODEL_FOR_BACKBONE_MAPPING_NAMES),
]
or not model_class.supports_gradient_checkpointing
):
self.skipTest(reason=f"`supports_gradient_checkpointing` is False for {model_class.__name__}.")
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.use_cache = False
config.return_dict = True
model = model_class(config)
model.to(torch_device)
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs=gradient_checkpointing_kwargs)
model.train()
# unfreeze additional layers
for p in model.parameters():
p.requires_grad_(True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
> loss = model(**inputs).loss
E AttributeError: 'BaseModelOutputWithPast' object has no attribute 'loss'
tests/test_modeling_common.py:868: AttributeError
============================================================================================= warnings summary ==============================================================================================
../../../../../usr/lib/python3.10/distutils/command/build_ext.py:13
/usr/lib/python3.10/distutils/command/build_ext.py:13: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead
from distutils.sysconfig import customize_compiler, get_python_version
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================================================================================== short test summary info ==========================================================================================
ERROR tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_training_gradient_checkpointing - AttributeError: 'BaseModelOutputWithPast' object has no attribute 'loss'
================================================================================== 1 skipped, 1 warning, 1 error in 3.91s ===================================================================================
(pytorch.cuda) dvrogozh@cg-cyp-03:~/git/huggingface/transformers$
```
CC: @amyeroberts, @ArthurZucker | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35180 |
TITLE
Adding Mamba2ForTokenClassification to Mamba2
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
I’ve noticed that many newly added models do not include a `ForTokenClassification` implementation. Is this due to fundamental challenges in implementation (though I don’t perceive any major obstacles—perhaps I’ve overlooked something), or is it simply a matter of development priorities and time constraints?
### Motivation
I am currently testing a prototype based on the Mamba series models, which requires token classification outputs.
### Your contribution
If it’s merely a time issue preventing the implementation of `ForTokenClassification` in `transformers`, I’d be more than willing to contribute by adding this feature for Mamba/Mamba2. If time allows, I’d also be happy to extend the support to other models.
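For concreteness, here is a rough, hypothetical sketch of what such a head could look like (this class does not exist in `transformers`; it simply mirrors the usual `XForTokenClassification` pattern):
```
# Hypothetical sketch only; not an existing transformers class.
import torch.nn as nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.mamba2.modeling_mamba2 import Mamba2Model, Mamba2PreTrainedModel

class Mamba2ForTokenClassification(Mamba2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.backbone = Mamba2Model(config)
        self.dropout = nn.Dropout(0.1)  # placeholder rate
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, labels=None, **kwargs):
        outputs = self.backbone(input_ids=input_ids, **kwargs)
        hidden_states = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
        logits = self.classifier(self.dropout(hidden_states))
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return TokenClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states)
```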
From my understanding, replicating the approach used in `LlamaForTokenClassification` should suffice to implement a token classification model for most models. Any advice or guidance would be highly appreciated! | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33286 |
TITLE
Bump cryptography from 42.0.0 to 43.0.1 in /examples/research_projects/decision_transformer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.0 to 43.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>43.0.1 - 2024-09-03</p>
<pre><code>
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.3.2.
<p>.. _v43-0-0:</p>
<p>43.0.0 - 2024-07-20<br />
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for OpenSSL less than 1.1.1e has been
removed. Users on older version of OpenSSL will need to upgrade.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for LibreSSL < 3.8.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.3.1.</li>
<li>Updated the minimum supported Rust version (MSRV) to 1.65.0, from 1.63.0.</li>
<li>:func:<code>~cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key</code>
now enforces a minimum RSA key size of 1024-bit. Note that 1024-bit is still
considered insecure, users should generally use a key size of 2048-bits.</li>
<li>:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.serialize_certificates</code>
now emits ASN.1 that more closely follows the recommendations in :rfc:<code>2315</code>.</li>
<li>Added new :doc:<code>/hazmat/decrepit/index</code> module which contains outdated and
insecure cryptographic primitives.
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.CAST5</code>,
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.SEED</code>,
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.IDEA</code>, and
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.Blowfish</code>, which were
deprecated in 37.0.0, have been added to this module. They will be removed
from the <code>cipher</code> module in 45.0.0.</li>
<li>Moved :class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.TripleDES</code>
and :class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.ARC4</code> into
:doc:<code>/hazmat/decrepit/index</code> and deprecated them in the <code>cipher</code> module.
They will be removed from the <code>cipher</code> module in 48.0.0.</li>
<li>Added support for deterministic
:class:<code>~cryptography.hazmat.primitives.asymmetric.ec.ECDSA</code> (:rfc:<code>6979</code>)</li>
<li>Added support for client certificate verification to the
:mod:<code>X.509 path validation <cryptography.x509.verification></code> APIs in the
form of :class:<code>~cryptography.x509.verification.ClientVerifier</code>,
:class:<code>~cryptography.x509.verification.VerifiedClient</code>, and
<code>PolicyBuilder</code>
:meth:<code>~cryptography.x509.verification.PolicyBuilder.build_client_verifier</code>.</li>
<li>Added Certificate
:attr:<code>~cryptography.x509.Certificate.public_key_algorithm_oid</code>
and Certificate Signing Request
:attr:<code>~cryptography.x509.CertificateSigningRequest.public_key_algorithm_oid</code>
to determine the :class:<code>~cryptography.hazmat._oid.PublicKeyAlgorithmOID</code>
Object Identifier of the public key found inside the certificate.</li>
<li>Added :attr:<code>~cryptography.x509.InvalidityDate.invalidity_date_utc</code>, a
timezone-aware alternative to the naïve <code>datetime</code> attribute
:attr:<code>~cryptography.x509.InvalidityDate.invalidity_date</code>.</li>
<li>Added support for parsing empty DN string in</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/a7733878281ca261c4ada04022fc706ba5de9d8b"><code>a773387</code></a> bump for 43.0.1 (<a href="https://redirect.github.com/pyca/cryptography/issues/11533">#11533</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/0393fef5758e55e3c7b3a3e6e5b77821c594a87f"><code>0393fef</code></a> Backport setuptools version ban (<a href="https://redirect.github.com/pyca/cryptography/issues/11526">#11526</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/6687bab97aef31d6ee6cc94ecc87a972137b5d4a"><code>6687bab</code></a> Bump openssl from 0.10.65 to 0.10.66 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11320">#11320</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/11324">#11324</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ebf14f2edc8536f36797979cb0e075e766d978c5"><code>ebf14f2</code></a> bump for 43.0.0 and update changelog (<a href="https://redirect.github.com/pyca/cryptography/issues/11311">#11311</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/42788a0353e0ca0d922b6b8b9bde77cbb1c65984"><code>42788a0</code></a> Fix exchange with keys that had Q automatically computed (<a href="https://redirect.github.com/pyca/cryptography/issues/11309">#11309</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/2dbdfb8f3913cb9cef08218fcd48a9b4eaa8b57d"><code>2dbdfb8</code></a> don't assign unused name (<a href="https://redirect.github.com/pyca/cryptography/issues/11310">#11310</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ccc66e6cdf92f4c29012f86f44ad183161eccaad"><code>ccc66e6</code></a> Bump openssl from 0.10.64 to 0.10.65 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11308">#11308</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/4310c8727b50fa5f713a0e863ee3defc0c831921"><code>4310c87</code></a> Bump sphinxcontrib-qthelp from 1.0.7 to 1.0.8 (<a href="https://redirect.github.com/pyca/cryptography/issues/11307">#11307</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f66a9c4b4fe9b87825872fef7a36c319b823f322"><code>f66a9c4</code></a> Bump sphinxcontrib-htmlhelp from 2.0.5 to 2.0.6 (<a href="https://redirect.github.com/pyca/cryptography/issues/11306">#11306</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/a8fcf18ee0bb0570bd4c9041cf387dc7a9c1968a"><code>a8fcf18</code></a> Bump openssl-sys from 0.9.102 to 0.9.103 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11305">#11305</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/42.0.0...43.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/34795 |
TITLE
Errors when using transformers with torch<2.5
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Transformers latest from `main`, pytorch 2.3, cpu or gpu.
Needs to have #34184
### Who can help?
@ArthurZucker @muellerzr
### Information
- [x] My own modified scripts
### Reproduction
1. `pip install torch==2.3`
2. `pip install git+https://github.com/huggingface/transformers.git`
3. `python -c "import transformers.models.vision_encoder_decoder.modeling_vision_encoder_decoder"`
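A hedged sketch of the kind of explicit runtime guard that would surface this earlier (illustrative only; treating 2.5 as the minimum is an assumption taken from this report, not a declared requirement of the library):
```
# Illustrative guard only; not the actual check used by transformers.
from packaging import version
import torch

MIN_TORCH = version.parse("2.5")
if version.parse(torch.__version__.split("+")[0]) < MIN_TORCH:
    raise ImportError(
        f"torch>={MIN_TORCH} is required by this code path, found {torch.__version__}"
    )
```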
### Expected behavior
This does not error out with torch 2.5, but it does with torch 2.3 or 2.4. If these versions are not supported, we should get an error message when installing transformers. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34111 |
TITLE
[Whisper] Fix whisper integration tests
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 1
eyes: 0
BODY
# What does this PR do?
This PR fixes multiple errors in Whisper integration tests and expected outputs.
To compute the correct expected outputs, it is necessary to work from a [very simple fork of the original OpenAI Whisper implementation](https://github.com/eustlb/whisper/tree/transcribe-from-mel-spectrogram). Indeed, the extraction of the mel spectrogram in `WhisperFeatureExtractor` diverges slightly from OpenAI's: we pad the audio array to 30sec / to longest with 0.0s and then extract our spectrogram through batched STFT, while OpenAI's implementation adds 30sec of 0.0s to the audio array (and does not pad _to_ 30sec). This way, we are sure that the model inputs for our and OpenAI's implementations are exactly the same.
With this, we can use the following protocol to compute the expected outputs for the tests:
1. extract mel inputs using the test's implementation (so using the WhisperFeatureExtractor)
2. infer OpenAI's model through the above explained fork directly from the mel input
> [!IMPORTANT]
> Code to reproduce the outputs for each of the verified tests can be found [here](https://github.com/eustlb/reproduce-whisper-expected-outputs).
# Edit: some more details about why we work from a whisper fork
In Transformers, we have two inputs possibilities for Whisper:
1. mel spectrogram with 3000 frames → audio is first padded **to** 30sec with 0.0s and then we extract the mel
2. mel spectrogram with more than 3000 frames → no need for padding
---
**case 1**
With an audio <=30sec, the difference between our implementation and OAI's is that we first pad to 30sec with 0.0s, then extract features, and this becomes the input to the model's forward, while OAI pads the audio by adding 30sec of 0.0s, extracts features, slices the exact number of frames, and then pads the mel spectrogram to 3000 frames with 0.0s.
To understand better, for an audio of 10secs:
**Transformers**: audio + 20sec of 0.0s → mel spectrogram of shape [80, 3000] where **[2000:] frames are close but not exactly 0.0s**
**OAI**: audio + 30sec of 0.0s → mel spectrogram of shape [80, 4000] → sliced to the duration of the audio (so until frame 1000) and then padded with 0.0s: **[2000:] frames are exactly 0.0s.**
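To make the case-1 difference concrete, a small illustrative sketch (the 10-sec random array and the printed frame range are assumptions for illustration, not taken from the test suite):
```
# Illustrative sketch of the padding difference for a ~10 sec input.
import numpy as np
from transformers import WhisperFeatureExtractor

fe = WhisperFeatureExtractor()
audio = np.random.randn(10 * 16000).astype(np.float32)

# Transformers pads the *waveform* to 30 sec with 0.0s before the STFT, so the
# frames past the audio correspond to the mel of silence: close to, but
# generally not exactly, 0.0.
mel = fe(audio, sampling_rate=16000, return_tensors="np").input_features[0]
print(mel.shape)                    # (80, 3000)
print(np.abs(mel[:, 2000:]).max())  # small, but typically not exactly 0.0

# OpenAI instead appends 30 sec of 0.0s to the waveform, slices the spectrogram
# to the audio length, and pads the *spectrogram* itself with 0.0s, so those
# frames are exactly 0.0.
```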
---
**case 2**
No differences (other than numerical difference due to STFT implementation).
---
About the implementation in the [simple whisper fork](https://github.com/eustlb/whisper/tree/transcribe-from-mel-spectrogram):
We just take the mel spectrogram and concat with 3000 frames of 0.0s. This emulates the 30sec of 0.0s added originally.
For case 1, the duration considered by OAI is 30sec (see [this line](https://github.com/openai/whisper/blob/25639fc17ddc013d56c594bfbf7644f2185fad84/whisper/transcribe.py#L134C9-L134C46)) and therefore the audio segment that will be given to the forward is the exact mel input that was given.
For case 2, likewise the duration considered is the one of the given mel input.
By inferring OAI directly on the mel spectrogram (so either exactly 3000 frames, or more than 3000 frames), we ensure that each forward pass of OAI Whisper and of ours gets the exact same mel spectrogram. This ensures that the expected results we have in the tests are indeed the results that should be expected given the same input mel with the OAI implementation.
> [!NOTE]
> For tests that require batched inference, which is not supported by the OAI implementation, I simply ran the inference sequentially to get the outputs
# TODO
## Tests to be verified and eventually corrected
✅ for a correct test
❌ for an incorrect one
- [x] **test_tiny_timestamp_generation** ❌
- [x] **test_large_timestamp_generation** ❌
- [x] **test_default_multilingual_transcription_short_form** ✅
- [x] **test_default_multilingual_transcription_long_form** ❌ → here we want the model to return input token ids and eos
- [x] **test_whisper_shortform_single_batch_prev_cond** ❌
- [x] **test_whisper_shortform_multi_batch_hard_prev_cond** ❌
- [x] **test_whisper_longform_single_batch** ✅
- [x] **test_whisper_longform_prompt_ids** ✅ → partially verified, `"all-segments"` has no equivalent in OAI implem
- [x] **test_whisper_longform_multi_batch** ✅
- [x] **test_whisper_longform_single_batch_prev_cond** ✅
- [x] **test_whisper_longform_multi_batch_prev_cond** ✅
- [x] **test_whisper_longform_multi_batch_hard** ❌
- [x] **test_whisper_longform_multi_batch_hard_prev_cond** ❌
- [x] **test_whisper_longform_single_batch_beam** ❌
- [x] **test_tiny_generation** ❌
- [x] **test_tiny_en_generation** ❌
- [x] **test_tiny_en_batched_generation** ❌
- [x] **test_tiny_longform_timestamps_generation** ❌
- [x] **test_large_generation** ❌
- [x] **test_large_batched_generation** ❌
- [x] **test_large_generation_multilingual** ❌
- [x] **test_small_longform_timestamps_generation** ✅
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/35574 |
TITLE
`Nan` logits when performing inference using ModernBERT
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers == 4.48.0.dev0
torch == 2.2.2
### Description
I have finetuned a ModernBERT model for a multi-label task, and when performing batched inference I am getting `NaN` values for the logits of every sample in a batch except one. So, in each batch, all logits except those of one sample are `NaN`.
### Who can help?
@tomaarsen @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply calling `model(**batch)` results in this, while with `bs=1` no issue exists.
Code:
```
import torch
from tqdm import tqdm

# `model`, `val_loader`, and `device` are defined earlier in the training setup.
model.eval()
labels, predictions, confidences = [], [], []
with torch.no_grad():
    for batch in tqdm(val_loader):
        labels.extend(batch['labels'].cpu().numpy())
        batch.pop('labels')
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        probs = torch.sigmoid(outputs.logits)
        preds = (probs > 0.5).int()
        confs = probs
        predictions.extend(preds.cpu().numpy())
        confidences.extend(confs.cpu().numpy())
```
<img width="1124" alt="image" src="https://github.com/user-attachments/assets/fe10bfdb-4921-4733-bae4-7f87d68581f4" />
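As a debugging step (not part of the original report), one way to check whether the `NaN`s track particular rows of the batch is to compare a batched forward pass against per-sample passes, reusing the reporter's `model`, `val_loader`, and `device`:
```
# Hedged debugging sketch; `model`, `val_loader`, and `device` come from the
# reporter's setup above.
import torch

batch = {k: v.to(device) for k, v in next(iter(val_loader)).items() if k != "labels"}
with torch.no_grad():
    batched_logits = model(**batch).logits
    for i in range(batched_logits.size(0)):
        single = {k: v[i : i + 1] for k, v in batch.items()}
        single_logits = model(**single).logits
        print(i,
              "nan in batched:", torch.isnan(batched_logits[i]).any().item(),
              "nan in single:", torch.isnan(single_logits).any().item())
```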
### Expected behavior
Expecting finite values instead of `NaN` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34982 |
TITLE
Add the Bamba Model
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 2
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR merges the `BambaModel`, which is a hybrid [mamba2](https://github.com/state-spaces/mamba) architecture with SwiGLU. The checkpoints are jointly trained by IBM, Princeton, and UIUC.
The implementation is based off [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) and the mamba2 implementation ported over to HF for the codestral model.
cc: @ani300, @raghukiran1224
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
77,
73,
79
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
1,
0,
0
] | [
"New model",
"run-slow",
"State space models"
] |
https://api.github.com/repos/huggingface/transformers/issues/33706 |
TITLE
`processing_mllama.py` has a bug?
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
v4.45.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In https://github.com/huggingface/transformers/blob/f2c388e3f946862f657acc1e21b272ec946fc66c/src/transformers/models/mllama/processing_mllama.py#L305
should the code be the following?
```
not all(batch_img_per_prompt == b for batch_img_per_prompt, b in zip(n_images_in_text, n_images_in_images))
```
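For illustration (the counts below are made up), the suggested element-wise check compares the number of images referenced in each prompt with the number of images actually passed for that prompt:
```
# Hypothetical counts for a batch of two prompts.
n_images_in_text = [2, 1]    # images referenced in each prompt
n_images_in_images = [2, 1]  # images passed for each prompt
mismatch = not all(a == b for a, b in zip(n_images_in_text, n_images_in_images))
print(mismatch)  # False -> counts agree, so no error should be raised
```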
### Expected behavior
The above code failed when you passed in a batch of examples | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33517 |
TITLE
test_pt_flax_equivalence and test_encoder_decoder_model_standalone fail running on device (cuda or xpu)
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
With:
* https://github.com/pytorch/pytorch/commit/0aa41eb52f7e577cf88e0f1b0adb34167a9ae94b
* https://github.com/huggingface/accelerate/commit/4b4c036933f7c50fe3a7027b0380fcec53c6975e
* https://github.com/huggingface/transformers/commit/98adf24883b007c2a7fb17bab1c01b1614673433
Issue seen on NVidia A10 and Intel PVC.
`test_pt_flax_equivalence` and `test_encoder_decoder_model_standalone` are failing across multiple models due to missing model or tensor placement on the device. Specifically, there are 3 types of issues:
1. Model was not moved to device (`model.to(cuda)` is missing)
2. Input was not moved to device (`input.to(cuda)` is missing)
3. `torch.Tensor.numpy()` called with tensor being on device (should first be moved to CPU according to https://pytorch.org/docs/2.4/generated/torch.Tensor.numpy.html)
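A minimal sketch of the three corresponding fixes (illustrative only, not the actual patch in the PR below):
```
# Illustrative device-placement fixes, not the actual test changes.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 4).to(device)   # (1) move the model to the device
x = torch.randn(2, 4).to(device)           # (2) move the input to the device
out = model(x).detach().cpu().numpy()      # (3) go through .cpu() before .numpy()
```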
**Proposed fix:**
* #33485
CC: @sanchit-gandhi, @amyeroberts
See the following log for repro cmdline and list of errors (log running on NVidia A10, for XPU log will be similar):
```
$ python3 -m pytest --tb=short \
tests/models/informer/test_modeling_informer.py::InformerModelTest::test_encoder_decoder_model_standalone \
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxGPT2EncoderDecoderModelTest::test_pt_flax_equivalence \
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxBartEncoderDecoderModelTest::test_pt_flax_equivalence \
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxBertEncoderDecoderModelTest::test_pt_flax_equivalence \
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py::ViTBertModelTest::test_pt_flax_equivalence \
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py::CLIPVisionBertModelTest::test_pt_flax_equivalence \
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2GPT2ModelTest::test_pt_flax_equivalence \
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2BartModelTest::test_pt_flax_equivalence \
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2BertModelTest::test_pt_flax_equivalence \
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py::FlaxViTBertModelTest::test_pt_flax_equivalence tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py::FlaxCLIPVisionBertModelTest::test_pt_flax_equivalence \
tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py::FlaxViT2GPT2EncoderDecoderModelTest::test_pt_flax_equivalence
========================================================================================= test session starts =========================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0
rootdir: /home/dvrogozh/git/huggingface/transformers
configfile: pyproject.toml
plugins: hypothesis-6.111.1, subtests-0.13.1, rich-0.1.1, dash-2.17.1, xdist-3.6.1, pspec-0.0.4, timeout-2.3.1
collected 12 items
tests/models/informer/test_modeling_informer.py F [ 8%]
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py FFF [ 33%]
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py FF [ 50%]
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py FFF [ 75%]
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py FF [ 91%]
tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py F [100%]
============================================================================================== FAILURES ===============================================================================================
_______________________________________________________________________ InformerModelTest.test_encoder_decoder_model_standalone _______________________________________________________________________
tests/models/informer/test_modeling_informer.py:226: in test_encoder_decoder_model_standalone
self.model_tester.check_encoder_decoder_model_standalone(*config_and_inputs)
tests/models/informer/test_modeling_informer.py:174: in check_encoder_decoder_model_standalone
self.parent.assertTrue(torch.equal(model.encoder.embed_positions.weight, embed_positions.weight))
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument other in method wrapper_CUDA__equal)
______________________________________________________________________ FlaxGPT2EncoderDecoderModelTest.test_pt_flax_equivalence _______________________________________________________________________
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:413: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:344: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:303: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py:597: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:1077: in forward
embedding_output = self.embeddings(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:210: in forward
inputs_embeds = self.word_embeddings(input_ids)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/sparse.py:190: in forward
return F.embedding(
../../pytorch/pytorch/torch/nn/functional.py:2551: in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
______________________________________________________________________ FlaxBartEncoderDecoderModelTest.test_pt_flax_equivalence _______________________________________________________________________
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:413: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:344: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:303: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py:597: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:1077: in forward
embedding_output = self.embeddings(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:210: in forward
inputs_embeds = self.word_embeddings(input_ids)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/sparse.py:190: in forward
return F.embedding(
../../pytorch/pytorch/torch/nn/functional.py:2551: in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
---------------------------------------------------------------------------------------- Captured stderr call -----------------------------------------------------------------------------------------
Config of the decoder: <class 'transformers.models.bart.modeling_bart.BartForCausalLM'> is overwritten by shared decoder config: BartConfig {
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_cross_attention": true,
"attention_dropout": 0.1,
"bos_token_id": 0,
"classifier_dropout": 0.0,
"d_model": 32,
"decoder_attention_heads": 4,
"decoder_ffn_dim": 4,
"decoder_layerdrop": 0.0,
"decoder_layers": 2,
"decoder_start_token_id": 2,
"dropout": 0.1,
"encoder_attention_heads": 4,
"encoder_ffn_dim": 4,
"encoder_layerdrop": 0.0,
"encoder_layers": 2,
"eos_token_id": 2,
"forced_eos_token_id": 2,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"initializer_range": 0.02,
"is_decoder": true,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_position_embeddings": 32,
"model_type": "bart",
"num_hidden_layers": 2,
"pad_token_id": 1,
"scale_embedding": false,
"transformers_version": "4.45.0.dev0",
"use_cache": false,
"vocab_size": 99
}
______________________________________________________________________ FlaxBertEncoderDecoderModelTest.test_pt_flax_equivalence _______________________________________________________________________
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:413: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:344: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py:303: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py:597: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:1077: in forward
embedding_output = self.embeddings(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/bert/modeling_bert.py:210: in forward
inputs_embeds = self.word_embeddings(input_ids)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/sparse.py:190: in forward
return F.embedding(
../../pytorch/pytorch/torch/nn/functional.py:2551: in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
______________________________________________________________________________ ViTBertModelTest.test_pt_flax_equivalence ______________________________________________________________________________
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:266: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(vision_config, text_config, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:226: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, **inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:182: in check_pt_flax_equivalence
flax_inputs = {k: v.numpy() for k, v in pt_inputs.items()}
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:182: in <dictcomp>
flax_inputs = {k: v.numpy() for k, v in pt_inputs.items()}
E TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
__________________________________________________________________________ CLIPVisionBertModelTest.test_pt_flax_equivalence ___________________________________________________________________________
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:266: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(vision_config, text_config, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:226: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, **inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:182: in check_pt_flax_equivalence
flax_inputs = {k: v.numpy() for k, v in pt_inputs.items()}
tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py:182: in <dictcomp>
flax_inputs = {k: v.numpy() for k, v in pt_inputs.items()}
E TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
_________________________________________________________________________ FlaxWav2Vec2GPT2ModelTest.test_pt_flax_equivalence __________________________________________________________________________
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:532: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:459: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:418: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py:501: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:1809: in forward
extract_features = self.feature_extractor(input_values)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:463: in forward
hidden_states = conv_layer(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:332: in forward
hidden_states = self.conv(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:375: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:370: in _conv_forward
return F.conv1d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
_________________________________________________________________________ FlaxWav2Vec2BartModelTest.test_pt_flax_equivalence __________________________________________________________________________
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:532: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:459: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:418: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py:501: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:1809: in forward
extract_features = self.feature_extractor(input_values)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:463: in forward
hidden_states = conv_layer(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:332: in forward
hidden_states = self.conv(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:375: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:370: in _conv_forward
return F.conv1d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
---------------------------------------------------------------------------------------- Captured stderr call -----------------------------------------------------------------------------------------
Config of the decoder: <class 'transformers.models.bart.modeling_bart.BartForCausalLM'> is overwritten by shared decoder config: BartConfig {
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_cross_attention": true,
"attention_dropout": 0.1,
"bos_token_id": 0,
"classifier_dropout": 0.0,
"d_model": 24,
"decoder_attention_heads": 4,
"decoder_ffn_dim": 4,
"decoder_layerdrop": 0.0,
"decoder_layers": 2,
"decoder_start_token_id": 2,
"dropout": 0.1,
"encoder_attention_heads": 4,
"encoder_ffn_dim": 4,
"encoder_layerdrop": 0.0,
"encoder_layers": 2,
"eos_token_id": 2,
"forced_eos_token_id": 2,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"initializer_range": 0.02,
"is_decoder": true,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_position_embeddings": 32,
"model_type": "bart",
"num_hidden_layers": 2,
"pad_token_id": 1,
"scale_embedding": false,
"transformers_version": "4.45.0.dev0",
"use_cache": false,
"vocab_size": 99
}
_________________________________________________________________________ FlaxWav2Vec2BertModelTest.test_pt_flax_equivalence __________________________________________________________________________
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:532: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:459: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py:418: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py:501: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:1809: in forward
extract_features = self.feature_extractor(input_values)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:463: in forward
hidden_states = conv_layer(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/wav2vec2/modeling_wav2vec2.py:332: in forward
hidden_states = self.conv(hidden_states)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:375: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:370: in _conv_forward
return F.conv1d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
____________________________________________________________________________ FlaxViTBertModelTest.test_pt_flax_equivalence ____________________________________________________________________________
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:243: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(vision_config, text_config, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:207: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:166: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py:358: in forward
vision_outputs = self.vision_model(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:619: in forward
embedding_output = self.embeddings(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:124: in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:183: in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:554: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:549: in _conv_forward
return F.conv2d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
________________________________________________________________________ FlaxCLIPVisionBertModelTest.test_pt_flax_equivalence _________________________________________________________________________
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:243: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(vision_config, text_config, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:207: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py:166: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py:358: in forward
vision_outputs = self.vision_model(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/clip/modeling_clip.py:1116: in forward
return self.vision_model(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/clip/modeling_clip.py:1040: in forward
hidden_states = self.embeddings(pixel_values)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/clip/modeling_clip.py:202: in forward
patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid]
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:554: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:549: in _conv_forward
return F.conv2d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
____________________________________________________________________ FlaxViT2GPT2EncoderDecoderModelTest.test_pt_flax_equivalence _____________________________________________________________________
tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py:352: in test_pt_flax_equivalence
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py:288: in check_equivalence_pt_to_flax
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py:247: in check_pt_flax_equivalence
pt_outputs = pt_model(**pt_inputs).to_tuple()
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py:587: in forward
encoder_outputs = self.encoder(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:619: in forward
embedding_output = self.embeddings(
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:124: in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
src/transformers/models/vit/modeling_vit.py:183: in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
../../pytorch/pytorch/torch/nn/modules/module.py:1736: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/module.py:1747: in _call_impl
return forward_call(*args, **kwargs)
../../pytorch/pytorch/torch/nn/modules/conv.py:554: in forward
return self._conv_forward(input, self.weight, self.bias)
../../pytorch/pytorch/torch/nn/modules/conv.py:549: in _conv_forward
return F.conv2d(
E RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
========================================================================================== warnings summary ===========================================================================================
../../../pytorch.cuda/lib/python3.10/site-packages/tensorflow/__init__.py:30
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/tensorflow/__init__.py:30: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
import distutils as _distutils
src/transformers/deepspeed.py:24
/home/dvrogozh/git/huggingface/transformers/src/transformers/deepspeed.py:24: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
warnings.warn(
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:46
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:46: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
v: jnp.DeviceArray,
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:48
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:48: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
inputs: jnp.DeviceArray,
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:49
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:49: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
targets: jnp.DeviceArray,
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:50
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:50: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
) -> jnp.DeviceArray:
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:72
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:72: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
inputs: jnp.DeviceArray,
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:73
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:73: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
targets: jnp.DeviceArray,
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:74
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:74: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
) -> jnp.DeviceArray:
../../../pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:97
/home/dvrogozh/pytorch.cuda/lib/python3.10/site-packages/optax/_src/second_order.py:97: DeprecationWarning: jax.numpy.DeviceArray is deprecated. Use jax.Array.
) -> jnp.DeviceArray:
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================= short test summary info =======================================================================================
FAILED tests/models/informer/test_modeling_informer.py::InformerModelTest::test_encoder_decoder_model_standalone - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument other in method wrapper_CUDA__equal)
FAILED tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxGPT2EncoderDecoderModelTest::test_pt_flax_equivalence - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
FAILED tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxBartEncoderDecoderModelTest::test_pt_flax_equivalence - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
FAILED tests/models/encoder_decoder/test_modeling_flax_encoder_decoder.py::FlaxBertEncoderDecoderModelTest::test_pt_flax_equivalence - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
FAILED tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py::ViTBertModelTest::test_pt_flax_equivalence - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py::CLIPVisionBertModelTest::test_pt_flax_equivalence - TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
FAILED tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2GPT2ModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
FAILED tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2BartModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
FAILED tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py::FlaxWav2Vec2BertModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
FAILED tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py::FlaxViTBertModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
FAILED tests/models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py::FlaxCLIPVisionBertModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
FAILED tests/models/vision_encoder_decoder/test_modeling_flax_vision_encoder_decoder.py::FlaxViT2GPT2EncoderDecoderModelTest::test_pt_flax_equivalence - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
================================================================================== 12 failed, 10 warnings in 19.19s ===================================================================================
```
| [
55
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Flax"
] |
https://api.github.com/repos/huggingface/transformers/issues/34277 |
TITLE
bitnet support
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
ValueError: Unknown quantization type, got bitnet - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'hqq', 'fbgemm_fp8']
### Motivation
I was trying to use "HF1BitLLM/Llama3-8B-1.58-100B-tokens" but i encounter above error.
#### Sample code snipt
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("HF1BitLLM/Llama3-8B-1.58-100B-tokens", device_map="cuda", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
### Your contribution
I can help. | [
76,
40
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Quantization"
] |
https://api.github.com/repos/huggingface/transformers/issues/33721 |
TITLE
validation of the eval dataset should be done in advance
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
I found that the validation of the eval dataset should be done in advance. It only told me that the validation set was empty after I had trained for half an hour and completed one epoch. The code is as follows. I set eval_strategy="epoch" in train_args, but I don't pass an eval dataset in the Trainer init.
```python
from transformers import TrainingArguments
train_args = TrainingArguments(output_dir="./checkpoints",
                               per_device_train_batch_size=2,
                               per_device_eval_batch_size=1,
                               logging_steps=1,
                               eval_strategy="epoch",   # evaluation is requested every epoch
                               save_strategy="epoch",
                               save_total_limit=3,
                               learning_rate=2e-5,
                               weight_decay=0.01,
                               num_train_epochs=30,
                               dataloader_drop_last=True,
                               metric_for_best_model="f1",
                               load_best_model_at_end=True
                               )

from transformers import DataCollatorWithPadding, Trainer
trainer = Trainer(model=model,
                  args=train_args,
                  train_dataset=process_dataset["train"],
                  eval_dataset=None,       # but no eval dataset is passed
                  data_collator=collate_fn)
```
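A sketch of the kind of up-front check being requested (illustrative only, not existing Trainer code):
```python
# Hypothetical check at Trainer construction / at the start of train():
if train_args.eval_strategy != "no" and trainer.eval_dataset is None:
    raise ValueError(
        "eval_strategy is set to 'epoch' or 'steps' but no eval_dataset was provided; "
        "failing before training starts instead of after the first epoch."
    )
```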
transformers version: 4.45.0
python version: 3.10
platform: ubuntu
@muellerzr @Sunm
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. set eval_strategy="epoch" in train_args
2. set eval dataset=None in Trainer init
3. run Trainer.run()
### Expected behavior
validation of the eval dataset should be done in advance | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34046 |
TITLE
Support for torch._dynamo.export for Phi3
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Compared to `symbolic_trace`, the new (but, I assume, experimental) entrypoint `torch._dynamo.export` seems to provide a more robust way to extract modular FX graphs that can't have any graph breaks.
I have been experimenting with some networks (Pythia, OPT, Llama, Mistral), and they all go through.
It seems that Phi3 breaks because of this line:
https://github.com/huggingface/transformers/blob/36d410dab637c133f1bb706779c75d9021d403cf/src/transformers/models/phi3/modeling_phi3.py#L213
Where `self.inv_freq` is redefined at runtime in the forward pass.
This is a bit confusing, and I would recommend dropping `self` and using a normal runtime variable.
I'm not sure if this has potential side effects.
A similar pattern seems to be repeated in other embedding classes in Phi3.
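To illustrate the pattern with a toy module (this is not the actual Phi3 code, just a minimal sketch of the difference): re-assigning a module attribute inside `forward` is what trips up dynamo, while recomputing into a local variable traces cleanly.
```python
import torch
import torch.nn as nn


class ToyRotary(nn.Module):
    # Toy stand-in for a rotary-embedding module, not the real Phi3 implementation.
    def __init__(self, dim: int = 8, base: float = 10000.0):
        super().__init__()
        self.dim = dim
        self.base = base
        self.register_buffer("inv_freq", 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Problematic pattern: re-assigning the attribute at runtime, e.g.
        #   self.inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float() / self.dim))
        # Trace-friendly alternative: keep the recomputed value in a local variable.
        inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, device=x.device).float() / self.dim))
        freqs = torch.outer(x[:, 0], inv_freq)
        return torch.cat((freqs.cos(), freqs.sin()), dim=-1)


gm, guards = torch._dynamo.export(ToyRotary())(torch.randn(4, 8))
```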
To reproduce:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model, guards = torch._dynamo.export(model)(**model.dummy_inputs)
```
@gante @ArthurZucker
### Motivation
Dropping the reference to `self.inv_freq` would allow to obtain a fullgraph with dynamo.
Having full FX graph is also a requirement for torch.export, although I have not tested that API.
### Your contribution
I can't directly contribute with a PR at the moment.
I could test a PR from my side to check compatibility with dynamo and potential side effects, once the PR is open. | [
76,
32
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Deployment"
] |
https://api.github.com/repos/huggingface/transformers/issues/35200 |
TITLE
PaliGemma2 Processor returns wrong labels array when <image> token is present in `text`
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.1
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.3
- Accelerate version: 0.30.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: Tesla T4
### Who can help?
@ArthurZucker @molbap we chatted about the last paligemma release :)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a script that shows the problem:
```python
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor
from PIL import Image
import numpy as np
hf_token = "..."
processor = PaliGemmaProcessor.from_pretrained(
"google/paligemma2-3b-pt-224", token=hf_token
)
text = ["How many shapes are green?"]
suffix = ["4"]
image = [Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))]
print(
processor(
images=image, text=text, suffix=suffix, return_tensors="pt", padding="longest"
).labels
)
text = ["<image>How many shapes are green?"]
print(
processor(
images=image, text=text, suffix=suffix, return_tensors="pt", padding="longest"
).labels
)
```
### Expected behavior
As you can see, the bottom one is missing the EOS token, which leads to bad finetunes! But the processor class warns me when the `<image>` token isn't present. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34427 |
TITLE
Trying to train a model using automatic1111. Error - Exception training model: 'module 'transformers.integrations' has no attribute 'deepspeed''.
COMMENTS
22
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.46.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.0
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
I'm trying to fine-tune a model and keep getting this error: Exception training model: 'module 'transformers.integrations' has no attribute 'deepspeed''.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Downloaded the model, installed dependencies, ran the AUTOMATIC1111 web user interface, installed the DreamBooth extension, and tried to run the model.
### Expected behavior
A fine-tuned model.
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34347 |
TITLE
Add support for Allegro
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
Allegro is a powerful text-to-video model that generates high-quality videos up to 6 seconds at 15 FPS and 720p resolution from simple text input.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model code: https://github.com/rhymes-ai/Allegro | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/34276 |
TITLE
Limit number of parametes logged with `MLflowCallback`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 1
rocket: 0
eyes: 0
BODY
### Feature request
Add a new environment variable, such as `MLFLOW_MAX_LOG_PARAMS`, which can limit the number of parameters logged by the `MLflowCallback`.
### Motivation
When using mlflow in Azure ML, there is a limit of 200 parameters that can be logged in a single run, meaning that when attempting to run a training job, the callback needs to be disabled entirely, or the module needs to be "monkeypatched" to limit the number of params logged.
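A rough sketch of the kind of monkeypatch the motivation refers to (`MLFLOW_MAX_LOG_PARAMS` is the proposed, not-yet-existing variable name, and the default of 200 mirrors the Azure ML limit):
```python
import os

import mlflow

_MAX_PARAMS = int(os.environ.get("MLFLOW_MAX_LOG_PARAMS", "200"))
_original_log_params = mlflow.log_params


def _capped_log_params(params, *args, **kwargs):
    # Keep only the first _MAX_PARAMS entries so the per-run limit is not exceeded.
    capped = dict(list(params.items())[:_MAX_PARAMS])
    return _original_log_params(capped, *args, **kwargs)


mlflow.log_params = _capped_log_params
```
Supporting the environment variable in `MLflowCallback` itself would make this workaround unnecessary.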
### Your contribution
I will submit a PR | [
66,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"trainer",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34846 |
TITLE
change inputs_embeds to input_embeds
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Change the `inputs_embeds` argument of `LlamaModel` (and maybe other models as well) to `input_embeds`.
### Motivation
Our code has to standardize on one of these two names, and we sometimes switch from one to the other; the current difference between the two prefixes makes that process awkward. A simple modification would make it more convenient.
### Your contribution
I can help to submit a PR for it | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33816 |
TITLE
[Qwen2-VL] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
When I load the Qwen2-VL 7B model using the example script from the model card, I encounter this error.
The error does not seem to happen at this git commit: 21fac7abba2a37fae86106f87fcf9974fd1e3830
The issue is that this commit doesn't contain the update for Llama 3.2 yet, and I would like to be able to run both.
```
RuntimeError Traceback (most recent call last)
Cell In[2], line 64
60 inputs = inputs.to(model.device)
61 # inputs = inputs.to("cuda")
62
63 # Inference: Generation of the output
---> 64 generated_ids = model.generate(**inputs, max_new_tokens=128)
65 generated_ids_trimmed = [
66 out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
67 ]
68 output_text = processor.batch_decode(
69 generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
70 )
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/transformers/generation/utils.py:2048, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
2040 input_ids, model_kwargs = self._expand_inputs_for_generation(
2041 input_ids=input_ids,
2042 expand_size=generation_config.num_return_sequences,
2043 is_encoder_decoder=self.config.is_encoder_decoder,
2044 **model_kwargs,
2045 )
2047 # 12. run sample (it degenerates to greedy search when `generation_config.do_sample=False`)
-> 2048 result = self._sample(
2049 input_ids,
2050 logits_processor=prepared_logits_processor,
2051 stopping_criteria=prepared_stopping_criteria,
2052 generation_config=generation_config,
2053 synced_gpus=synced_gpus,
2054 streamer=streamer,
2055 **model_kwargs,
2056 )
2058 elif generation_mode in (GenerationMode.BEAM_SAMPLE, GenerationMode.BEAM_SEARCH):
2059 # 11. prepare beam search scorer
2060 beam_scorer = BeamSearchScorer(
2061 batch_size=batch_size,
2062 num_beams=generation_config.num_beams,
(...)
2067 max_length=generation_config.max_length,
2068 )
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/transformers/generation/utils.py:3008, in GenerationMixin._sample(self, input_ids, logits_processor, stopping_criteria, generation_config, synced_gpus, streamer, **model_kwargs)
3005 model_inputs.update({"output_hidden_states": output_hidden_states} if output_hidden_states else {})
3007 # forward pass to get next token
-> 3008 outputs = self(**model_inputs, return_dict=True)
3010 if synced_gpus and this_peer_finished:
3011 continue # don't waste resources running the code we don't need
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/miniconda3/envs/mlenv_tp/lib/python3.12/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py:1694, in Qwen2VLForConditionalGeneration.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, pixel_values, pixel_values_videos, image_grid_thw, video_grid_thw, rope_deltas)
1692 image_mask = (input_ids == self.config.image_token_id).unsqueeze(-1).expand_as(inputs_embeds)
1693 image_embeds = image_embeds.to(inputs_embeds.device, inputs_embeds.dtype)
-> 1694 inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds)
1696 if pixel_values_videos is not None:
1697 pixel_values_videos = pixel_values_videos.type(self.visual.get_dtype())
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument mask in method wrapper_CUDA__masked_scatter_)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processer
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
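A possible workaround while this is broken, assuming the model fits on a single GPU, is to pin everything to one device instead of letting `device_map="auto"` shard the model across GPUs:
```python
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map={"": 0}
)
```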
### Expected behavior
Bug fix or some workaround | [
64,
62,
18,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision",
"Generation",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/34769 |
TITLE
image-to-text pipeline failure when using Qwen2_VL
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.10.0
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using Qwen2-VL in the image-to-text pipeline, it fails with the following error:
```
File "xxx/modeling_qwen2_vl.py", line 1572, in get_rope_index
position_ids[..., i, attention_mask[i] == 1] = llm_positions.to(position_ids.device)
RuntimeError: shape mismatch: value tensor of shape [3, 3596] cannot be broadcast to indexing result of shape [3, 26]
```
Here are the code the reproduce:
```py
import PIL.Image
import requests
from transformers import AutoProcessor, pipeline
model_name_or_path = "Qwen/Qwen2-VL-2B-Instruct"
image_path = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
processor = AutoProcessor.from_pretrained(model_name_or_path)
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Describe this image."},
{"type": "image"},
],
}
]
prompt = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
images = [PIL.Image.open(requests.get(image_path, stream=True, timeout=3000).raw)]
generator = pipeline(
"image-to-text",
model=model_name_or_path,
config=model_name_or_path,
# tokenizer=model_name_or_path,
# image_processor=model_name_or_path,
torch_dtype="auto",
device="cpu",
)
generate_kwargs = {"max_new_tokens": 100}
result = generator(images, prompt=prompt, batch_size=1, generate_kwargs=generate_kwargs)
print(result)
```
Uncommenting the `tokenizer=model_name_or_path,` and `image_processor=model_name_or_path,` lines also does not help.
### Expected behavior
The pipeline should work with Qwen2-VL | [
51,
64,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"bug",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/34954 |
TITLE
[`ESM`] Add support for sdpa.
COMMENTS
10
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Adds support for SDPA (scaled dot-product attention) to ESM. More context in https://github.com/huggingface/transformers/pull/28802 (this PR mainly reuses code from that one, since ESM is a BERT-based model) and https://github.com/huggingface/transformers/issues/28005.
This is my first time contributing to this project, so please point out any mistakes.
It also reverts a change from https://github.com/huggingface/transformers/pull/29329, as the dtype-mismatch issue for bitsandbytes is actually caused by the rotary embedding.
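Once merged, the expected usage would presumably be the standard `attn_implementation` switch (the checkpoint here is just an example):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "facebook/esm2_t6_8M_UR50D", attn_implementation="sdpa"
)
```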
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker | [
36,
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"SDPA",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34294 |
TITLE
SiglipVisionEmbeddings doesn't cast pixel_values like CLIPVisionEmbeddings does
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
pip show transformers
Name: transformers
Version: 4.45.2
### Who can help?
@amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The issue occurs when using a Llava model that uses SigLip as its vision tower, and loading the model using a quantization configuration. For example:
```
MODEL_NAME = "fancyfeast/llama-joycaption-alpha-two-hf-llava"
qnt_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
llava_model = LlavaForConditionalGeneration.from_pretrained(MODEL_NAME, quantization_config=qnt_config, device_map=0)
```
Loading the model is fine, but when running inference an exception is thrown `Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same`:
```
File ~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:311, in SiglipVisionEmbeddings.forward(self, pixel_values, interpolate_pos_encoding)
[309](~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:309) def forward(self, pixel_values: torch.FloatTensor, interpolate_pos_encoding=False) -> torch.Tensor:
[310](~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:310) _, _, height, width = pixel_values.shape
--> [311](~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:311) patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
[312](~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:312) embeddings = patch_embeds.flatten(2).transpose(1, 2)
[314](~/miniconda3/envs/tmpenv5/lib/python3.11/site-packages/transformers/models/siglip/modeling_siglip.py:314) if interpolate_pos_encoding:
```
All of this works with a more standard model like `llava-hf/llava-1.5-7b-hf`, because it uses CLIP as its vision tower.
The issue stems from a difference between `CLIPVisionEmbedding` and `SiglipVisionEmbeddings`. Specifically `CLIPVisionEmbedding` casts the input dtype before running it through the Embedding module: https://github.com/huggingface/transformers/blob/32590b5ecb50f1c56be32cb0e686196be9427f2f/src/transformers/models/clip/modeling_clip.py#L247-L248
Whereas SigLip just hands it straight off: https://github.com/huggingface/transformers/blob/32590b5ecb50f1c56be32cb0e686196be9427f2f/src/transformers/models/siglip/modeling_siglip.py#L311
Seems like just copying `CLIPVisionEmbedding`'s behavior will fix this issue.
NOTE: There may be other bugs in the chain, this is just the first breaking issue I ran into when trying to use SigLip in a quantized vision tower.
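Until a fix lands, a hedged workaround is to monkeypatch the cast in, mirroring CLIP (this assumes the `forward` signature shown in the traceback above):
```python
from transformers.models.siglip import modeling_siglip

_original_forward = modeling_siglip.SiglipVisionEmbeddings.forward


def _forward_with_cast(self, pixel_values, interpolate_pos_encoding=False):
    # Cast pixels to the patch embedding's weight dtype, as CLIPVisionEmbeddings does.
    target_dtype = self.patch_embedding.weight.dtype
    return _original_forward(self, pixel_values.to(dtype=target_dtype), interpolate_pos_encoding)


modeling_siglip.SiglipVisionEmbeddings.forward = _forward_with_cast
```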
### Expected behavior
SigLip should work in quantized llava vision towers, just like CLIP. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34867 |
TITLE
Data prefetching does not occur for iterable datasets
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.1
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.11.10
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@muellerzr @SunMarc
### Reproduction
PR #28498 was meant to allow for specifying the Pytorch dataloader's `prefetch_factor` argument via the huggingface `dataloader_prefetch_factor` training argument. As we can see [on this line](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L989), this feature was added inside of a
```python
if not isinstance(train_dataset, torch.utils.data.IterableDataset):
```
statement, which results in prefetching never occurring for `IterableDataset`s (which seems like a mistake). There are two other lines where the same error happens for the [test](https://github.com/huggingface/transformers/blob/4e90b99ed916300b80bac9db793f2a96b2a87122/src/transformers/trainer.py#L1126) and [eval](https://github.com/huggingface/transformers/blob/4e90b99ed916300b80bac9db793f2a96b2a87122/src/transformers/trainer.py#L1084) dataloaders. Unless I'm missing something, I believe these lines can be moved out of the if condition and into other logic.
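For reference, `prefetch_factor` is a plain `DataLoader` argument that also works for iterable datasets (it only requires `num_workers > 0`), so there is no fundamental reason to gate it on the dataset type:
```python
import torch
from torch.utils.data import DataLoader, IterableDataset


class Stream(IterableDataset):
    def __iter__(self):
        for i in range(8):
            yield torch.tensor([i])


# prefetch_factor is accepted together with an IterableDataset as long as workers are used.
loader = DataLoader(Stream(), batch_size=2, num_workers=2, prefetch_factor=4)
print(list(loader))
```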
### Expected behavior
Prefetching should work with iterable datasets. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33782 |
TITLE
Add AudioQuestionAnswering pipeline
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
A new AudioQuestionAnswering pipeline, just like DQA, but instead of providing a document, applying OCR, and doing QA over it, you provide an audio file, apply STT, and do QA over the transcript. An advanced version would include diarization + STT, as speaker annotations provide important context and will improve QA/understanding.
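A rough sketch of the building blocks such a pipeline would wrap, using existing pipelines (the checkpoints and the audio path are just illustrative choices, not part of the proposal):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

transcript = asr("phone_call.wav")["text"]  # hypothetical local audio file
answer = qa(question="Why did the customer call?", context=transcript)
print(answer["answer"])
```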
### Motivation
This kind of pipeline is one that I have had to build on multiple occasions for processing audio, specifically phone call recordings. Just like the other pipelines which provide accessibility to some applied ML based pipeline for those to use quickly and easily, this will provide the same thing just for a different modality than what is currently provided.
### Your contribution
I plan to contribute the entire pipeline. My inspiration and what I plan to base a lot of the PR for this pipeline comes from [#18414](https://github.com/huggingface/transformers/pull/18414).
I'm mostly just posting this issue to get feedback from HF team. Tagging @Narsil @NielsRogge as they also provided feedback on the DQA PR. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35602 |
TITLE
LlavaNextVideoProcessor -> TypeError: LlavaNextVideoProcessor.__call__() got an unexpected keyword argument 'legacy' (I have the fix)
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
The problem's root cause is in the `ImageTextToTextPipeline` class in the `image_text_to_text.py` pipeline.
Line `438`
```py
model_inputs = self.processor(
images=images, text=text, return_tensors=self.framework, legacy=False, **processing_kwargs
).to(dtype=self.torch_dtype)
```
Notice how `legacy` is always specified as `False`?
If you use this model (`llava-hf/LLaVA-NeXT-Video-7B-32K-hf`) on `transformers==4.47.1`, you will get this error because its config specifies the class `LlavaNextVideoProcessor` from `processing_llava_next_video.py`, and its `__call__` method is not expecting that kwarg.
The quick fix is this:
Modify `__call__` (line `101`) in `processing_llava_next_video.py`
from this:
```py
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
images: ImageInput = None,
videos: VideoInput = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: int = None,
return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
) -> BatchFeature:
```
to this:
```py
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
images: ImageInput = None,
videos: VideoInput = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: int = None,
return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
**kwargs, # <-- this guy
) -> BatchFeature:
```
Notice the unused kwargs at the end. This reflects the pattern used for `__init__`
which looks like this:
```py
def __init__(
self,
video_processor=None,
image_processor=None,
tokenizer=None,
chat_template=None,
patch_size=None,
vision_feature_select_strategy=None,
video_token="<video>",
image_token="<image>",
num_additional_image_tokens=0,
**kwargs, # <-- this guy
):
```
I don't have time to go through the PR process myself, so I hope this helps the HF staff either apply this quick patch or solve the problem at a higher level in the code for `image_text_to_text.py`.
### Who can help?
HF staff
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) (`image-text-to-text`)
### Reproduction
```py
pipe = pipeline("image-text-to-text", model="llava-hf/LLaVA-NeXT-Video-7B-32K-hf")
messages = {'role': 'user', 'content': [{'type': 'text', 'text': "What's in this image?"}, {'type': 'video'}]}
videos = ["https://huggingface.co/datasets/raushan-testing-hf/videos-test/resolve/main/sample_demo_1.mp4"]
out = pipe(text=messages, videos=videos)
```
### Expected behavior
No exception raised due to an unexpected kwarg. | [
51,
64,
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"bug",
"VLM"
] |
https://api.github.com/repos/huggingface/transformers/issues/34402 |
TITLE
Accelerate + Dynamo broken in 4.46.0 due to model loss functions refactor
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.24.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- PyTorch version (GPU?): 2.5.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA RTX A6000
### Who can help?
@muellerzr @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
#34191 introduced custom loss functions to the model classes. This appears to have broken training with accelerate + torch dynamo.
To reproduce, use `run_clm.py` with the following accelerate config:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: 'NO'
downcast_bf16: 'no'
dynamo_config:
dynamo_backend: INDUCTOR
enable_cpu_affinity: true
gpu_ids: '0'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
```bash
accelerate launch run_clm.py \
--log_level info \
--model_name_or_path=meta-llama/Llama-3.2-1B \
--dataset_name=Salesforce/wikitext \
--dataset_config_name=wikitext-2-raw-v1 \
--block_size=1024 \
--per_device_train_batch_size=4 \
--do_train \
--bf16 \
--output_dir=Llama-3.2-1B-wikitext-2-raw-v1 \
--overwrite_output_dir \
--seed=42 \
--logging_steps=10 \
--lr_scheduler_type=cosine \
--num_train_epochs=3 \
--learning_rate=5e-05 \
--warmup_ratio=0.03 \
--dataloader_drop_last
```
This produces an error from dynamo relating to the new `model_cls.loss_function` attribute added in #34191:
https://github.com/huggingface/transformers/blob/239a256a0c3bf2edab2bbd614923c4eaf88b867d/src/transformers/models/llama/modeling_llama.py#L1209-L1211
Important part of the traceback:
```text
File "/anaconda3/envs/dev/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 152, in __init__
assert isinstance(
AssertionError: expected FunctionType found _lru_cache_wrapper <functools._lru_cache_wrapper object at 0x7f1091109a40>
from user code:
File "/anaconda3/envs/dev/lib/python3.9/site-packages/accelerate/utils/operations.py", line 820, in forward
return model_forward(*args, **kwargs)
File "/anaconda3/envs/dev/lib/python3.9/site-packages/accelerate/utils/operations.py", line 808, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/anaconda3/envs/dev/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
File "/anaconda3/envs/dev/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 1214, in forward
loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **loss_kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Now, if you update the accelerate config to not use dynamo, it runs just fine:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: 'NO'
downcast_bf16: 'no'
enable_cpu_affinity: true
gpu_ids: '0'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Expected behavior
Accelerate should not throw the error when using torch dynamo. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35283 |
TITLE
Request to add D-FINE
COMMENTS
16
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
D-FINE is an RT-DETR-based model that uses probability distributions as an intermediate representation from which bounding boxes are predicted. This model has achieved SOTA results in real-time object detection with additional training data. https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=d-fine-redefine-regression-task-in-detrs-as
### Open source status
- [x] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Here is the link to the repository: https://github.com/Peterande/D-FINE. The weights are present on [Huggingface Hub](https://huggingface.co/Peterande/D-FINE/tree/main). The only github username I know of is @Peterande. | [
77,
6,
62,
54
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Good Second Issue",
"Vision",
"contributions-welcome"
] |
https://api.github.com/repos/huggingface/transformers/issues/33969 |
TITLE
Jitter Noise added to input being passed to experts in Switch Transformers
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
System Info
- transformers version: 4.44.2
- Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn as nn
from transformers import (
SwitchTransformersConfig,
SwitchTransformersTop1Router,
)
from transformers.models.switch_transformers.modeling_switch_transformers import SwitchTransformersDenseActDense
class MySwitchTransformersSparseMLP(nn.Module):
r"""
Implementation of the Switch Transformers Sparse MLP module.
"""
def __init__(self, config: SwitchTransformersConfig, expert_class: nn.Module = SwitchTransformersDenseActDense):
super().__init__()
# Step 1: Get the correct router according to its class
self.router = SwitchTransformersTop1Router(config)
# Step 2: Get the experts
self.experts = nn.ModuleDict()
for idx in range(config.num_experts):
self.experts[f"expert_{idx}"] = expert_class(config)
def forward(self, hidden_states):
r"""
Hold on, this will be slightly tricky to understand In the correct order, a MoE layer does the following:
1- Gets the `router_mask` from the router. The shape of the mask is `(batch_size, sequence_length, num_expert)`
and corresponds to the argmax of the `router_probs`. The probabilities are needed in the computation of the
hidden states : they are broadcasted to the hidden states values (can be interpreted as a scaling factor).
2- Dispatch the tokens to its associated experts. We do a classic for loop over the experts and assign for each
expert the corresponding hidden states.
"""
prev_save = hidden_states.clone()
# Step 1: Get the router_mask from the router as wel as the probabilities
router_mask, router_probs, router_logits = self.router(hidden_states)
expert_index = torch.argmax(router_mask, dim=-1)
print(torch.allclose(prev_save, hidden_states))
print(torch.mean(prev_save - hidden_states))
# The routers introduced might not always map all the tokens, to a router, which means that some hidden states
# can be unchanged from one layer to another. That is why the hidden states are cloned before updating only the seleced ones.
next_states = hidden_states.clone()
router_mask = router_mask.bool()
batch_size, seq_len, num_experts = router_mask.shape
idx_mask = router_mask.transpose(1, 2).reshape(batch_size * seq_len, num_experts).sum(dim=0)
idx_mask = torch.nonzero(idx_mask, as_tuple=True)[
0
].tolist() # length: number of "activated" expert / value: index
for idx in idx_mask:
next_states[router_mask[:, :, idx]] = getattr(self.experts, "expert_{}".format(idx))(
hidden_states[router_mask[:, :, idx]]
)
hidden_states = router_probs * next_states
return hidden_states, (router_logits, expert_index)
config = SwitchTransformersConfig()
model = MySwitchTransformersSparseMLP(config)
model.train()
in_data = torch.ones(1, 1, 768)
out = model(in_data)
```
The output is
```bash
False
tensor(-0.0001)
```
which ideally should give True and the mean difference should be zero.
This is because in `SwitchTransformersTop1Router`, the `hidden_states` are multiplied in place by jitter noise, which persists even when they are passed to the experts.
https://github.com/huggingface/transformers/blob/e71a01a104dd663c730e494eb0b6467bb51df357/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L159-L161
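One way this could be addressed, sketched under the assumption that the jitter only needs to affect the routing decision (this is not a merged fix, just an illustration against `SwitchTransformersTop1Router._compute_router_probabilities`):
```python
# Hedged sketch: jitter a local copy used for routing instead of multiplying
# hidden_states in place, so the experts see the original activations.
if self.training and self.jitter_noise > 0:
    router_inputs = hidden_states * torch.empty_like(hidden_states).uniform_(
        1.0 - self.jitter_noise, 1.0 + self.jitter_noise
    )
else:
    router_inputs = hidden_states
# ...continue computing the router logits from router_inputs rather than hidden_states
```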
### Expected behavior
Ideally, no jitter noise should be present when passing the input to the experts, returning True and the mean difference as 0. | [
23,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Modeling",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35656 |
TITLE
AttributeError: 'MERTConfig' object has no attribute 'conv_pos_batch_norm'
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```
- `transformers` version: 4.48.0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
- Python version: 3.12.5
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes (but makes no difference on cpu)
- GPU type: NVIDIA GeForce RTX 3090
```
### Who can help?
Hello everyone, since updating from `4.47.1` to version `4.48.0` we are unable to load `MERT-v1-330M` on the latest revision.
This works perfectly fine on the previous version.
I haven't seen any mention of a breaking change regarding this, have we missed something or is this a legitimate bug?
```
config.model = tr.AutoModel.from_pretrained(config.model_id, trust_remote_code=True).to(device)
.venv/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py:559: in from_pretrained
return model_class.from_pretrained(
.venv/lib/python3.12/site-packages/transformers/modeling_utils.py:4090: in from_pretrained
model = cls(config, *model_args, **model_kwargs)
../../.cache/huggingface/modules/transformers_modules/m-a-p/MERT-v1-330M/0e06c986db0527c0fd1b47181c40f006805e3de0/modeling_MERT.py:92: in __init__
self.encoder = HubertEncoderStableLayerNorm(config)
.venv/lib/python3.12/site-packages/transformers/models/hubert/modeling_hubert.py:1023: in __init__
self.pos_conv_embed = HubertPositionalConvEmbedding(config)
.venv/lib/python3.12/site-packages/transformers/models/hubert/modeling_hubert.py:275: in __init__
if config.conv_pos_batch_norm:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MERTConfig {
"_attn_implementation_autoset": true,
"_name_or_path": "m-a-p/MERT-v1-330M",
"activation_dropout": ... "torch_dtype": "float32",
"transformers_version": "4.48.0",
"use_weighted_layer_sum": false,
"vocab_size": 32
}
, key = 'conv_pos_batch_norm'
def __getattribute__(self, key):
if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
key = super().__getattribute__("attribute_map")[key]
> return super().__getattribute__(key)
E AttributeError: 'MERTConfig' object has no attribute 'conv_pos_batch_norm'
.venv/lib/python3.12/site-packages/transformers/configuration_utils.py:211: AttributeError
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Import transformers (version 4.48.0) and load mert
```
import transformers as tr
m = tr.AutoModel.from_pretrained("m-a-p/MERT-v1-330M", trust_remote_code=True).to("cuda")
```
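A possible temporary workaround (an untested sketch on my side; setting `conv_pos_batch_norm = False` is an assumption that it matches the pre-4.48 behaviour) is to add the missing attribute to the config before loading:
```python
import transformers as tr

# Hedged workaround sketch: the remote MERTConfig predates the new Hubert option,
# so define the attribute the 4.48.0 Hubert code expects before building the model.
config = tr.AutoConfig.from_pretrained("m-a-p/MERT-v1-330M", trust_remote_code=True)
if not hasattr(config, "conv_pos_batch_norm"):
    config.conv_pos_batch_norm = False  # assumption: keep the old weight-norm path
m = tr.AutoModel.from_pretrained(
    "m-a-p/MERT-v1-330M", config=config, trust_remote_code=True
).to("cuda")
```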
### Expected behavior
I should be able to load the model | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34353 |
TITLE
Add Image Processor Fast Deformable DETR
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Adds a fast image processor for Deformable DETR. Follows issue https://github.com/huggingface/transformers/issues/33810.
This image processor is a result of [this work](https://www.notion.so/huggingface2/OptimVision-Optimize-preprocessing-time-10f1384ebcac8091a12debb87fe5f591) on comparing different image processing method.
The diff looks large, but this PR is almost exclusively made up of `# Copied from` statements based on the fast image processor for DETR!
## Implementation
See https://github.com/huggingface/transformers/pull/34063
## Usage
Except for the fact that it only returns torch tensors, this fast processor is fully compatible with the current one.
It can be instantiated through `AutoImageProcessor` with `use_fast=True`, or through the class directly:
```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr", use_fast=True)
```
```python
from transformers import DeformableDetrImageProcessorFast
processor = DeformableDetrImageProcessorFast.from_pretrained("SenseTime/deformable-detr")
```
Usage is the same as the current processor, except for the `device` kwarg:
```python
from transformers import DeformableDetrImageProcessorFast
from torchvision.io import read_image

images = read_image(image_path)  # image_path: path to a local image file
processor = DeformableDetrImageProcessorFast.from_pretrained("SenseTime/deformable-detr")
images_processed = processor(images, return_tensors="pt", device="cuda")
```
If `device` is not specified:
- If the input images are tensors, the processing will be done on the device of the images.
- If the inputs are PIL or Numpy images, the processing is done on CPU.
## Performance gains
- Average over 100 runs on the same 480x640 image. No padding needed, as "all" the images have the same size.

---
- Average over 10% of the COCO 2017 validation dataset, with `batch_size=8`. Forcing padding to 1333x1333 (="longest_edge"), as otherwise torch.compile needs to recompile if the different batches have different max sizes.

---
- Average over 10% of the COCO 2017 validation dataset, with `batch_size=1`. Forcing padding to 1333x1333.

---
## Tests
- The new image processor is tested on all the tests of the current processor.
- I have also added two consistency tests (panoptic and detection) for processing on GPU vs CPU.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Who can review?
@ArthurZucker Pinging you directly as there is almost no "new" code here.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
62,
39,
65
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision",
"optimization",
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/35529 |
TITLE
Trainer: update `state.num_input_tokens_seen` to use `num_items_in_batch`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Trainer has a state and inside the state there is a field called `num_input_tokens_seen`, which could be relevant for callbacks and other information.
The problem is that this field is updated using `numel()`, which means it counts the padding as well.
https://github.com/huggingface/transformers/blob/241c04d36867259cdf11dbb4e9d9a60f9cb65ebc/src/transformers/trainer.py#L2492-L2496
In a recent update the Trainer calculates `num_items_in_batch` so it can be used for the loss calculation; this count ignores padding and can be used to give a more accurate picture for `num_input_tokens_seen`.
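A rough sketch of what the change could look like inside the Trainer's inner training loop (the surrounding variable names, `num_items_in_batch` and `main_input_name`, are my assumptions about that scope, not a tested patch):
```python
# Hedged sketch: prefer the padding-free count when it was already computed for the loss.
if self.args.include_num_input_tokens_seen:
    main_input_name = getattr(self.model, "main_input_name", "input_ids")
    if num_items_in_batch is not None:
        tokens = torch.as_tensor(num_items_in_batch, dtype=torch.int64, device=self.args.device)
    else:
        tokens = torch.tensor(inputs[main_input_name].numel(), dtype=torch.int64, device=self.args.device)
    self.state.num_input_tokens_seen += self.accelerator.gather(tokens).sum().item()
```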
### Motivation
`num_input_tokens_seen` is important information when you train a model. Having a more accurate picture of it will help people understand their training. It can also be used for callbacks (e.g. stop training after X tokens).
This will also eliminate the double calculation of the token count (we are already calculating it once), and eliminate the discrepancy between `num_input_tokens_seen` and `num_items_in_batch`.
### Your contribution
I may be able to do a PR for this.
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35032 |
TITLE
Unexpected output of _flash_attention_forward() for cross attention
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
My environment:
> transformers 4.44.1
> flash-attn 2.6.3
### Who can help?
_No response_
### Information
- [ ] My own modified scripts
### Tasks
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import torch
from transformers.modeling_flash_attention_utils import _flash_attention_forward
# shapes are (batch_size, seq_len, num_heads, head_dim)
query = torch.randn(4, 20, 1, 32).cuda().half()
key = torch.randn(4, 10, 1, 32).cuda().half()
value = key

unmasked = _flash_attention_forward(
    query, key, value,
    attention_mask=None,
    query_length=20,
    is_causal=False,
)
masked = _flash_attention_forward(
    query, key, value,
    attention_mask=torch.ones((4, 10)).cuda().bool(),  # all ones = no padding
    query_length=20,
    is_causal=False,
)
breakpoint()
```
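As a sanity check (my suggestion, not part of the original report), both outputs above can be compared against a plain PyTorch SDPA reference to see which path is consistent:
```python
import torch.nn.functional as F

# Hedged sketch: SDPA expects (batch, num_heads, seq_len, head_dim), so transpose
# the (batch, seq_len, num_heads, head_dim) tensors used above.
ref = F.scaled_dot_product_attention(
    query.transpose(1, 2), key.transpose(1, 2), value.transpose(1, 2), is_causal=False
).transpose(1, 2)
print(torch.allclose(ref, unmasked, atol=1e-3), torch.allclose(ref, masked, atol=1e-3))
```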
### Expected behavior
In my understanding, the attention_mask has a size of `(batch_size, seq_len)` where 1 stands for the position of non-padding tokens, so an all-one mask should lead to the same result as no mask provided. However, the outputs are significantly different.
```
(Pdb) print(unmasked, masked)
tensor([[[[ 0.4114, -0.3369, 0.6221, ..., -0.4475, -0.2361, 0.3022]],
[[ 0.2480, -0.1396, 0.1614, ..., -0.0728, -0.2788, 0.3950]],
[[ 0.3828, -0.1323, 0.2101, ..., -0.4751, -0.0179, 0.3181]],
...,
[[ 0.2654, -0.3137, 0.1637, ..., 0.3464, -0.6318, 0.4377]],
[[ 0.5464, -0.2251, 0.4897, ..., -0.3184, -0.1769, 0.3203]],
[[-0.1514, 0.3037, -0.1609, ..., -0.4651, -0.1842, 0.3386]]],
[[[ 0.1772, 0.3240, -1.1143, ..., 0.1444, 0.5684, 0.3770]],
[[ 0.4187, 0.2264, 0.2446, ..., 0.7036, 0.3003, 0.2981]],
[[ 0.1241, 0.1919, -0.5239, ..., -0.1606, 0.5210, -0.1896]],
...,
[[ 0.5225, -0.2333, 0.1004, ..., 0.0297, 1.0059, -0.1329]],
[[ 0.4304, 0.4819, 0.1232, ..., 0.5234, 0.5210, 0.2379]],
[[ 0.5361, -0.0976, -0.3975, ..., 0.2217, 0.8481, 0.0780]]],
[[[-0.6426, -0.1761, 0.3420, ..., 0.4404, 0.5273, 0.0485]],
[[-0.2313, 0.5249, 0.8975, ..., 0.2517, 0.2163, 0.3628]],
[[-0.9180, -0.7173, -0.3291, ..., 0.0781, 1.0693, -0.5142]],
...,
[[-0.8945, -0.1444, -0.0460, ..., 0.2571, 0.8721, -0.0226]],
[[-0.6978, -0.7417, 0.2061, ..., 0.2173, 0.2798, -0.2246]],
[[-0.3818, -0.7246, 0.7720, ..., -0.3567, 0.0623, -0.0179]]],
[[[-0.5347, -0.6885, -1.3604, ..., -1.3672, -1.1768, -1.2275]],
[[-0.2400, -0.5176, -0.4875, ..., -0.2822, 0.1527, -0.0917]],
[[-0.1940, -0.1766, -0.8022, ..., -0.3743, -0.2607, 0.1602]],
...,
[[-0.2346, -0.4260, -0.2166, ..., 0.1776, -0.2793, -0.8052]],
[[-0.3430, -0.9839, 0.3735, ..., 0.3267, 0.4268, -0.5464]],
[[-0.3293, -0.0431, -1.1631, ..., -0.5742, -1.3242, -1.2441]]]],
device='cuda:0', dtype=torch.float16) tensor([[[[ 0.4114, -0.3369, 0.6221, ..., -0.4475, -0.2361, 0.3022]],
[[ 0.2480, -0.1396, 0.1614, ..., -0.0728, -0.2788, 0.3950]],
[[ 0.3828, -0.1323, 0.2101, ..., -0.4751, -0.0179, 0.3181]],
...,
[[-0.0500, -0.0776, 0.3552, ..., 0.6475, 0.1764, -0.2125]],
[[ 0.7197, 0.4253, 0.0373, ..., 0.7168, 0.5254, 0.2496]],
[[ 0.0498, 0.1896, -0.1191, ..., 0.1239, 0.5039, -0.0274]]],
[[[-0.6572, -0.5571, -0.0978, ..., 0.2294, 0.8623, -0.4048]],
[[-0.5635, -0.4026, 0.1295, ..., 0.1743, 0.6333, -0.0356]],
[[-0.8359, -0.7275, 0.3054, ..., -0.2150, 0.5693, -0.5825]],
...,
[[-0.9004, -0.7935, 0.1372, ..., -0.1024, 0.6543, 0.3892]],
[[-0.1636, 0.1940, -0.9355, ..., -0.2068, -0.7847, -0.1024]],
[[-0.1720, -0.6123, -0.6470, ..., -0.3550, 0.4495, -0.0429]]],
[[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
...,
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]],
[[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
...,
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]]],
device='cuda:0', dtype=torch.float16)
```
Is this due to a bug? Or do I just have some misunderstanding about this function?
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34611 |
TITLE
bugs in flax_dinov2 when batch size is not 1
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
4.46
### Who can help?
Flax: @sanchit-gandhi @MHRDYN7
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My code:
```python
from transformers import AutoImageProcessor, FlaxDinov2Model
from PIL import Image
import requests
import jax
import jax.numpy as jnp
import random

model = FlaxDinov2Model.from_pretrained("dinov2_base")
key = jax.random.PRNGKey(0)
inputs_np = {'pixel_values': jax.random.normal(key, (2, 3, 224, 224))}
outputs = model(**inputs_np)
last_hidden_states = outputs.last_hidden_state
```
When the batch size is not 1, the error is:
<img width="1032" alt="image" src="https://github.com/user-attachments/assets/5c0ad809-cabc-4963-96b8-f69cb57567a0">
I reviewed the source code and found that in the function `FlaxDinov2Embeddings.interpolate_pos_encoding` within the file `transformers/models/dinov2/modeling_flax_dinov2.py`, the shape of `class_pos_embed` does not match the shape of `patch_pos_embed`.
Specifically, the first dimension of `class_pos_embed` is 1, while the first dimension of `patch_pos_embed` corresponds to the batch size, so they cannot be concatenated.
<img width="865" alt="image" src="https://github.com/user-attachments/assets/42b49046-9925-4db7-81ff-429350dd2b7a">
I would like to know how I can fix this bug and use dinov2 with a dataset batch size greater than 1.
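Until this is fixed upstream, a possible patch, sketched under the assumption that the mismatch comes only from the concatenation step in `interpolate_pos_encoding`, is to broadcast the class position embedding to the batch size before concatenating:
```python
# Hedged sketch, not an official fix: make the leading (batch) dimensions match
# before the concatenation inside interpolate_pos_encoding.
class_pos_embed = jnp.broadcast_to(
    class_pos_embed, (patch_pos_embed.shape[0],) + class_pos_embed.shape[1:]
)
return jnp.concatenate((class_pos_embed, patch_pos_embed), axis=1)
```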
### Expected behavior
I would like to know how I can fix this bug and use dinov2 with a dataset batch size greater than 1. | [
55,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Flax",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33336 |
TITLE
Training Resumes with Increased Loss Despite Checkpoint Loading
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Problem: When resuming the training of a BERT model with the Hugging Face Trainer from a checkpoint, the loss value increases again in the second run, even though the checkpoint is loaded correctly and the global_step, optimizer state, and scheduler state are restored.
**Troubleshooting Steps Taken:**
Manually Setting global_step:
Set the global_step manually in the Trainer after loading the checkpoint.
Result: Problem not resolved.
Overriding the train() method:
Created a new class MyTrainer inheriting from Trainer and overrode the train() method to set the global_step when loading a checkpoint.
Result: Problem not resolved.
Removing resume_from_checkpoint:
Removed the resume_from_checkpoint argument from the trainer.train() call and manually loaded the global_step, optimizer state, and scheduler state.
Result: Problem not resolved.
Resetting/Deactivating the Learning Rate Scheduler:
Reinitialized the scheduler after loading the checkpoint, skipped the step() call, or completely deactivated the scheduler.
Result: Problem not resolved. The learning rate is still set to 0.0.
Manually Setting the Learning Rate:
Manually set the learning rate of the parameter groups in the optimizer after loading the checkpoint.
Result: Problem not resolved. The scheduler resets the learning rate back to 0.0.
Explicitly Calculating num_training_steps:
Explicitly calculated the number of training steps and stored it in the num_training_steps variable, using it when initializing the scheduler.
Result: Problem not resolved.
Manually Moving trainer_state.json into Checkpoint Subfolder:
Moved the trainer_state.json file after trainer.save_state() using shutil.move() into the checkpoint-XXXX subfolder.
Result: Problem not resolved.
Manually Setting TrainerState Attributes:
Instead of overwriting the entire trainer.state variable, the global_step and epoch attributes were set individually after loading the checkpoint.
Result: Problem not resolved.
**Key Observations:**
The error only occurs when resuming training from a checkpoint. The first run works flawlessly.
The training data, environment, and hardware remain identical between runs.
The global_step is loaded correctly and set in the Trainer.
The optimizer and scheduler states are loaded correctly.
The TrainerState is loaded correctly.
Manually setting the learning rate has no effect, the scheduler resets it.
**Suspicions:**
The Trainer might internally reset the global_step or the optimizer state after the checkpoint is loaded.
There might be a bug in the Trainer class that prevents the correct restoration of the training state.
**Question for Hugging Face Transformers:**
Why does the loss value increase when resuming training from a checkpoint in the second run, even though the checkpoint is loaded correctly? Are there any known issues with the Trainer regarding the restoration of the training state, especially the learning rate and scheduler? We have tried numerous troubleshooting steps (listed above) without success. We suspect a potential bug within the Trainer itself. Could you provide guidance or insights on how to resolve this issue?
**Additional Information:**
Model: BertForMaskedLM
Trainer: transformers.Trainer
Scheduler: The default scheduler of the Trainer (linear warmup scheduler with subsequent linear decay)
Optimizer: AdamW
Dataset: A custom text dataset
Code: The relevant code is provided above.
Logs: Detailed logs of the training process can be provided if needed.
**Goal:**
We want to be able to resume training from checkpoints without the loss value increasing again and the training having to start from scratch. We hope that the Hugging Face Transformers team can help us solve this problem.
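For comparison, the resumption path the Trainer already implements restores the optimizer, scheduler, RNG state, and `TrainerState` itself from the checkpoint folder. A minimal sketch, reusing the objects built in the script below and assuming the checkpoint was written by the same `Trainer`:
```python
# Hedged sketch: rely on the built-in resume logic instead of restoring state manually.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    data_collator=data_collator,
)
# Pass the checkpoint folder explicitly, or True to pick the latest one in output_dir.
trainer.train(resume_from_checkpoint=checkpoint_path)
```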
**File:**
```
import os
import shutil
import torch
import warnings
import logging
import json
from transformers import BertForMaskedLM, BertTokenizer, TrainingArguments, Trainer, DataCollatorForLanguageModeling
from datasets import load_dataset
from config import load_config
from datetime import datetime
# Suppress specific FutureWarnings
warnings.filterwarnings("ignore", category=FutureWarning, module="accelerate")
# Format the logging output
logging.basicConfig(filename='training.log', level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s')
class MyTrainer(Trainer):
def train(self, **kwargs):
"""
Overridden train() method that adds additional logging information.
"""
# Log the global_step and epoch before training
logging.info(f"Global Step before training: {self.state.global_step}")
logging.info(f"Epoch before training: {self.state.epoch}")
# Call the train() method of the superclass
super().train(**kwargs)
# Log the global_step and epoch after training
logging.info(f"Global Step after training: {self.state.global_step}")
logging.info(f"Epoch after training: {self.state.epoch}")
def _inner_training_loop(
self, batch_size=None, args=None, resume_from_checkpoint=None, trial=None, ignore_keys_for_eval=None
):
"""
Override _inner_training_loop() to log the global_step, epoch,
and learning rate after each step.
"""
output = super()._inner_training_loop(batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
# Log the global_step, epoch, and learning rate after each training step
logging.debug(f"Global Step after step: {self.state.global_step}")
logging.debug(f"Epoch after step: {self.state.epoch}")
logging.debug(f"Learning rate after step: {self.lr_scheduler.get_last_lr()}")
return output
def train_with_books(config_params, run_id):
"""
Function to train BERT with the books in the books folder.
"""
model_name = "bert-base-german-cased"
model_path = config_params["bert_model_path"]
trained_model_path = config_params["trained_model_path"]
checkpoint_path = config_params["checkpoint_path"]
log_path = os.path.join(config_params["logs_path"], f"run-{run_id}")
global_step_file = os.path.join(checkpoint_path, "global_step.txt")
# Initialize logging and output configuration
logging.info(f"Model Path: {model_path}")
logging.info(f"Trained Model Path: {trained_model_path}")
logging.info(f"Checkpoint Path: {checkpoint_path}")
logging.info(f"Log Path: {log_path}")
trainer_state = None # Define trainer_state before the if-block
optimizer_state = None
scheduler_state = None
global_step = 0
epoch = 0
# 1. Check if a checkpoint exists
if os.path.exists(checkpoint_path) and any(fname.startswith("checkpoint") for fname in os.listdir(checkpoint_path)):
latest_checkpoint = max([f for f in os.listdir(checkpoint_path) if f.startswith("checkpoint")], key=lambda x: int(x.split('-')[-1]))
checkpoint_path = os.path.join(checkpoint_path, latest_checkpoint)
logging.info(f"Loading checkpoint from {checkpoint_path}...")
# Load the optimizer and scheduler state from the checkpoint
optimizer_path = os.path.join(checkpoint_path, "optimizer.pt")
scheduler_path = os.path.join(checkpoint_path, "scheduler.pt")
if os.path.exists(optimizer_path) and os.path.exists(scheduler_path):
optimizer_state = torch.load(optimizer_path)
scheduler_state = torch.load(scheduler_path)
logging.info(f"Optimizer and scheduler state loaded from checkpoint: {optimizer_path}, {scheduler_path}")
else:
logging.warning(f"Optimizer and/or scheduler state not found in checkpoint.")
# Load the model from the checkpoint
model = BertForMaskedLM.from_pretrained(checkpoint_path)
# Load global step from the checkpoint
with open(global_step_file, "r") as f:
global_step = int(f.read())
logging.info(f"Global step loaded from {global_step_file}: {global_step}")
# Load epoch from the trainer state in the checkpoint (if available)
trainer_state_path = os.path.join(checkpoint_path, "trainer_state.json")
if os.path.exists(trainer_state_path):
with open(trainer_state_path, 'r') as f:
trainer_state = json.load(f)
epoch = trainer_state.get("epoch", 0)
logging.info(f"Epoch loaded from trainer state: {epoch}")
else:
logging.warning(f"Trainer state not found in checkpoint.")
# 2. Check if a trained model exists
elif os.path.exists(trained_model_path):
logging.info(f"Loading trained model from {trained_model_path}...")
model = BertForMaskedLM.from_pretrained(trained_model_path, local_files_only=True, ignore_mismatched_sizes=True)
# 3. Check if the original model exists
elif os.path.exists(model_path):
logging.info(f"Loading original model from {model_path}...")
model = BertForMaskedLM.from_pretrained(model_path, local_files_only=True, ignore_mismatched_sizes=True)
# 4. Download the model from Hugging Face
else:
logging.info(f"Downloading model {model_name} from Hugging Face...")
model = BertForMaskedLM.from_pretrained(model_name, cache_dir=None, ignore_mismatched_sizes=True)
model.save_pretrained(model_path)
logging.info(f"Model saved to {model_path}.")
tokenizer = BertTokenizer.from_pretrained(model_name, cache_dir=model_path)
# Load books dataset
books_dataset = load_dataset('text', data_files=f"{config_params['books_dataset_path']}/*.txt")
def tokenize_function(examples):
return tokenizer(
examples["text"],
padding="max_length",
truncation=True,
max_length=128
)
# Tokenize the dataset
tokenized_datasets = books_dataset.map(
tokenize_function,
batched=True,
num_proc=4,
remove_columns=["text"]
)
# Training Arguments
training_args = TrainingArguments(
output_dir=checkpoint_path,
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=192,
save_steps=10_000,
save_total_limit=2,
fp16=True,
gradient_accumulation_steps=2,
logging_dir=log_path,
logging_steps=20,
report_to="tensorboard",
save_strategy="steps",
save_safetensors=False,
dataloader_num_workers=4
)
# Data Collator for Masked Language Modeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
# Define the trainer
trainer = MyTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
data_collator=data_collator
)
# Set global step in the trainer
trainer.state.global_step = global_step
# Set epoch in the trainer
trainer.state.epoch = epoch
# Initialize optimizer and scheduler
trainer.create_optimizer_and_scheduler(num_training_steps=trainer.state.max_steps)
logging.info(f"Optimizer and scheduler initialized.")
# Load optimizer and scheduler state after creating the trainer
if optimizer_state is not None and scheduler_state is not None:
trainer.optimizer.load_state_dict(optimizer_state)
trainer.lr_scheduler.load_state_dict(scheduler_state)
logging.info(f"Optimizer and scheduler state loaded")
# Manually set learning rate
for i, param_group in enumerate(trainer.optimizer.param_groups):
param_group['lr'] = 5e-5
logging.info(f"Learning rate for parameter group {i} manually set: {param_group['lr']}")
# Set TrainerState attributes (only if loaded from checkpoint)
if trainer_state is not None:
trainer.state.global_step = trainer_state.get("global_step", 0)
trainer.state.epoch = trainer_state.get("epoch", 0)
logging.info(f"TrainerState attributes set in the trainer.")
# Debug information before training
logging.info(f"Trainer state before training: {trainer.state}")
logging.info(f"Optimizer: {trainer.optimizer}")
logging.info(f"LR Scheduler: {trainer.lr_scheduler}")
# Start training
logging.info("Starting training ...")
trainer.train()
# Save global step after training
with open(global_step_file, "w") as f:
f.write(str(trainer.state.global_step))
logging.info(f"Global step after training saved to {global_step_file}: {trainer.state.global_step}")
# Debug information after training
logging.info(f"Trainer state after training: {trainer.state}")
# Save the final model
trainer.save_model(trained_model_path)
logging.info(f"Final model saved to {trained_model_path}.")
# Save the current checkpoint in a subfolder
checkpoint_save_path = os.path.join(checkpoint_path, f"checkpoint-{trainer.state.global_step}")
os.makedirs(checkpoint_save_path, exist_ok=True)
trainer.save_model(checkpoint_save_path)
trainer.save_state()
# Move trainer_state.json to the subfolder
trainer_state_source = os.path.join(checkpoint_path, "trainer_state.json")
trainer_state_dest = os.path.join(checkpoint_save_path, "trainer_state.json")
if os.path.exists(trainer_state_source):
shutil.move(trainer_state_source, trainer_state_dest)
logging.info(f"TrainerState saved to {trainer_state_dest}.")
# Save optimizer and scheduler state in the checkpoint
torch.save(trainer.optimizer.state_dict(), os.path.join(checkpoint_save_path, "optimizer.pt"))
torch.save(trainer.lr_scheduler.state_dict(), os.path.join(checkpoint_save_path, "scheduler.pt"))
logging.info(f"Optimizer and scheduler state saved to {checkpoint_save_path}.")
# Output relevant variables
logging.info(f"trainer.state.global_step: {trainer.state.global_step}")
checkpoint_files = [f for f in os.listdir(checkpoint_save_path)]
logging.info(f"Checkpoint files in {checkpoint_save_path}: {checkpoint_files}")
```
**Terminal Output:**
First run (training from scratch):
{‘loss’: 2.8581, ‘grad_norm’: 2.897771120071411, ‘learning_rate’: 4.682539682539683e-05, ‘epoch’: 0.19}
{‘loss’: 2.1545, ‘grad_norm’: 2.8640339374542236, ‘learning_rate’: 4.3650793650793655e-05, ‘epoch’: 0.38}
{‘loss’: 2.0396, ‘grad_norm’: 2.7819628715515137, ‘learning_rate’: 4.047619047619048e-05, ‘epoch’: 0.57}
{‘loss’: 1.9786, ‘grad_norm’: 2.644606828689575, ‘learning_rate’: 3.730158730158731e-05, ‘epoch’: 0.76}
{‘loss’: 1.9553, ‘grad_norm’: 2.7417070865631104, ‘learning_rate’: 3.412698412698413e-05, ‘epoch’: 0.95}
{‘loss’: 1.8961, ‘grad_norm’: 2.6237854957580566, ‘learning_rate’: 3.095238095238095e-05, ‘epoch’: 1.14}
{‘loss’: 1.8793, ‘grad_norm’: 2.5830185413360596, ‘learning_rate’: 2.777777777777778e-05, ‘epoch’: 1.33}
{‘loss’: 1.8715, ‘grad_norm’: 2.652275800704956, ‘learning_rate’: 2.4603174603174602e-05, ‘epoch’: 1.52}
{‘loss’: 1.8362, ‘grad_norm’: 2.6065754890441895, ‘learning_rate’: 2.1428571428571428e-05, ‘epoch’: 1.71}
{‘loss’: 1.8474, ‘grad_norm’: 2.6352243423461914, ‘learning_rate’: 1.8253968253968254e-05, ‘epoch’: 1.9}
{‘loss’: 1.8197, ‘grad_norm’: 2.56719708442688, ‘learning_rate’: 1.5079365079365079e-05, ‘epoch’: 2.09}
{‘loss’: 1.826, ‘grad_norm’: 2.5195322036743164, ‘learning_rate’: 1.1904761904761905e-05, ‘epoch’: 2.27}
{‘loss’: 1.8074, ‘grad_norm’: 2.614032506942749, ‘learning_rate’: 8.73015873015873e-06, ‘epoch’: 2.46}
{‘loss’: 1.8029, ‘grad_norm’: 2.600111246109009, ‘learning_rate’: 5.555555555555556e-06, ‘epoch’: 2.65}
{‘loss’: 1.7879, ‘grad_norm’: 2.4874589443206787, ‘learning_rate’: 2.3809523809523808e-06, ‘epoch’: 2.84}
{‘train_runtime’: 142.9413, ‘train_samples_per_second’: 846.431, ‘train_steps_per_second’: 2.204, ‘train_loss’: 1.9500562516469804, ‘epoch’: 2.99}
Second run (resumed from the checkpoint):
{‘loss’: 1.453, ‘grad_norm’: 2.474428415298462, ‘learning_rate’: 4.682539682539683e-05, ‘epoch’: 0.19}
{‘loss’: 1.406, ‘grad_norm’: 2.5064773559570312, ‘learning_rate’: 4.3650793650793655e-05, ‘epoch’: 0.38}
{‘loss’: 1.4152, ‘grad_norm’: 2.5167486667633057, ‘learning_rate’: 4.047619047619048e-05, ‘epoch’: 0.57}
{‘loss’: 1.426, ‘grad_norm’: 2.4449574947357178, ‘learning_rate’: 3.730158730158731e-05, ‘epoch’: 0.76}
{‘loss’: 1.4592, ‘grad_norm’: 2.5427122116088867, ‘learning_rate’: 3.412698412698413e-05, ‘epoch’: 0.95}
{‘loss’: 1.4357, ‘grad_norm’: 2.4496681690216064, ‘learning_rate’: 3.095238095238095e-05, ‘epoch’: 1.14}
{‘loss’: 1.4684, ‘grad_norm’: 2.4780757427215576, ‘learning_rate’: 2.777777777777778e-05, ‘epoch’: 1.33}
{‘loss’: 1.5027, ‘grad_norm’: 2.5224385261535645, ‘learning_rate’: 2.4603174603174602e-05, ‘epoch’: 1.52}
{‘loss’: 1.5133, ‘grad_norm’: 2.5421390533447266, ‘learning_rate’: 2.1428571428571428e-05, ‘epoch’: 1.71}
{‘loss’: 1.5651, ‘grad_norm’: 2.5934836864471436, ‘learning_rate’: 1.8253968253968254e-05, ‘epoch’: 1.9}
{‘loss’: 1.562, ‘grad_norm’: 2.5455050468444824, ‘learning_rate’: 1.5079365079365079e-05, ‘epoch’: 2.09}
{‘loss’: 1.6139, ‘grad_norm’: 2.580508232116699, ‘learning_rate’: 1.1904761904761905e-05, ‘epoch’: 2.27}
{‘loss’: 1.631, ‘grad_norm’: 2.7025833129882812, ‘learning_rate’: 8.73015873015873e-06, ‘epoch’: 2.46}
{‘loss’: 1.6631, ‘grad_norm’: 2.669140338897705, ‘learning_rate’: 5.555555555555556e-06, ‘epoch’: 2.65}
{‘loss’: 1.6425, ‘grad_norm’: 2.4610960483551025, ‘learning_rate’: 2.3809523809523808e-06, ‘epoch’: 2.84}
{‘train_runtime’: 157.4714, ‘train_samples_per_second’: 768.33, ‘train_steps_per_second’: 2.0, ‘train_loss’: 1.5153770507328095, ‘epoch’: 2.99}
**Logfile Output:**
```
2024-09-05 17:16:34,568 - INFO - Model Path: models/bert_model
2024-09-05 17:16:34,568 - INFO - Trained Model Path: models/trained_bert
2024-09-05 17:16:34,568 - INFO - Checkpoint Path: training/checkpoints
2024-09-05 17:16:34,569 - INFO - Log Path: logs/run-20240905_171634
2024-09-05 17:16:34,569 - INFO - Loading original model from models/bert_model...
2024-09-05 17:16:36,409 - INFO - Optimizer and scheduler initialized.
2024-09-05 17:16:36,409 - INFO - Trainer state before training: TrainerState(epoch=0, global_step=0, max_steps=0, logging_steps=500, eval_steps=500, save_steps=500, train_batch_size=None, num_train_epochs=0, num_input_tokens_seen=0, total_flos=0, log_history=[], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None, stateful_callbacks={'TrainerControl': {'args': {'should_training_stop': False, 'should_epoch_stop': False, 'should_save': False, 'should_evaluate': False, 'should_log': False}, 'attributes': {}}})
2024-09-05 17:16:36,409 - INFO - Optimizer: AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 5e-05
lr: 0.0
maximize: False
weight_decay: 0.0
Parameter Group 1
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 5e-05
lr: 0.0
maximize: False
weight_decay: 0.0
)
2024-09-05 17:16:36,409 - INFO - LR Scheduler: <torch.optim.lr_scheduler.LambdaLR object at 0x7fc8adbb95e0>
2024-09-05 17:16:36,409 - INFO - Starting training ...
2024-09-05 17:16:36,409 - INFO - Global Step before training: 0
2024-09-05 17:16:36,409 - INFO - Epoch before training: 0
2024-09-05 17:18:59,573 - INFO - Global Step after training: 315
2024-09-05 17:18:59,573 - INFO - Epoch after training: 2.985781990521327
2024-09-05 17:18:59,573 - INFO - Global step after training saved to training/checkpoints/global_step.txt: 315
2024-09-05 17:18:59,573 - INFO - Trainer state after training: TrainerState(epoch=2.985781990521327, global_step=315, max_steps=315, logging_steps=20, eval_steps=500, save_steps=10000, train_batch_size=192, num_train_epochs=3, num_input_tokens_seen=0, total_flos=7935313554653184.0, log_history=[{'loss': 2.8581, 'grad_norm': 2.897771120071411, 'learning_rate': 4.682539682539683e-05, 'epoch': 0.1895734597156398, 'step': 20}, {'loss': 2.1545, 'grad_norm': 2.8640339374542236, 'learning_rate': 4.3650793650793655e-05, 'epoch': 0.3791469194312796, 'step': 40}, {'loss': 2.0396, 'grad_norm': 2.7819628715515137, 'learning_rate': 4.047619047619048e-05, 'epoch': 0.5687203791469194, 'step': 60}, {'loss': 1.9786, 'grad_norm': 2.644606828689575, 'learning_rate': 3.730158730158731e-05, 'epoch': 0.7582938388625592, 'step': 80}, {'loss': 1.9553, 'grad_norm': 2.7417070865631104, 'learning_rate': 3.412698412698413e-05, 'epoch': 0.9478672985781991, 'step': 100}, {'loss': 1.8961, 'grad_norm': 2.6237854957580566, 'learning_rate': 3.095238095238095e-05, 'epoch': 1.1374407582938388, 'step': 120}, {'loss': 1.8793, 'grad_norm': 2.5830185413360596, 'learning_rate': 2.777777777777778e-05, 'epoch': 1.3270142180094786, 'step': 140}, {'loss': 1.8715, 'grad_norm': 2.652275800704956, 'learning_rate': 2.4603174603174602e-05, 'epoch': 1.5165876777251186, 'step': 160}, {'loss': 1.8362, 'grad_norm': 2.6065754890441895, 'learning_rate': 2.1428571428571428e-05, 'epoch': 1.7061611374407581, 'step': 180}, {'loss': 1.8474, 'grad_norm': 2.6352243423461914, 'learning_rate': 1.8253968253968254e-05, 'epoch': 1.8957345971563981, 'step': 200}, {'loss': 1.8197, 'grad_norm': 2.56719708442688, 'learning_rate': 1.5079365079365079e-05, 'epoch': 2.085308056872038, 'step': 220}, {'loss': 1.826, 'grad_norm': 2.5195322036743164, 'learning_rate': 1.1904761904761905e-05, 'epoch': 2.2748815165876777, 'step': 240}, {'loss': 1.8074, 'grad_norm': 2.614032506942749, 'learning_rate': 8.73015873015873e-06, 'epoch': 2.4644549763033177, 'step': 260}, {'loss': 1.8029, 'grad_norm': 2.600111246109009, 'learning_rate': 5.555555555555556e-06, 'epoch': 2.654028436018957, 'step': 280}, {'loss': 1.7879, 'grad_norm': 2.4874589443206787, 'learning_rate': 2.3809523809523808e-06, 'epoch': 2.843601895734597, 'step': 300}, {'train_runtime': 142.9413, 'train_samples_per_second': 846.431, 'train_steps_per_second': 2.204, 'total_flos': 7935313554653184.0, 'train_loss': 1.9500562516469804, 'epoch': 2.985781990521327, 'step': 315}], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None, stateful_callbacks={'TrainerControl': {'args': {'should_training_stop': True, 'should_epoch_stop': False, 'should_save': True, 'should_evaluate': False, 'should_log': False}, 'attributes': {}}})
2024-09-05 17:18:59,836 - INFO - Final model saved to models/trained_bert.
2024-09-05 17:19:00,123 - INFO - TrainerState saved to training/checkpoints/checkpoint-315/trainer_state.json.
2024-09-05 17:19:00,734 - INFO - Optimizer and scheduler state saved to training/checkpoints/checkpoint-315.
2024-09-05 17:19:00,734 - INFO - trainer.state.global_step: 315
2024-09-05 17:19:00,734 - INFO - Checkpoint files in training/checkpoints/checkpoint-315: ['config.json', 'trainer_state.json', 'pytorch_model.bin', 'rng_state.pth', 'scheduler.pt', 'training_args.bin', 'optimizer.pt', 'generation_config.json']
2024-09-05 17:19:30,385 - INFO - Model Path: models/bert_model
2024-09-05 17:19:30,385 - INFO - Trained Model Path: models/trained_bert
2024-09-05 17:19:30,385 - INFO - Checkpoint Path: training/checkpoints
2024-09-05 17:19:30,385 - INFO - Log Path: logs/run-20240905_171930
2024-09-05 17:19:30,385 - INFO - Loading checkpoint from training/checkpoints/checkpoint-315...
2024-09-05 17:19:30,856 - INFO - Optimizer and scheduler state loaded from checkpoint: training/checkpoints/checkpoint-315/optimizer.pt, training/checkpoints/checkpoint-315/scheduler.pt
2024-09-05 17:19:30,883 - INFO - Global step loaded from training/checkpoints/global_step.txt: 315
2024-09-05 17:19:30,883 - INFO - Epoch loaded from trainer state: 2.985781990521327
2024-09-05 17:19:31,694 - INFO - Optimizer and scheduler initialized.
2024-09-05 17:19:31,695 - INFO - Optimizer and scheduler state loaded
2024-09-05 17:19:31,695 - INFO - Learning rate for parameter group 0 manually set: 5e-05
2024-09-05 17:19:31,695 - INFO - Learning rate for parameter group 1 manually set: 5e-05
2024-09-05 17:19:31,695 - INFO - TrainerState attributes set in the trainer.
2024-09-05 17:19:31,696 - INFO - Trainer state before training: TrainerState(epoch=2.985781990521327, global_step=315, max_steps=0, logging_steps=500, eval_steps=500, save_steps=500, train_batch_size=None, num_train_epochs=0, num_input_tokens_seen=0, total_flos=0, log_history=[], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None, stateful_callbacks={'TrainerControl': {'args': {'should_training_stop': False, 'should_epoch_stop': False, 'should_save': False, 'should_evaluate': False, 'should_log': False}, 'attributes': {}}})
2024-09-05 17:19:31,696 - INFO - Optimizer: AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 5e-05
lr: 5e-05
maximize: False
weight_decay: 0.0
Parameter Group 1
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 5e-05
lr: 5e-05
maximize: False
weight_decay: 0.0
)
2024-09-05 17:19:31,696 - INFO - LR Scheduler: <torch.optim.lr_scheduler.LambdaLR object at 0x7fc958bd9ac0>
2024-09-05 17:19:31,696 - INFO - Starting training ...
2024-09-05 17:19:31,696 - INFO - Global Step before training: 315
2024-09-05 17:19:31,696 - INFO - Epoch before training: 2.985781990521327
2024-09-05 17:22:09,404 - INFO - Global Step after training: 315
2024-09-05 17:22:09,404 - INFO - Epoch after training: 2.985781990521327
2024-09-05 17:22:09,404 - INFO - Global step after training saved to training/checkpoints/global_step.txt: 315
2024-09-05 17:22:09,404 - INFO - Trainer state after training: TrainerState(epoch=2.985781990521327, global_step=315, max_steps=315, logging_steps=20, eval_steps=500, save_steps=10000, train_batch_size=192, num_train_epochs=3, num_input_tokens_seen=0, total_flos=7935313554653184.0, log_history=[{'loss': 1.453, 'grad_norm': 2.474428415298462, 'learning_rate': 4.682539682539683e-05, 'epoch': 0.1895734597156398, 'step': 20}, {'loss': 1.406, 'grad_norm': 2.5064773559570312, 'learning_rate': 4.3650793650793655e-05, 'epoch': 0.3791469194312796, 'step': 40}, {'loss': 1.4152, 'grad_norm': 2.5167486667633057, 'learning_rate': 4.047619047619048e-05, 'epoch': 0.5687203791469194, 'step': 60}, {'loss': 1.426, 'grad_norm': 2.4449574947357178, 'learning_rate': 3.730158730158731e-05, 'epoch': 0.7582938388625592, 'step': 80}, {'loss': 1.4592, 'grad_norm': 2.5427122116088867, 'learning_rate': 3.412698412698413e-05, 'epoch': 0.9478672985781991, 'step': 100}, {'loss': 1.4357, 'grad_norm': 2.4496681690216064, 'learning_rate': 3.095238095238095e-05, 'epoch': 1.1374407582938388, 'step': 120}, {'loss': 1.4684, 'grad_norm': 2.4780757427215576, 'learning_rate': 2.777777777777778e-05, 'epoch': 1.3270142180094786, 'step': 140}, {'loss': 1.5027, 'grad_norm': 2.5224385261535645, 'learning_rate': 2.4603174603174602e-05, 'epoch': 1.5165876777251186, 'step': 160}, {'loss': 1.5133, 'grad_norm': 2.5421390533447266, 'learning_rate': 2.1428571428571428e-05, 'epoch': 1.7061611374407581, 'step': 180}, {'loss': 1.5651, 'grad_norm': 2.5934836864471436, 'learning_rate': 1.8253968253968254e-05, 'epoch': 1.8957345971563981, 'step': 200}, {'loss': 1.562, 'grad_norm': 2.5455050468444824, 'learning_rate': 1.5079365079365079e-05, 'epoch': 2.085308056872038, 'step': 220}, {'loss': 1.6139, 'grad_norm': 2.580508232116699, 'learning_rate': 1.1904761904761905e-05, 'epoch': 2.2748815165876777, 'step': 240}, {'loss': 1.631, 'grad_norm': 2.7025833129882812, 'learning_rate': 8.73015873015873e-06, 'epoch': 2.4644549763033177, 'step': 260}, {'loss': 1.6631, 'grad_norm': 2.669140338897705, 'learning_rate': 5.555555555555556e-06, 'epoch': 2.654028436018957, 'step': 280}, {'loss': 1.6425, 'grad_norm': 2.4610960483551025, 'learning_rate': 2.3809523809523808e-06, 'epoch': 2.843601895734597, 'step': 300}, {'train_runtime': 157.4714, 'train_samples_per_second': 768.33, 'train_steps_per_second': 2.0, 'total_flos': 7935313554653184.0, 'train_loss': 1.5153770507328095, 'epoch': 2.985781990521327, 'step': 315}], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None, stateful_callbacks={'TrainerControl': {'args': {'should_training_stop': True, 'should_epoch_stop': False, 'should_save': True, 'should_evaluate': False, 'should_log': False}, 'attributes': {}}})
2024-09-05 17:22:09,693 - INFO - Final model saved to models/trained_bert.
2024-09-05 17:22:09,975 - INFO - TrainerState saved to training/checkpoints/checkpoint-315/checkpoint-315/trainer_state.json.
2024-09-05 17:22:10,570 - INFO - Optimizer and scheduler state saved to training/checkpoints/checkpoint-315/checkpoint-315.
2024-09-05 17:22:10,570 - INFO - trainer.state.global_step: 315
2024-09-05 17:22:10,571 - INFO - Checkpoint files in training/checkpoints/checkpoint-315/checkpoint-315: ['config.json', 'trainer_state.json', 'pytorch_model.bin', 'rng_state.pth', 'scheduler.pt', 'training_args.bin', 'optimizer.pt', 'generation_config.json']
``` | [
50,
66
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"PyTorch",
"trainer"
] |
https://api.github.com/repos/huggingface/transformers/issues/36142 |
TITLE
Bump cryptography from 43.0.1 to 44.0.1 in /examples/research_projects/decision_transformer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.1 to 44.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>44.0.1 - 2025-02-11</p>
<pre><code>
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.4.1.
* We now build ``armv7l`` ``manylinux`` wheels and publish them to PyPI.
* We now build ``manylinux_2_34`` wheels and publish them to PyPI.
<p>.. _v44-0-0:</p>
<p>44.0.0 - 2024-11-27
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for LibreSSL < 3.9.</li>
<li>Deprecated Python 3.7 support. Python 3.7 is no longer supported by the
Python core team. Support for Python 3.7 will be removed in a future
<code>cryptography</code> release.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.4.0.</li>
<li>macOS wheels are now built against the macOS 10.13 SDK. Users on older
versions of macOS should upgrade, or they will need to build
<code>cryptography</code> themselves.</li>
<li>Enforce the :rfc:<code>5280</code> requirement that extended key usage extensions must
not be empty.</li>
<li>Added support for timestamp extraction to the
:class:<code>~cryptography.fernet.MultiFernet</code> class.</li>
<li>Relax the Authority Key Identifier requirements on root CA certificates
during X.509 verification to allow fields permitted by :rfc:<code>5280</code> but
forbidden by the CA/Browser BRs.</li>
<li>Added support for :class:<code>~cryptography.hazmat.primitives.kdf.argon2.Argon2id</code>
when using OpenSSL 3.2.0+.</li>
<li>Added support for the :class:<code>~cryptography.x509.Admissions</code> certificate extension.</li>
<li>Added basic support for PKCS7 decryption (including S/MIME 3.2) via
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_der</code>,
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_pem</code>, and
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_smime</code>.</li>
</ul>
<p>.. _v43-0-3:</p>
<p>43.0.3 - 2024-10-18</p>
<pre><code>
* Fixed release metadata for ``cryptography-vectors``
<p>.. _v43-0-2:</p>
<p>43.0.2 - 2024-10-18
</code></pre></p>
<ul>
<li>Fixed compilation when using LibreSSL 4.0.0.</li>
</ul>
<p>.. _v43-0-1:</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/adaaaed77db676bbaa9d171175db81dce056e2a7"><code>adaaaed</code></a> Bump for 44.0.1 release (<a href="https://redirect.github.com/pyca/cryptography/issues/12441">#12441</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ccc61dabe38b86956bf218565cd4e82b918345a1"><code>ccc61da</code></a> [backport] test and build on armv7l (<a href="https://redirect.github.com/pyca/cryptography/issues/12420">#12420</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/12431">#12431</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f299a48153650f2dd87716343f2daa7cd39a1f59"><code>f299a48</code></a> remove deprecated call (<a href="https://redirect.github.com/pyca/cryptography/issues/12052">#12052</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/439eb0594a9ffb7c9adedb2490998d83914d141e"><code>439eb05</code></a> Bump version for 44.0.0 (<a href="https://redirect.github.com/pyca/cryptography/issues/12051">#12051</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/2c5ad4d8dcec1b8f833198bc2f3b4634c4fd9d78"><code>2c5ad4d</code></a> chore(deps): bump maturin from 1.7.4 to 1.7.5 in /.github/requirements (<a href="https://redirect.github.com/pyca/cryptography/issues/12050">#12050</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/d23968adddd79aa8508d7c1f985da09383b3808f"><code>d23968a</code></a> chore(deps): bump libc from 0.2.165 to 0.2.166 (<a href="https://redirect.github.com/pyca/cryptography/issues/12049">#12049</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/133c0e02edf2f172318eb27d8f50525ed64c9ec3"><code>133c0e0</code></a> Bump x509-limbo and/or wycheproof in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/12047">#12047</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f2259d7aa0d134c839ebe298baa8b63de9ead804"><code>f2259d7</code></a> Bump BoringSSL and/or OpenSSL in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/12046">#12046</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/e201c870b89fd2606d67230a97e50c3badb07907"><code>e201c87</code></a> fixed metadata in changelog (<a href="https://redirect.github.com/pyca/cryptography/issues/12044">#12044</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/c6104cc3669585941dc1d2b9c6507621c53d242f"><code>c6104cc</code></a> Prohibit Python 3.9.0, 3.9.1 -- they have a bug that causes errors (<a href="https://redirect.github.com/pyca/cryptography/issues/12045">#12045</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/43.0.1...44.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/34390 |
TITLE
[mask2former] torch.export error for Mask2Former
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.0.dev0
- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@amyeroberts, @qubvel, @ylacombe
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import Mask2FormerForUniversalSegmentation
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-base-coco-panoptic", torchscript=True
)
scripted_model = torch.export.export(model, args=(torch.randn(1, 3, 800, 1280),))
```
which causes
```
UserError: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2499, in forward
outputs = self.model(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2270, in forward
pixel_level_module_output = self.pixel_level_module(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1395, in forward
decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1319, in forward
encoder_outputs = self.encoder(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1165, in forward
reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1106, in get_reference_points
torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2499, in forward
outputs = self.model(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2270, in forward
pixel_level_module_output = self.pixel_level_module(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1395, in forward
decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1319, in forward
encoder_outputs = self.encoder(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1165, in forward
reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1106, in get_reference_points
torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),
```
### Expected behavior
torch.export works for this model. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36208 |
TITLE
`modular_model_converter` cannot handle local imports with `return`
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.31
- Python version: 3.11.9
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.1.1 (True)
- Tensorflow version (GPU?): 2.15.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Create a new folder named `xxx_model` in `src/transformers/models/`
2. Inside this folder, create a new Python file called `modular_xxx.py` with the following content:
```python
from transformers.models.detr.image_processing_detr import DetrImageProcessor
class TmpImageProcessor(DetrImageProcessor):
pass
```
3. Run the following command to execute the model converter:
```shell
python utils/modular_model_converter.py --files_to_parse src/transformers/models/xxx_model/modular_xxx.py
```
### Expected behavior
The expected behavior is that it creates a file `src/transformers/models/xxx_model/image_processing_xxx.py`. However, the script fails with the following traceback:
```shell
Traceback (most recent call last):
File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1726, in <module>
converted_files = convert_modular_file(file_name)
File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1663, in convert_modular_file
for file, module in create_modules(cst_transformers).items():
File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1643, in create_modules
needed_imports = get_needed_imports(body, all_imports)
File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1151, in get_needed_imports
append_new_import_node(stmt_node, unused_imports, added_names, new_statements)
File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1111, in append_new_import_node
for name in import_node.names:
AttributeError: 'Return' object has no attribute 'names'
```
I found that the error is caused by the local imports followed by `return` in the following function of `transformers.models.detr.image_processing_detr`:
```python
def get_numpy_to_framework_fn(arr) -> Callable:
"""
Returns a function that converts a numpy array to the framework of the input array.
Args:
arr (`np.ndarray`): The array to convert.
"""
if isinstance(arr, np.ndarray):
return np.array
if is_tf_available() and is_tf_tensor(arr):
import tensorflow as tf
return tf.convert_to_tensor
if is_torch_available() and is_torch_tensor(arr):
import torch
return torch.tensor
if is_flax_available() and is_jax_tensor(arr):
import jax.numpy as jnp
return jnp.array
raise ValueError(f"Cannot convert arrays of type {type(arr)}")
```
When the `return` lines after the local imports are removed (commented out below), the script works:
```python
def get_numpy_to_framework_fn(arr) -> Callable:
"""
Returns a function that converts a numpy array to the framework of the input array.
Args:
arr (`np.ndarray`): The array to convert.
"""
if isinstance(arr, np.ndarray):
return np.array
if is_tf_available() and is_tf_tensor(arr):
import tensorflow as tf
# return tf.convert_to_tensor
if is_torch_available() and is_torch_tensor(arr):
import torch
# return torch.tensor
if is_flax_available() and is_jax_tensor(arr):
import jax.numpy as jnp
# return jnp.array
raise ValueError(f"Cannot convert arrays of type {type(arr)}")
```
If the imports are moved outside the function (global imports), the script also works:
```python
import tensorflow as tf
import torch
import jax.numpy as jnp
def get_numpy_to_framework_fn(arr) -> Callable:
"""
Returns a function that converts a numpy array to the framework of the input array.
Args:
arr (`np.ndarray`): The array to convert.
"""
if isinstance(arr, np.ndarray):
return np.array
if is_tf_available() and is_tf_tensor(arr):
return tf.convert_to_tensor
if is_torch_available() and is_torch_tensor(arr):
return torch.tensor
if is_flax_available() and is_jax_tensor(arr):
return jnp.array
raise ValueError(f"Cannot convert arrays of type {type(arr)}")
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35333 |
TITLE
AllAboardBertweetModel
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
AllAboardBertweetModel used for AllaboardSystem
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35739 |
TITLE
Audio-Classification pipeline function_to_apply ignores initialized values (possibly generalizes to other classification pipelines)
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0
- Platform: macOS-14.6-arm64-arm-64bit
- Python version: 3.12.4
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: None/NA
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
import torch
import numpy as np
model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
#model_name = 'pollner/distilhubert-finetuned-ravdess'
top_k = 5
device = 'cuda' if torch.cuda.is_available() else 'cpu'
classification_pipeline = pipeline(
"audio-classification",
model=model_name,
top_k=top_k,
function_to_apply='none',
device=device,
)
# dummy signal
sampling_rate = 16000
signal = np.zeros((sampling_rate), dtype=np.float32)
print('No call parameter should match passing none:')
print(classification_pipeline(signal))
print('Call parameter with none:')
print(classification_pipeline(signal, function_to_apply='none'))
print('Call parameter with softmax which matches no parameter:')
print(classification_pipeline(signal, function_to_apply='softmax'))
print('Call parameter with sigmoid for show:')
print(classification_pipeline(signal, function_to_apply='sigmoid'))
```
### Expected behavior
I will note that this behavior could make sense, but it should probably be documented somewhere if it is intended. I assume it is not intended, however, because [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1320) is, in theory, only supposed to overwrite initialized parameters if they were sent with the call, yet [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1320) in `_sanitize_parameters` returns a default value that will always overwrite the value from initialization. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33359 |
TITLE
[Docs] How to build offline HTML or Docset files for other documentation viewers?
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
How can I build the docs into HTML files for use with other documentation viewers like [Dash](https://www.kapeli.com/dash), [Dash-User-Contributions](https://github.com/Kapeli/Dash-User-Contributions)?
I successfully built the PyTorch docs for Dash by working directly in their `docs/` directory. I’m wondering if a similar process exists for Hugging Face libraries.
### Motivation
The Dash docset viewer is very useful for viewing multiple documentation sets in one place, even offline. It would be great to support it and include all Hugging Face libraries.
### Your contribution
I’ve built the PyTorch docs for Dash, so I’m familiar with incorporating and generating docsets. | [
74,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0
] | [
"Documentation",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35401 |
TITLE
Add Prompt Depth Anything Model
COMMENTS
34
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
This PR adds the **Prompt Depth Anything Model**. [Prompt Depth Anything](https://promptda.github.io/) builds upon [Depth Anything V2](https://depth-anything-v2.github.io/) and incorporates **metric prompt depth** to enable accurate and high-resolution metric depth estimation.
The implementation leverages [Modular Transformers](https://huggingface.co/docs/transformers/main/en/modular_transformers). The main file can be found [here](https://github.com/haotongl/transformers/blob/modeling_prompt_depth_anything/src/transformers/models/prompt_depth_anything/modular_prompt_depth_anything.py).
## Before submitting
- [ N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ✅] Did you read the [contributor guideline] (https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ N/A] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ✅] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ✅] Did you write any new necessary tests?
| [
77,
62,
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34762 |
TITLE
Fix callback key name
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Fixes typo.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
74,
66
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation",
"trainer"
] |
https://api.github.com/repos/huggingface/transformers/issues/35095 |
TITLE
Documentation for SWAG contradicts itself when constructing the first sentence.
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Not relevant.
### Who can help?
@stevhliu @ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The [docs for multiple choice](https://huggingface.co/docs/transformers/tasks/multiple_choice) use SWAG as an example, which is the task of selecting the next sentence given a context. Somewhat strangely, rather than being given in the format `(sentence1, [sentence2a, sentence2b, sentence2c, sentence2d])`, the dataset is given in the format `(sentence1, sentence2_start, [sentence2_endA, sentence2_endB, sentence2_endC, sentence2_endD])`.
The code given in the docs basically turns the dataset into the first format, where **sentence 1 is kept intact** and the start of sentence 2 is concatenated to each ending:
https://github.com/huggingface/transformers/blob/a06a0d12636756352494b99b5b264ac9955bc735/docs/source/en/tasks/multiple_choice.md?plain=1#L96-L100
Yet, the docs say:
https://github.com/huggingface/transformers/blob/a06a0d12636756352494b99b5b264ac9955bc735/docs/source/en/tasks/multiple_choice.md?plain=1#L85-L88
What is being described is formatting the dataset as `(sentence1 + sentence2_start, [sentence2_start + sentence2_endA, sentence2_start + sentence2_endB, sentence2_start + sentence2_endC, sentence2_start + sentence2_endD])`, where **there is overlap between the first and the second sentence** (namely `sentence2_start`).
### Expected behavior
Either the code is wrong or the description is wrong.
If the description is wrong, it should be:
> The preprocessing function you want to create needs to:
> 1. Make four copies of the sent1 field.
> 2. Combine sent2 with each of the four possible sentence endings.
If the code is wrong, it should be:
```python
first_sentences = [[f"{s1} {s2_start}"] * 4 for s1,s2_start in zip(examples["sent1"], examples["sent2"])]
second_sentences = [
[f"{s2_start} {examples[end][i]}" for end in ending_names] for i, s2_start in enumerate(examples["sent2"])
]
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34701 |
TITLE
WanDB callback fails on training end when eval dataset is provided
COMMENTS
4
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.14.0-427.22.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.10
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
(I reduced the code to the relevant parts)
```
train_args = TrainingArguments(
num_train_epochs=50,
eval_strategy="epoch",
logging_strategy="epoch",
save_strategy="epoch",
save_total_limit=3,
report_to="wandb",
run_name=name,
)
trainer = Trainer(
args=train_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
)
```
The issue is that, when reporting to WandB, the callback at the following line of code
https://github.com/huggingface/transformers/blob/ccbd57a8b665fbb5b1d566c0b800dc6ede509e8e/src/transformers/integrations/integration_utils.py#L919
creates a fake trainer
```
fake_trainer = Trainer(args=args, model=model, processing_class=tokenizer)
```
with the same training arguments, but without providing any datasets to the fake trainer.
Because my script sets ``eval_strategy`` to something other than ``no`` and WandB reporting is enabled, it throws the following error at the end of the training:
```
105 File "/home/mazuze/NLP/Hebrew-LLM-Eval/sentence_ordering/train_model.py", line 278, in main
106 trainer.train()
107 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/trainer.py", line 2123, in train
108 return inner_training_loop(
109 ^^^^^^^^^^^^^^^^^^^^
110 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/trainer.py", line 2635, in _inner_training_loop
111 self.control = self.callback_handler.on_train_end(args, self.state, self.control)
112 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
113 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/trainer_callback.py", line 471, in on_train_end
114 return self.call_event("on_train_end", args, state, control)
115 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
116 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/trainer_callback.py", line 518, in call_event
117 result = getattr(callback, event)(
118 ^^^^^^^^^^^^^^^^^^^^^^^^^
119 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/integrations/integration_utils.py", line 919, in on_train_end
120 fake_trainer = Trainer(args=args, model=model, processing_class=tokenizer)
121 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
122 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 165, in wrapped_func
123 return func(*args, **kwargs)
124 ^^^^^^^^^^^^^^^^^^^^^
125 File "/home/mazuze/.conda/envs/coherence/lib/python3.11/site-packages/transformers/trainer.py", line 418, in __init__
126 raise ValueError(
127 ValueError: You have set `args.eval_strategy` to IntervalStrategy.EPOCH but you didn't pass an `eval_dataset` to `Trainer`. Either set `args.eval_strategy` to `no` or pass an `eval_dataset`.
```
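A minimal sketch of the kind of guard this implies (an assumption on my side, not the actual code in `integration_utils.py`): build the fake trainer with a copy of the args whose evaluation is disabled, since no `eval_dataset` is ever passed to it:
```python
# Hypothetical adjustment, for illustration only: copy the TrainingArguments and
# disable evaluation so Trainer's eval_dataset check cannot trigger on the fake trainer.
import copy

args_for_fake = copy.deepcopy(args)   # `args` is the original TrainingArguments
args_for_fake.eval_strategy = "no"    # the fake trainer never receives an eval_dataset
fake_trainer = Trainer(args=args_for_fake, model=model, processing_class=tokenizer)
```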
### Expected behavior
The training should not throw an exception and should run "on train end" successfully. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34631 |
TITLE
safe_globals are needed to resume training on upcoming PyTorch 2.6
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
With:
* https://github.com/huggingface/transformers/commit/7bbc62474391aff64f63fcc064c975752d1fa4de
* https://github.com/huggingface/accelerate/commit/c0552c9012a9bae7f125e1df89cf9ee0b0d250fd
* https://github.com/pytorch/pytorch/commit/ca43ecd5996b15178de88960d167ccc31458b607 or later (PyTorch 2.6 candidate from main)
PyTorch 2.6 flips the default of `torch.load()` to `weights_only=True` (done via https://github.com/pytorch/pytorch/pull/137602). With this change, some tests in Hugging Face Transformers start to fail. I did not test everything, but at least these are affected:
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_auto_batch_size_with_resume_from_checkpoint`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_can_resume_training`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_compare_trainer_and_checkpoint_args_logging`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_frozen_params`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_gradient_accumulation`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_safe_checkpoint`
* `tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_shard_checkpoint`
**What's the way to handle this case with Hugging Face Transformers? Should Transformers retain an internal allowlist of safe globals? And/or should the Transformers API be extended to allow specifying external safe globals? Or is this the end user's responsibility, with such a list maintained on the higher-level script side?**
See the log for one of the tests below. It can be reproduced on a single-card system with an NVIDIA A10 or Intel PVC:
```
$ python3 -m pytest --pspec tests/trainer/test_trainer.py::TrainerIntegrationTest::test_can_resume_training
=========================================================== test session starts ===========================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0
rootdir: /home/dvrogozh/git/huggingface/transformers
configfile: pyproject.toml
plugins: hypothesis-6.111.1, subtests-0.13.1, rich-0.1.1, dash-2.17.1, xdist-3.6.1, pspec-0.0.4, timeout-2.3.1
collected 1 item
tests/trainer/test_trainer.py
Trainer Integration Test
✗ can resume training
[100%]
================================================================ FAILURES =================================================================
_____________________________________________ TrainerIntegrationTest.test_can_resume_training _____________________________________________
self = <tests.trainer.test_trainer.TrainerIntegrationTest testMethod=test_can_resume_training>
@require_torch_up_to_2_accelerators
def test_can_resume_training(self):
# This test will fail for more than 2 GPUs since the batch size will get bigger and with the number of
# save_steps, the checkpoint will resume training at epoch 2 or more (so the data seen by the model
# won't be the same since the training dataloader is shuffled).
with tempfile.TemporaryDirectory() as tmpdir:
kwargs = {
"output_dir": tmpdir,
"train_len": 128,
"save_steps": 5,
"learning_rate": 0.1,
"logging_steps": 5,
}
trainer = get_regression_trainer(**kwargs)
trainer.train()
(a, b) = trainer.model.a.item(), trainer.model.b.item()
state = dataclasses.asdict(trainer.state)
checkpoint = os.path.join(tmpdir, "checkpoint-5")
# Reinitialize trainer
trainer = get_regression_trainer(**kwargs)
> trainer.train(resume_from_checkpoint=checkpoint)
tests/trainer/test_trainer.py:2610:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/trainer.py:2141: in train
return inner_training_loop(
src/transformers/trainer.py:2470: in _inner_training_loop
self._load_rng_state(resume_from_checkpoint)
src/transformers/trainer.py:3051: in _load_rng_state
checkpoint_rng_state = torch.load(rng_file)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
f = '/tmp/tmpfuilg_pf/checkpoint-5/rng_state.pth', map_location = None, pickle_module = None, weights_only = True, mmap = False
pickle_load_args = {'encoding': 'utf-8'}, _get_wo_message = <function load.<locals>._get_wo_message at 0x7f2190293640>, skip_data = False
weights_only_not_set = True, true_values = ['1', 'y', 'yes', 'true'], force_weights_only_load = False
...
except pickle.UnpicklingError as e:
> raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
E _pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
E (1) Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
E (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
E WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray._reconstruct was not an allowed global by default. Please use `torch.serialization.add_safe_globals([_reconstruct])` or the `torch.serialization.safe_globals([_reconstruct])` context manager to allowlist this global if you trust this class/function.
E
E Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
../../pytorch/pytorch/torch/serialization.py:1444: UnpicklingError
---------------------------------------------------------- Captured stdout call -----------------------------------------------------------
{'loss': 8.9328, 'grad_norm': 6.47317361831665, 'learning_rate': 0.08958333333333335, 'epoch': 0.31}
{'loss': 8.4215, 'grad_norm': 4.27539587020874, 'learning_rate': 0.07916666666666666, 'epoch': 0.62}
{'loss': 4.469, 'grad_norm': 2.845299005508423, 'learning_rate': 0.06875, 'epoch': 0.94}
{'loss': 2.398, 'grad_norm': 3.027919292449951, 'learning_rate': 0.05833333333333334, 'epoch': 1.25}
{'loss': 2.1414, 'grad_norm': 3.2882485389709473, 'learning_rate': 0.04791666666666667, 'epoch': 1.56}
{'loss': 1.237, 'grad_norm': 1.5751118659973145, 'learning_rate': 0.037500000000000006, 'epoch': 1.88}
{'loss': 0.7723, 'grad_norm': 1.4625849723815918, 'learning_rate': 0.027083333333333334, 'epoch': 2.19}
{'loss': 0.5407, 'grad_norm': 1.1405667066574097, 'learning_rate': 0.016666666666666666, 'epoch': 2.5}
{'loss': 0.3799, 'grad_norm': 1.1864064931869507, 'learning_rate': 0.00625, 'epoch': 2.81}
{'train_runtime': 0.497, 'train_samples_per_second': 772.666, 'train_steps_per_second': 96.583, 'train_loss': 3.0737542832891145, 'epoch': 3.0}
---------------------------------------------------------- Captured stderr call -----------------------------------------------------------
0%| | 0/48 [00:00<?, ?it/s]Could not estimate the number of tokens of the input, floating-point operations will not be computed
100%|██████████| 48/48 [00:00<00:00, 96.67it/s]
0%| | 0/48 [00:00<?, ?it/s]
========================================================= short test summary info =========================================================
FAILED tests/trainer/test_trainer.py::TrainerIntegrationTest::test_can_resume_training - _pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
============================================================ 1 failed in 3.32s ============================================================
```
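For reference, a minimal sketch of the allowlisting route the error message itself points to, assuming PyTorch >= 2.4 (where the safe-globals API exists) and a trusted checkpoint, applied before resuming:
```python
# Minimal sketch, assuming the checkpoint is trusted: allowlist the numpy global
# named in the error before calling trainer.train(resume_from_checkpoint=...).
from numpy.core.multiarray import _reconstruct

import torch.serialization

torch.serialization.add_safe_globals([_reconstruct])
```
Whether such a list should live inside Transformers or in higher-level scripts is exactly the open question above.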
CC: @muellerzr @SunMarc
| [
50,
27
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"PyTorch",
"dependencies"
] |
https://api.github.com/repos/huggingface/transformers/issues/34379 |
TITLE
Unable to run Chameleon model since v4.46
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
After updating to v4.46, [vLLM's CI fails to run Chameleon on HF](https://buildkite.com/vllm/ci-aws/builds/10281#0192bd86-519f-4e90-9e3d-729d97f6f47e). Upon investigation, I found that the model cannot even be run using HF's example script.
### System Info
Python 3.9, Transformers v4.46.0
### Who can help?
@laurentd-lunit since you authored #33608
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example script in the documentation for [`ChameleonForConditionalGeneration`](https://huggingface.co/docs/transformers/main/en/model_doc/chameleon):
```py
from transformers import ChameleonProcessor, ChameleonForConditionalGeneration
import torch
import requests
from PIL import Image
# I changed this line to use `device_map="auto"` so it can load onto my GPUs. It should not affect the result
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", device_map="auto", torch_dtype=torch.bfloat16)
processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
prompt = "I used to know a lot about constellations when I was younger, but as I grew older, I forgot most of what I knew. These are the only two constellations that I really remember now.<image><image>I would like for you to tell me about 3 more constellations and give me a little bit of history about the constellation."
image = Image.open(requests.get("https://nineplanets.org/wp-content/uploads/2020/12/the-big-dipper-1.jpg", stream=True).raw)
image_2 = Image.open(requests.get("https://www.kxan.com/wp-content/uploads/sites/40/2020/10/ORION.jpg", stream=True).raw)
inputs = processor(images=[image, image_2], text=prompt, return_tensors="pt").to(model.device, torch.bfloat16)
generated_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Full output:
```
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:06<00:00, 2.10s/it]
Some kwargs in processor config are unused and will not have any effect: image_seq_length, image_token.
/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:590: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.7` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:595: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
Traceback (most recent call last):
File "/home/cyrus/vllm/run_chameleon.py", line 15, in <module>
generated_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/generation/utils.py", line 2215, in generate
result = self._sample(
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/generation/utils.py", line 3206, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/chameleon/modeling_chameleon.py", line 1589, in forward
outputs = self.model(
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/chameleon/modeling_chameleon.py", line 1293, in forward
raise ValueError(
ValueError: Image features and image tokens do not match: tokens: 2048, features 2
```
The error was added in #33608. Perhaps that PR is faulty.
### Expected behavior
The example script should run successfully. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34161 |
TITLE
KeyError: 'qwen2_vl' loading from Transformers
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): 2.17.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.5 (gpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla T4
### Who can help?
@amyeroberts @qubvel
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am attempting to load the model from the transformers library and getting an error. Here are the instructions from the official documentation: https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct?library=transformers
I am on `transformers 4.44.2`.
```
# Load model directly
from transformers import AutoProcessor, AutoModelForSeq2SeqLM
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = AutoModelForSeq2SeqLM.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
992 try:
--> 993 config_class = CONFIG_MAPPING[config_dict["model_type"]]
994 except KeyError:
3 frames
KeyError: 'qwen2_vl'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
993 config_class = CONFIG_MAPPING[config_dict["model_type"]]
994 except KeyError:
--> 995 raise ValueError(
996 f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
997 "but Transformers does not recognize this architecture. This could be because of an "
ValueError: The checkpoint you are trying to load has model type `qwen2_vl` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
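For what it's worth, a sketch of what I would expect to work, under the assumption that `qwen2_vl` support only landed in a transformers release newer than 4.44.2 (4.45.0 or later) and that the model needs its vision-language class rather than `AutoModelForSeq2SeqLM`:
```python
# Assumption: requires transformers >= 4.45.0, where Qwen2-VL support was added.
#   pip install -U "transformers>=4.45.0"
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
```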
### Expected behavior
Load the Qwen / Llama vision models successfully from transformers. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34169 |
TITLE
Image-Text-to-Text Support in Transformers Pipeline
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Implement the new feature to support a pipeline that can take both an image and text as inputs, and produce a text output. This would be particularly useful for multi-modal tasks such as visual question answering (VQA), image captioning, or image-based text generation.
```python
from transformers import pipeline
# Initialize the pipeline with multi-modal models
multi_modal_pipeline = pipeline("image-text-to-text", model="meta-llama/Llama-3.2-11B-Vision-Instruct")
# Example usage
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
]}
]
result = multi_modal_pipeline(messages )
print(result) # Should return an answer or relevant text based on the image and question
```
### Motivation
- Simplifies workflows involving multi-modal data.
- Enables more complex and realistic tasks to be handled with existing Transformer models.
- Encourages more multi-modal model usage in research and production.
### Your contribution
**Transformers Integration**
Ensure that the pipeline works well within the Hugging Face Transformers library:
- Implement the custom pipeline class (`ImageTextToTextPipeline`).
- Add support for handling different data types (image, text) and ensure smooth forward pass execution.
```python
class ImageTextToTextPipeline(Pipeline):
....
``` | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35775 |
TITLE
LLaVA-OneVision image features and image tokens mismatch
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0
- Platform: Linux-5.15.0-1067-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- fsdp_config: {'fsdp_activation_checkpointing': True, 'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch': 'BACKWARD_PRE', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 'FULL_SHARD', 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wrap': '', 'fsdp_use_orig_params': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: False
- Using GPU in script?: True
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@amyeroberts @qubvel @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
from datasets import load_dataset
import torch
processor = AutoProcessor.from_pretrained("llava-hf/llava-onevision-qwen2-0.5b-ov-hf")
model = LlavaOnevisionForConditionalGeneration.from_pretrained("llava-hf/llava-onevision-qwen2-0.5b-ov-hf", torch_dtype=torch.float16, device_map="auto")
dataset = load_dataset("lmms-lab/docvqa", 'DocVQA')
d = dataset['test'][2482]
question = d['question']
image = d['image']
conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": question},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model(**inputs)
```
Traceback as follows:
```
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/mnt/home/miniforge3/envs/vek/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/home/miniforge3/envs/vek/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/home/miniforge3/envs/vek/lib/python3.11/site-packages/transformers/models/llava_onevision/modeling_llava_onevision.py", line 688, in forward
raise ValueError(
ValueError: Image features and image tokens do not match: tokens: 7332, features 7261
```
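For what it's worth, the mismatch can be seen directly from the processor outputs before calling the model (sketch only; I am assuming the placeholder id lives in `config.image_token_index`):
```python
image_token_id = model.config.image_token_index
n_placeholder = (inputs["input_ids"] == image_token_id).sum().item()
print("image placeholder tokens:", n_placeholder)
print("pixel_values:", inputs["pixel_values"].shape, "image_sizes:", inputs["image_sizes"])
```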
### Expected behavior
Expected: the forward pass completes without errors.
This is a follow-up issue of https://github.com/huggingface/transformers/issues/34625, where the behavior is the same but for different reasons. The reproduction example is a slight modification of the one provided by @chchch0109.
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35948 |
TITLE
Document Question Answering Pipeline fails due to array with an inhomogeneous shape
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers==4.48.1
### Who can help?
Maybe @amyeroberts ?
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
document_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
word_boxes = [["Britta",[100,157,129,163]],["Beispiel",[134,157,176,165]],["Sanit\u00e4r",[181,157,218,163]],["Herrn",[99,177,143,185]],["Emil",[99,191,133,199]],["Exempel",[138,191,200,201]],["Donaustrasse",[99,205,195,213]],["46",[200,205,216,213]],["11300",[100,219,140,227]],["Berlin",[146,219,190,227]],["-",[223,162,226,162]],["Beispielstrasse",[230,157,309,165]],["12",[316,157,326,163]],["-",[331,162,334,162]],["80888",[339,158,371,163]],["Berlin",[376,157,408,163]],["So",[643,172,659,180]],["erreichen",[664,172,731,180]],["Sie",[737,172,757,180]],["uns",[762,175,786,180]],["Internet",[642,187,694,201]],["www",[737,191,772,196]],["britta-sanitaer.de",[776,188,890,196]],["E-Mail",[643,200,689,214]],["britta.beispiel@",[737,202,842,212]],["gmx.net",[845,203,898,212]],["Telefon",[642,214,693,228]],["030",[738,216,762,224]],["\/",[767,216,771,224]],["999666",[776,216,824,224]],["Fax",[643,230,667,238]],["030",[738,230,762,238]],["\/",[767,230,771,238]],["999777",[776,230,824,238]],["Mobil",[642,242,682,256]],["0179",[738,244,770,252]],["\/",[775,244,779,252]],["999888",[784,244,833,252]],["Steuer-Nr",[643,272,708,280]],["122\/5678\/1234",[738,272,838,280]],["UStD",[643,286,685,294]],["DE12345678",[737,286,827,294]],["Datum",[643,315,688,323]],["30.11.2009",[737,315,811,323]],["Kunde",[643,328,687,336]],["14002",[738,328,778,336]],["Rechnung",[643,342,710,352]],["200910214",[737,342,811,350]],["Angebot:",[99,372,160,382]],["10154",[168,372,207,380]],["vom",[213,375,242,380]],["16.11.2009",[248,372,321,380]],["Objekt:",[100,386,148,396]],["10244",[156,386,195,394]],["Berlin,",[200,386,245,395]],["Charlottenstr.",[251,386,341,394]],["152",[348,386,370,394]],["Sehr",[100,415,130,423]],["geehrter",[135,415,189,425]],["Herr",[194,415,224,423]],["Exempel,",[229,415,293,425]],["nach",[99,442,131,450]],["Ausf\u00fchrung",[136,442,215,452]],["der",[220,442,241,450]],["Arbeiten",[246,442,303,450]],["entsprechend",[309,442,397,453]],["meinem",[401,442,455,450]],["og.",[460,445,479,453]],["Angebot",[485,442,542,452]],["erlaube",[547,442,595,450]],["ich",[601,442,620,450]],["mir",[625,442,647,450]],["wie",[652,442,676,450]],["folgt",[681,442,712,452]],["zu",[717,445,732,450]],["berechnen:",[737,442,808,450]],["Rechnung",[100,470,186,482]],["Nr.",[192,470,219,480]],["200910214",[226,470,316,480]],["Das",[538,473,563,481]],["Rechnungsdatum",[568,473,684,484]],["entspricht",[689,473,754,483]],["dem",[759,473,787,481]],["Leistungsdatum",[793,473,899,484]],["Pos",[99,508,123,516]],["Art-Nr.",[133,508,187,516]],["Bezeichnung",[256,506,347,520]],["Menge",[660,508,708,518]],["Einzelpreis",[724,508,803,518]],["Betrag",[851,508,899,518]],["1",[101,522,105,530]],["Austausch",[257,522,325,530]],["der",[331,522,351,530]],["defekten",[357,522,413,530]],["Zuleitung",[419,522,483,532]],["im",[489,522,505,530]],["2,0",[657,522,677,531]],["Std.",[683,522,708,530]],["30,00",[767,522,804,531]],["60,00",[862,522,898,531]],["WC",[256,536,282,544]],["des",[288,536,309,544]],["Ergeschosses",[314,536,402,546]],["2",[99,564,106,572]],["Materialkosten",[256,564,356,572]],["(Diverses",[362,564,424,574]],["Kleinmaterial)",[430,564,527,574]],["3,0",[658,564,677,573]],["Stk.",[683,564,708,572]],["24,56",[766,563,803,573]],["73,68",[862,564,898,573]],["Zahlbar",[100,606,151,614]],["innerhalb",[157,606,219,614]],["von",[224,609,248,614]],["7",[253,606,260,614]],["Tagen",[266,606,307,616]],["(bis",[312,606,336,616]],["zum",[342,609,370,614]],["07.12.2009)",[375,606,454,616]],["unter",[460,607,494,613]],["Abzug",[499,606,543,617]],["Rech
nungsbetrag",[600,604,725,618]],["133,68",[814,606,858,616]],["EUR",[864,606,899,614]],["von",[100,623,124,628]],["3%",[129,620,150,628]],["Skonto",[156,620,203,628]],["(Zahlungsbetrag",[208,620,317,630]],["=",[322,624,330,627]],["129,67",[337,620,380,629]],["EUR).",[385,620,427,630]],["Bis",[433,620,454,628]],["zum",[460,623,487,628]],["14.12.2009",[101,634,174,642]],["ohne",[180,634,212,642]],["Abzug.",[216,634,264,644]],["Umsatzsteuer",[99,672,190,680]],["wird",[195,672,225,680]],["nicht",[230,672,263,680]],["in",[269,672,281,680]],["Rechnung",[285,672,352,682]],["gestellt.",[358,672,409,682]],["Als",[415,672,437,680]],["sogenannter",[443,673,522,682]],["Kleinunternehmer",[527,672,648,680]],["i.",[654,672,661,680]],["S.",[667,672,679,680]],["von",[685,675,708,680]],["$",[715,672,721,682]],["19",[728,672,742,680]],["Abs.",[748,672,777,680]],["1",[784,672,788,680]],["UStG",[795,672,833,680]],["wird",[837,672,867,680]],["auf",[873,672,894,680]],["die",[100,685,119,693]],["Regelbesteuerung",[125,685,244,696]],["verzichtet.",[249,686,318,694]],["Vielen",[100,728,144,736]],["Dank",[149,728,185,736]],["f\u00fcr",[189,728,208,736]],["Ihren",[213,728,247,736]],["Auftrag!",[252,728,308,738]],["Ich",[99,742,120,750]],["bitte",[125,742,154,750]],["um",[159,745,180,750]],["\u00dcberweisung",[185,740,274,752]],["des",[279,742,300,750]],["Rechnungsbetrages",[306,742,435,752]],["innerhalb",[440,742,502,750]],["von",[100,758,124,763]],["14",[135,756,150,764]],["Tagen",[159,756,200,766]],["an",[205,758,219,763]],["die",[225,755,244,763]],["unten",[249,757,285,763]],["genannte",[292,757,351,766]],["Bankverbindung",[356,755,467,766]],["Mit",[99,784,123,792]],["freundlichen",[128,784,212,792]],["Gr\u00fc\u00dfen",[218,784,266,792]],["Britta",[99,826,137,834]],["Beispiel",[142,826,196,836]],["Seite",[840,881,872,889]],["1\/1",[879,881,897,889]],["Zahlungsempf\u00e4nger",[100,900,193,907]],["Bankverbindung",[99,910,177,917]],["aus",[100,922,115,925]],["dem",[119,920,138,925]],["Ausland",[142,920,180,925]],["Gesch\u00e4ftsf\u00fchrung",[100,936,190,943]],["Britta",[302,900,329,905]],["Beispiel",[331,900,368,907]],["Beispielbank,",[302,910,366,917]],["KTO",[369,910,393,915]],["0098765,",[397,910,440,916]],["BLZ",[443,910,465,915]],["88899900",[469,910,515,915]],["BIC",[302,920,321,925]],["asdfasdf,",[325,920,366,926]],["IBAN",[370,920,398,925]],["asdfasdf4848",[402,920,463,925]],["Britta",[302,932,329,946]],["Beispiel",[333,932,368,946]]]
document_qa(question="What is the invoice number?", word_boxes=word_boxes, image=None)
```
https://colab.research.google.com/drive/1Rk-a68zREdBBuYN8jVMKUcQUG73Me_6Z?usp=sharing
### Expected behavior
I would expect that the pipeline works with any list of word boxes. But certain word boxes cause the document question answering pipeline to throw a `ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.`
This error does not always occur. Only certain word box combinations seem to trigger it. For example in the reproduction above I was not able to pinpoint the error to any specific word box.
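In case it helps others narrow it down, a prefix-growing loop like the following (debugging sketch only) is one way to search for a minimal failing subset:
```python
for n in range(1, len(word_boxes) + 1):
    try:
        document_qa(question="What is the invoice number?", word_boxes=word_boxes[:n], image=None)
    except ValueError as exc:
        print(f"First failure with {n} boxes; last box added: {word_boxes[n - 1]}")
        print(exc)
        break
```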
Also, it seems like this error was introduced with transformers version 4.43 as the example wont throw any errors when using version 4.42. Therefore I suspect that this could be the PR that introduced the bug: https://github.com/huggingface/transformers/pull/32076 | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35491 |
TITLE
tokenizers v0.21 isn't supported
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
[tokenizers v0.21.0](https://github.com/huggingface/tokenizers/releases/tag/v0.21.0) was released in November. There are no listed breaking changes.
### Motivation
Other packages conflict with `transformers` requirement of v0.20.x only.
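Concretely, this amounts to widening the pin in `setup.py` (the current constraint as I understand it from the conflict message; exact bounds are of course up to the maintainers):
```python
# current (as I understand it)
"tokenizers>=0.20,<0.21",
# requested
"tokenizers>=0.20,<0.22",
```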
### Your contribution
Why is this a required field? If I'm going to open a PR, I'm going to do it *after* I file the issue so that the Git commit can mark it as fixed. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35218 |
TITLE
added cached tokenizer
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
This PR introduces a caching mechanism for the `added_tokens_encoder` property in tokenizers to improve performance. Previously, the `added_tokens_encoder` mapping was recomputed every time the property was accessed, leading to redundant computation during tasks that frequently access it, such as tokenization or decoding.
## Motivation and Context
The motivation for this change is to optimize tokenizer performance, especially in workflows that repeatedly access the `added_tokens_encoder` property. By caching the result, this PR reduces overhead and improves runtime efficiency without altering the existing behavior of the library.
Key changes:
- The `added_tokens_encoder` mapping is now cached on the first access and reused for subsequent calls.
- The caching mechanism is implemented in a way that is backward-compatible and avoids unnecessary recomputation.
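For illustration only, the gist of the caching pattern looks like this (my own simplification — the names and structure below are not the actual diff):
```python
from functools import cached_property

class TokenizerWithCachedEncoder:
    """Toy illustration of caching the added-tokens mapping."""

    def __init__(self, added_tokens_decoder: dict[int, str]):
        self._added_tokens_decoder = added_tokens_decoder

    @cached_property
    def added_tokens_encoder(self) -> dict[str, int]:
        # Computed once on first access and reused afterwards
        # (previously the mapping was rebuilt on every property access).
        return {content: idx for idx, content in sorted(self._added_tokens_decoder.items())}
```
In the real tokenizer the cached value also has to be invalidated whenever new tokens are added, which a plain `cached_property` does not handle by itself.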
## Some benchmarks
### Composite Results
| Model | Composite WER (%) | Composite RTFx (With/Without/Improvement) |
|------------------------------|-------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 7.92 | 278.32 / 202.95 / 36% |
| distil/whisper-distil-large-v3 | 7.52 | 282.46 / 214.42 / 32% |
| distil/whisper-distil-medium.en | 8.76 | 406.96 / 279.73 / 45% |
| openai/whisper-large | 7.94 | 167.43 / 143.76 / 16% |
| openai/whisper-large-v2 | 7.83 | 167.95 / 144.45 / 16% |
| openai/whisper-large-v3 | 7.44 | 169.26 / 145.51 / 16% |
| openai/whisper-large-v3-turbo | 7.83 | 268.72 / 197.98 / 36% |
| openai/whisper-medium.en | 8.09 | 222.49 / 182.13 / 22% |
| openai/whisper-small.en | 8.59 | 359.18 / 268.91 / 34% |
| openai/whisper-base.en | 10.32 | 483.69 / 320.67 / 50% |
| openai/whisper-tiny.en | 12.81 | 532.03 / 348.12 / 53% |
<details>
### AMI Results
| Model | AMI WER (%) | AMI RTFx (With/Without/Improvement) |
|------------------------------|-------------|-------------------------------------|
| distil/whisper-distil-large-v2 | 14.67 | 120.15 / 103.50 / 16% |
| distil/whisper-distil-large-v3 | 15.16 | 119.29 / 104.33 / 14% |
| distil/whisper-distil-medium.en | 16.12 | 189.32 / 152.03 / 25% |
| openai/whisper-large | 16.73 | 82.81 / 76.15 / 9% |
| openai/whisper-large-v2 | 16.74 | 85.65 / 79.49 / 7% |
| openai/whisper-large-v3 | 15.95 | 84.31 / 77.97 / 8% |
| openai/whisper-large-v3-turbo | 16.13 | 116.17 / 98.83 / 18% |
| openai/whisper-medium.en | 16.68 | 78.47 / 76.86 / 2% |
| openai/whisper-small.en | 17.93 | 197.70 / 168.88 / 17% |
| openai/whisper-base.en | 21.13 | 224.91 / 181.10 / 24% |
| openai/whisper-tiny.en | 24.24 | 271.98 / 228.77 / 19% |
### Earnings22 Results
| Model | Earnings22 WER (%) | Earnings22 RTFx (With/Without/Improvement) |
|------------------------------|---------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 12.19 | 279.17 / 212.11 / 32% |
| distil/whisper-distil-large-v3 | 11.79 | 281.64 / 219.27 / 28% |
| distil/whisper-distil-medium.en | 12.99 | 408.40 / 291.33 / 40% |
| openai/whisper-large | 12.91 | 156.36 / 138.56 / 13% |
| openai/whisper-large-v2 | 12.05 | 173.81 / 151.92 / 14% |
| openai/whisper-large-v3 | 11.29 | 171.74 / 149.66 / 15% |
| openai/whisper-large-v3-turbo | 11.63 | 274.35 / 202.67 / 35% |
| openai/whisper-medium.en | 12.63 | 251.39 / 204.49 / 23% |
| openai/whisper-small.en | 12.97 | 390.44 / 303.05 / 29% |
| openai/whisper-base.en | 15.09 | 554.06 / 370.98 / 49% |
| openai/whisper-tiny.en | 19.12 | 439.19 / 323.27 / 36% |
### Gigaspeech Results
| Model | GigaSpeech WER (%) | GigaSpeech RTFx (With/Without/Improvement) |
|------------------------------|--------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 10.32 | 242.64 / 178.28 / 26% |
| distil/whisper-distil-large-v3 | 10.08 | 245.04 / 185.02 / 32% |
| distil/whisper-distil-medium.en | 11.30 | 351.03 / 242.87 / 45% |
| openai/whisper-large | 10.76 | 137.20 / 118.69 / 16% |
| openai/whisper-large-v2 | 10.67 | 139.24 / 120.05 / 15% |
| openai/whisper-large-v3 | 10.02 | 141.93 / 122.97 / 16% |
| openai/whisper-large-v3-turbo | 10.14 | 229.71 / 168.52 / 36% |
| openai/whisper-medium.en | 11.03 | 177.60 / 151.70 / 17% |
| openai/whisper-small.en | 11.35 | 271.56 / 213.19 / 27% |
| openai/whisper-base.en | 12.83 | 357.94 / 253.20 / 41% |
| openai/whisper-tiny.en | 14.08 | 421.61 / 284.52 / 48% |
### LibriSpeech Clean Results
| Model | LibriSpeech Clean WER (%) | LibriSpeech Clean RTFx (With/Without/Improvement) |
|------------------------------|--------------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 2.94 | 286.00 / 205.44 / 39% |
| distil/whisper-distil-large-v3 | 2.54 | 288.02 / 217.52 / 32% |
| distil/whisper-distil-medium.en | 3.69 | 415.82 / 280.95 / 48% |
| openai/whisper-large | 2.73 | 181.37 / 150.35 / 21% |
| openai/whisper-large-v2 | 2.83 | 159.01 / 135.81 / 17% |
| openai/whisper-large-v3 | 2.01 | 179.93 / 151.42 / 19% |
| openai/whisper-large-v3-turbo | 2.10 | 278.29 / 201.89 / 38% |
| openai/whisper-medium.en | 3.02 | 244.38 / 196.85 / 24% |
| openai/whisper-small.en | 3.05 | 408.91 / 280.23 / 46% |
| openai/whisper-base.en | 4.25 | 583.91 / 353.97 / 65% |
| openai/whisper-tiny.en | 5.66 | 639.70 / 376.14 / 70% |
### LibriSpeech Other Results
| Model | LibriSpeech Other WER (%) | LibriSpeech Other RTFx (With/Without/Improvement) |
|------------------------------|--------------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 6.84 | 248.08 / 177.63 / 40% |
| distil/whisper-distil-large-v3 | 5.19 | 259.09 / 199.72 / 30% |
| distil/whisper-distil-medium.en | 8.35 | 349.71 / 236.81 / 48% |
| openai/whisper-large | 5.54 | 164.39 / 138.73 / 18% |
| openai/whisper-large-v2 | 5.14 | 162.81 / 139.05 / 17% |
| openai/whisper-large-v3 | 3.91 | 163.21 / 140.22 / 16% |
| openai/whisper-large-v3-turbo | 4.24 | 257.22 / 188.87 / 36% |
| openai/whisper-medium.en | 5.85 | 222.76 / 181.65 / 23% |
| openai/whisper-small.en | 7.25 | 367.64 / 262.68 / 40% |
| openai/whisper-base.en | 10.35 | 445.31 / 293.26 / 52% |
| openai/whisper-tiny.en | 15.45 | 420.61 / 298.15 / 41% |
### SPGISpeech Results
| Model | SPGISpeech WER (%) | SPGISpeech RTFx (With/Without/Improvement) |
|------------------------------|--------------------|-------------------------------------------|
| distil/whisper-distil-large-v2 | 3.30 | 331.26 / 232.50 / 42% |
| distil/whisper-distil-large-v3 | 3.27 | 337.55 / 249.00 / 36% |
| distil/whisper-distil-medium.en | 3.83 | 478.64 / 318.96 / 50% |
| openai/whisper-large | 3.20 | 198.02 / 167.48 / 18% |
| openai/whisper-large-v2 | 3.87 | 196.77 / 166.89 / 18% |
| openai/whisper-large-v3 | 2.94 | 197.37 / 166.92 / 18% |
| openai/whisper-large-v3-turbo | 2.97 | 320.11 / 229.57 / 39% |
| openai/whisper-medium.en | 3.33 | 218.35 / 285.07 / 31% |
| openai/whisper-small.en | 3.60 | 427.56 / 307.90 / 39% |
| openai/whisper-base.en | 4.26 | 601.14 / 372.83 / 61% |
| openai/whisper-tiny.en | 5.93 | 648.97 / 398.03 / 63% |
### TEDLIUM Results
| Model | TEDLIUM WER (%) | TEDLIUM RTFx (With/Without/Improvement) |
|------------------------------|----------------|-----------------------------------------|
| distil/whisper-distil-large-v2 | 4.87 | 274.60 / 197.85 / 39% |
| distil/whisper-distil-large-v3 | 3.86 | 294.14 / 217.54 / 35% |
| distil/whisper-distil-medium.en | 4.84 | 425.02 / 282.89 / 50% |
| openai/whisper-large | 3.91 | 166.87 / 143.34 / 16% |
| openai/whisper-large-v2 | 3.90 | 166.91 / 143.77 / 16% |
| openai/whisper-large-v3 | 3.86 | 166.75 / 142.18 / 17% |
| openai/whisper-large-v3-turbo | 3.57 | 288.34 / 199.61 / 44% |
| openai/whisper-medium.en | 4.11 | 237.28 / 185.40 / 28% |
| openai/whisper-small.en | 4.07 | 352.07 / 263.51 / 34% |
| openai/whisper-base.en | 4.87 | 507.93 / 336.00 / 51% |
| openai/whisper-tiny.en | 5.97 | 571.50 / 352.79 / 62% |
### Voxpopuli Results
| Model | VoxPopuli WER (%) | VoxPopuli RTFx (With/Without/Improvement) |
|------------------------------|-------------------|------------------------------------------|
| distil/whisper-distil-large-v2 | 8.24 | 348.26 / 249.25 / 40% |
| distil/whisper-distil-large-v3 | 8.25 | 359.48 / 262.70 / 37% |
| distil/whisper-distil-medium.en | 9.00 | 525.00 / 345.95 / 52% |
| openai/whisper-large | 7.76 | 218.21 / 182.69 / 19% |
| openai/whisper-large-v2 | 7.48 | 219.32 / 182.27 / 20% |
| openai/whisper-large-v3 | 9.54 | 213.33 / 180.51 / 18% |
| openai/whisper-large-v3-turbo | 11.87 | 339.76 / 247.99 / 37% |
| openai/whisper-medium.en | 8.06 | 309.17 / 239.06 / 29% |
| openai/whisper-small.en | 8.50 | 478.84 / 336.49 / 42% |
| openai/whisper-base.en | 9.76 | 681.44 / 418.28 / 63% |
| openai/whisper-tiny.en | 12.00 | 647.46 / 405.49 / 60% |
</details>
Benchmark scripts available there: https://github.com/huggingface/open_asr_leaderboard/tree/main/transformers
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@Vaibhavs10
This changes was suggested by @pzelasko
| [
47
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization"
] |
https://api.github.com/repos/huggingface/transformers/issues/33724 |
TITLE
Add sdpa for DistilBert
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Towards #28005
Adds sdpa for the DistilBert model
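For context, the user-facing effect (assuming DistilBert follows the same pattern as the other models covered by #28005) is simply that the SDPA implementation can be requested at load time:
```python
from transformers import AutoModel

# After this PR, DistilBert accepts the PyTorch scaled-dot-product-attention backend
model = AutoModel.from_pretrained("distilbert-base-uncased", attn_implementation="sdpa")
```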
## Who can review?
@amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/33709 |
TITLE
Gemma is ExecuTorch compatible
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Enable Gemma to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow
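For context, "enablement" here means making the model survive the export step of that workflow. Very roughly (sketch only — the real entry point and cache wrappers are the ones described in #32253, and this bare call is not expected to work until the enablement lands):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b").eval()
input_ids = tokenizer("Hello", return_tensors="pt")["input_ids"]
# The ExecuTorch path starts from a clean torch.export of the model
exported_program = torch.export.export(model, (input_ids,))
```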
### Motivation
See details in #32253
### Your contribution
Gemma model enablement | [
76,
32
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Deployment"
] |
https://api.github.com/repos/huggingface/transformers/issues/35905 |
TITLE
Convert RT-DETR model to coreml
COMMENTS
14
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Hello,
I am currently working on an iOS application that requires object detection functionality, and I'm interested in using the RT-DETR model for this purpose. I would like to ask if anyone has experience with, or knows of a way of, converting the RT-DETR model to the Core ML format so it can be used on iOS devices.
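So far the rough path I have been considering (untested on my side — the input size and conversion options below are assumptions) is to TorchScript-trace the model and hand it to `coremltools`:
```python
import torch
import coremltools as ct
from transformers import RTDetrForObjectDetection

# torchscript=True makes the model return plain tuples, which tracing needs
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd", torchscript=True).eval()
example = torch.rand(1, 3, 640, 640)
traced = torch.jit.trace(model, example, strict=False)
mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="pixel_values", shape=example.shape)])
mlmodel.save("rtdetr.mlpackage")
```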
Thank you in advance! | [
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/36124 |
TITLE
Speaker Verification: All Speakers Getting Perfect 1.000 Similarity Scores
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
### Bug Report
<!-- Important information -->
Model name (e.g. bert-base-cased): pyannote/embedding
Language (if applicable): English
Framework (PyTorch, TensorFlow, etc...): PyTorch
### Description
Using pyannote/embedding for speaker verification, getting perfect similarity scores (1.000) for all speakers, even between obviously different voices in an audiobook.
### Code To Reproduce The Issue
```python
import torch
import torchaudio
from pyannote.audio import Model
import torch.nn.functional as F

# Setup
device = torch.device("cuda")
embedding_model = Model.from_pretrained("pyannote/embedding",
                                        use_auth_token='xxx').to(device)

# Load and process reference audio
reference_waveform, sample_rate = torchaudio.load("reference.flac")
reference_waveform = reference_waveform.mean(dim=0, keepdim=True).to(device)
reference_features = embedding_model(reference_waveform.unsqueeze(0))
reference_features = F.normalize(reference_features, p=2, dim=1)

# Load test audio segment
test_waveform, _ = torchaudio.load("test.flac")
test_waveform = test_waveform.mean(dim=0, keepdim=True).to(device)
test_embedding = embedding_model(test_waveform.unsqueeze(0))
test_embedding = F.normalize(test_embedding, p=2, dim=1)

# Calculate similarity
similarity = F.cosine_similarity(reference_features, test_embedding, dim=1).mean()
print(f"Similarity: {similarity.item():.6f}")
```
### Expected Results
Different speakers should have varying similarity scores below 1.000
### Actual Results
All speakers get perfect 1.000 similarity scores:
- Speaker A vs Reference: 1.000000
- Speaker B vs Reference: 0.999998
- Speaker C vs Reference: 1.000000
### Environment
- pyannote.audio: 3.1.1
- torch: 2.5.1+cu124
- Platform: Google Colab (Ubuntu Linux)
- CUDA: Yes
- GPU: Tesla T4
- Python: 3.11
- torchaudio: 2.5.1+cu124
### Additional Context
- Using professional audiobook with distinct voices
- Reference is 10-minute high-quality audio
- Testing with 4-hour audiobook
- Consistent 1.000 similarity across all different speakers
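- One thing I still want to rule out (based on my reading of the pyannote docs, so treat the exact calls as an assumption): whether the embeddings should be extracted through `Inference` with `window="whole"` rather than by calling the model directly, as sketched below.
```python
from pyannote.audio import Inference
from scipy.spatial.distance import cdist

# window="whole" should return one embedding per file instead of frame-level features
inference = Inference(embedding_model, window="whole")
ref_emb = inference("reference.flac")
test_emb = inference("test.flac")
print("cosine distance:", cdist([ref_emb], [test_emb], metric="cosine")[0, 0])
```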
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install dependencies:
pip install pyannote.audio==3.1.1 torch==2.5.1+cu124 torchaudio==2.5.1+cu124
2. Use reference audio (10-minute FLAC file) and test audio (different speaker, FLAC file)
3. Run the provided code:
- Load model and audio files
- Extract embeddings
- Calculate similarity
4. Observe that similarity scores are always 1.000 regardless of speaker differences
Full code provided in the description above. This can be reproduced with any two different speakers' audio files.
### Expected behavior
The similarity scores should:
- Be less than 1.000 for different speakers
- Show variation between different voices
- Have lower scores for more dissimilar voices
- Only approach 1.000 for the same speaker
Instead, we're getting perfect 1.000 similarity scores for all speakers, even between obviously different voices (male/female) from a professional audiobook. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35918 |
TITLE
meta-llama/Llama-3.2-11B-Vision-Instruct, device_map = 'auto', padding ruins _prepare_4d_causal_attention_mask_with_cache_position
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
### OS info
- `transformers` version: 4.48.1
- Platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- Huggingface_hub version: 0.26.5
- Safetensors version: 0.4.5
- Accelerate version: 1.2.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using parallel device setup
- Using GPU in script (8 GPUs in my case)
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@amyeroberts , @qubvel @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
## Steps to replicate
```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["<|image|><|begin_of_text|>What do you see here?", "<|image|><|begin_of_text|>What do you see here but longer?"]
images = [image, image]
repo_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
processor = AutoProcessor.from_pretrained(repo_id)
model = MllamaForConditionalGeneration.from_pretrained(repo_id, device_map = 'auto')
batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
model_output = model(
input_ids = batch['input_ids'],
attention_mask = batch['attention_mask'],
pixel_values = batch['pixel_values'],
aspect_ratio_ids = batch['aspect_ratio_ids'],
aspect_ratio_mask = batch['aspect_ratio_mask'],
cross_attention_mask = batch['cross_attention_mask'],
)
```
### Full traceback:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 2
1 with torch.no_grad():
----> 2 model_output = model(
3 input_ids = batch['input_ids'],
4 attention_mask = batch['attention_mask'],
5 pixel_values = batch['pixel_values'],
6 aspect_ratio_ids = batch['aspect_ratio_ids'],
7 aspect_ratio_mask = batch['aspect_ratio_mask'],
8 cross_attention_mask = batch['cross_attention_mask'],
9 # labels = batch['labels']
10 )
12 # model_output = model.generate(**batch, max_new_tokens = 64)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:2131, in MllamaForConditionalGeneration.forward(self, input_ids, pixel_values, aspect_ratio_mask, aspect_ratio_ids, attention_mask, cross_attention_mask, cross_attention_states, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep)
2128 cross_attention_mask = cross_attention_mask[:, :, cache_position]
2129 full_text_row_masked_out_mask = full_text_row_masked_out_mask[:, :, cache_position]
-> 2131 outputs = self.language_model(
2132 input_ids=input_ids,
2133 attention_mask=attention_mask,
2134 position_ids=position_ids,
2135 cross_attention_states=cross_attention_states,
2136 cross_attention_mask=cross_attention_mask,
2137 full_text_row_masked_out_mask=full_text_row_masked_out_mask,
2138 past_key_values=past_key_values,
2139 use_cache=use_cache,
2140 inputs_embeds=inputs_embeds,
2141 labels=labels,
2142 output_hidden_states=output_hidden_states,
2143 output_attentions=output_attentions,
2144 return_dict=return_dict,
2145 cache_position=cache_position,
2146 num_logits_to_keep=num_logits_to_keep,
2147 )
2149 return outputs
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1939, in MllamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, cross_attention_states, cross_attention_mask, full_text_row_masked_out_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep, **loss_kwargs)
1936 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1938 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1939 outputs = self.model(
1940 input_ids=input_ids,
1941 cross_attention_states=cross_attention_states,
1942 attention_mask=attention_mask,
1943 position_ids=position_ids,
1944 cross_attention_mask=cross_attention_mask,
1945 full_text_row_masked_out_mask=full_text_row_masked_out_mask,
1946 past_key_values=past_key_values,
1947 inputs_embeds=inputs_embeds,
1948 use_cache=use_cache,
1949 output_attentions=output_attentions,
1950 output_hidden_states=output_hidden_states,
1951 return_dict=return_dict,
1952 cache_position=cache_position,
1953 )
1955 hidden_states = outputs[0]
1956 logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :]).float()
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1758, in MllamaTextModel.forward(self, input_ids, attention_mask, position_ids, cross_attention_states, cross_attention_mask, full_text_row_masked_out_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, cache_position)
1755 if position_ids is None:
1756 position_ids = cache_position.unsqueeze(0)
-> 1758 causal_mask = self._update_causal_mask(
1759 attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
1760 )
1762 # create position embeddings to be shared across the decoder layers
1763 position_embeddings = self.rotary_emb(hidden_states, position_ids)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1111, in MllamaPreTrainedModel._update_causal_mask(self, attention_mask, input_tensor, cache_position, past_key_values, output_attentions)
1104 target_length = (
1105 attention_mask.shape[-1]
1106 if isinstance(attention_mask, torch.Tensor)
1107 else past_seen_tokens + sequence_length + 1
1108 )
1110 # In case the provided `attention` mask is 2D, we generate a causal mask here (4D).
-> 1111 causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
1112 attention_mask,
1113 sequence_length=sequence_length,
1114 target_length=target_length,
1115 dtype=dtype,
1116 device=device,
1117 cache_position=cache_position,
1118 batch_size=input_tensor.shape[0],
1119 )
1121 if (
1122 self.config._attn_implementation == "sdpa"
1123 and attention_mask is not None
(...)
1128 # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1129 # Details: https://github.com/pytorch/pytorch/issues/110213
1130 min_dtype = torch.finfo(dtype).min
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1187, in MllamaPreTrainedModel._prepare_4d_causal_attention_mask_with_cache_position(attention_mask, sequence_length, target_length, dtype, device, cache_position, batch_size, **kwargs)
1185 causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
1186 mask_length = attention_mask.shape[-1]
-> 1187 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
1188 padding_mask = padding_mask == 0
1189 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
1190 padding_mask, min_dtype
1191 )
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
```
### Expected behavior
## Expected Behaviour
Described in the "Potential mitigation" note below: the causal mask built in `_prepare_4d_causal_attention_mask_with_cache_position` should handle device placement for multi-GPU setups the same way the attention mask does.
## Notes:
1. I've also tried `processor.tokenizer.padding_side = 'left'`, which throws the same error.
2. Potential mitigation: move `attention_mask` (and `cache_position`) to the target `device` inside `MllamaPreTrainedModel._prepare_4d_causal_attention_mask_with_cache_position` when they are provided; updated code:
```python
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
device: torch.device,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
"""
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
`(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
Args:
attention_mask (`torch.Tensor`):
A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
`(batch_size, 1, query_length, key_value_length)`.
sequence_length (`int`):
The sequence length being processed.
target_length (`int`):
The target length: when generating with static cache, the mask should be as long as the static cache,
to account for the 0 padding, the part of the cache that is not filled yet.
dtype (`torch.dtype`):
The dtype to use for the 4D attention mask.
device (`torch.device`):
The device to place the 4D attention mask on.
cache_position (`torch.Tensor`):
Indices depicting the position of the input sequence tokens in the sequence.
batch_size (`torch.Tensor`):
Batch size.
"""
if attention_mask is not None:
attention_mask = attention_mask.to(device)
cache_position = cache_position.to(device)
if attention_mask is not None and attention_mask.dim() == 4:
# In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
causal_mask = attention_mask
else:
min_dtype = torch.finfo(dtype).min
causal_mask = torch.full(
(sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
)
if sequence_length != 1:
causal_mask = torch.triu(causal_mask, diagonal=1)
causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
if attention_mask is not None:
causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
mask_length = attention_mask.shape[-1]
padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
padding_mask = padding_mask == 0
causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
padding_mask, min_dtype
)
return causal_mask
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35129 |
TITLE
No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package on darwin
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
I am a [nixpkgs](https://github.com/NixOS/nixpkgs/) maintainer and manage several python packages there.
https://github.com/huggingface/transformers/commit/20142ab5422fbafcd10e221e37e95f8aaf1bde3c has introduced a regression on `darwin` (both ARM and Intel):
```
import transformers.models.auto.modeling_auto
```
now fails with:
```
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
<details>
```
Traceback (most recent call last):
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1793, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/7c494qcmh62av43zsxr3wvzh8hcpy1vl-python3-3.12.7/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/generation/utils.py", line 41, in <module>
from ..pytorch_utils import isin_mps_friendly
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/pytorch_utils.py", line 43, in <module>
from torch.distributed.tensor import Replicate
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/tensor/__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/tensor/_ops/__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_conv_ops.py", line 7, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/tensor/_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/tensor/placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/_functional_collectives.py", line 8, in <module>
import torch.distributed.distributed_c10d as c10d
File "/nix/store/fgzf3xwls552byw9vdjxqiznxbg62zak-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 22, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <lambda>
File "/nix/store/7c494qcmh62av43zsxr3wvzh8hcpy1vl-python3-3.12.7/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/models/auto/modeling_auto.py", line 21, in <module>
from .auto_factory import (
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 40, in <module>
from ...generation import GenerationMixin
File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1781, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/cgcy6sfgcqwpzfpixwp3r6c24rfsndc3-python3.12-transformers-4.47.0/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1795, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
</details>
This has been detected when running the test suite of [accelerate](https://github.com/huggingface/accelerate).
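From the traceback, the unconditional `from torch.distributed.tensor import Replicate` in `transformers/pytorch_utils.py` looks like the trigger; a guard along these lines (just a sketch — exact placement is up to you) would presumably avoid it on torch builds without distributed support:
```python
import torch

# Only pull in DTensor machinery when this torch build actually ships distributed support
if torch.distributed.is_available():
    from torch.distributed.tensor import Replicate
else:
    Replicate = None
```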
Thank you in advance for helping us :)
### Who can help?
cc @natsukium
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import transformers.models.auto.modeling_auto
```
### Expected behavior
Imports correctly. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34694 |
TITLE
Discrepancy in Training Loss Behavior with Gradient Accumulation using DeepSpeed
COMMENTS
6
REACTIONS
+1: 4
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Accelerate version: 1.1.0
transformers version: 4.46.2
DeepSpeed version: 0.14.4
Platform: Linux 5.15.0-101-generic #111-Ubuntu SMP x86_64 GNU/Linux
Python version: 3.10.14
PyTorch version (GPU?): 2.1.2+cu118 True
GPU type: NVIDIA A100
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code provided below is a simplified example of training a small model using the Hugging Face Trainer. The setup includes creating a dataset, initializing a model and tokenizer, and configuring the Trainer with different settings for gradient accumulation and DeepSpeed.
```python
import argparse
import torch
from datasets import load_dataset
from transformers import (set_seed,
Trainer,
TrainingArguments,
DataCollatorForLanguageModeling,
LlamaForCausalLM,
LlamaConfig,
AutoTokenizer
)
DEEPSPEED_CONFIG = {
"zero_optimization": {
"stage": 0
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
TRAIN_ARGS = {'output_dir': './test_GA',
'bf16': True,
'learning_rate': 6e-4,
'lr_scheduler_type': 'cosine',
'max_steps': 200,
'optim': 'adamw_torch',
'weight_decay': 0.1,
'per_device_train_batch_size': 128,
'gradient_accumulation_steps': 1,
'logging_steps': 1,
'report_to': 'none'}
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--bs', default=128, type=int, help='batch size')
parser.add_argument('--ga', default=1, type=int, help='number of gradient accumulation step')
parser.add_argument('--deepspeed', action='store_true', help='use deepspeed')
args = parser.parse_args()
set_seed(42)
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
# Initialize dataset
CONTEXT_LENGTH = 512 # Small context length as specified
def preprocess_data(examples, tokenizer, max_length=CONTEXT_LENGTH):
"""Tokenizes the input data and truncates/pads to the max context length."""
return tokenizer(examples["sentence"], truncation=True, padding="max_length", max_length=max_length, add_special_tokens=True)
# Load the dataset from Hugging Face
dataset = load_dataset("ptb_text_only", trust_remote_code=True, split='train')
# Load and configure the tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", add_prefix_space=True, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
# Preprocess the dataset
column_names = list(dataset.features)
train_dataset = dataset.map(lambda x: preprocess_data(x, tokenizer), batched=True, remove_columns=column_names)
# Initialize model
model_cfg = LlamaConfig(n_positions=CONTEXT_LENGTH, hidden_size=512, num_attention_heads=8, num_hidden_layers=4,
vocab_size=tokenizer.vocab_size, eos_token_id=tokenizer.eos_token_id, bos_token_id=tokenizer.bos_token_id)
model = LlamaForCausalLM(model_cfg)
# Initialize trainer
if args.deepspeed:
TRAIN_ARGS.update({"deepspeed": DEEPSPEED_CONFIG})
TRAIN_ARGS.update({"per_device_train_batch_size": args.bs, "gradient_accumulation_steps": args.ga})
trainer = Trainer(model=model, args=TrainingArguments(**TRAIN_ARGS), train_dataset=train_dataset,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False))
trainer.train()
```
### Expected behavior
The training loss should remain consistent for different gradient accumulation steps, both with and without DeepSpeed enabled. However, the figure shows a divergence when DeepSpeed is enabled:

| [
21,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"DeepSpeed",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35654 |
TITLE
Fix tests for vision models
COMMENTS
22
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Fixing tests for vision models
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
2,
62,
73
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Tests",
"Vision",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34785 |
TITLE
Fall back to slow image processor in ImageProcessingAuto when no fast processor available
COMMENTS
8
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Refactor parts of image_processing_auto to fall back on slow processor when use_fast is set to True and no fast processor is available.
Before, this would throw an error:
```python
processor = AutoImageProcessor.from_pretrained("Salesforce/blip-image-captioning-large", use_fast=True)
```
Now the following warning is displayed
```
`use_fast` is set to `True` but the image processor class does not have a fast version. Falling back to the slow version.
```
Also add warnings to start rolling out fast image processors by default (goal is v4.48). If `use_fast` is not set and the checkpoint was saved with a slow processor, display:
```
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
```
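For illustration, a minimal sketch of the fallback logic described above (the function and argument names are assumptions for the example, not the actual code in `image_processing_auto`):
```python
import logging

logger = logging.getLogger(__name__)

# Illustrative only: `slow_class` / `fast_class` stand in for the resolved processor
# classes; the real logic in image_processing_auto differs in detail.
def resolve_image_processor_class(slow_class, fast_class, use_fast=None):
    if use_fast and fast_class is None:
        logger.warning(
            "`use_fast` is set to `True` but the image processor class does not have "
            "a fast version. Falling back to the slow version."
        )
        return slow_class
    return fast_class if use_fast and fast_class is not None else slow_class
```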
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
62,
65
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision",
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/34626 |
TITLE
[openai/whisper-tiny][torch.compile] Model compilation: AttributeError: 'DynamicCache' object has no attribute 'key_cache'
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ylacombe, @eustlb
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi there! I'm trying to employ `torch.compile` to speed up inference of the Whisper model, but I cannot overcome the following error:
```python3
import torch
import copy
import librosa
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq, pipeline
from transformers.utils import logging
from urllib.request import urlretrieve
model_id = "openai/whisper-tiny"
processor = AutoProcessor.from_pretrained(model_id)
pt_model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)
pipe_pt = pipeline(
"automatic-speech-recognition",
model=pt_model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
device="cpu",
)
en_example_short = "courtroom.wav"
url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/courtroom.wav"
urlretrieve(url, en_example_short)
en_raw_speech, samplerate = librosa.load(str(en_example_short), sr=16000)
sample = copy.deepcopy(en_raw_speech)
pt_result = pipe_pt(sample)
print("*" * 20)
print(f"Result: {pt_result['text']}")
pipe_pt.model.model = torch.compile(pipe_pt.model.model)
pt_result = pipe_pt(sample) # Raises the error
print("*" * 20)
print(f"Result: {pt_result['text']}")
```
The expected output is something like
```
Result: Colonel Jessif, did you order the code rate? You don't have to answer that question. I'll answer the question. You want answers? I think I'm entitled. You want answers? I want the truth. You can't handle the truth.
```
But an error occurs:
<details>
```
Traceback (most recent call last):
File "/home/dlyakhov/Projects/openvino_notebooks/notebooks/whisper-asr-genai/repro.py", line 39, in <module>
pt_result = pipe_pt(sample)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 283, in __call__
return super().__call__(inputs, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1294, in __call__
return next(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 269, in __next__
processed = self.infer(next(self.iterator), **self.params)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1209, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 515, in _forward
tokens = self.model.generate(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/generation_whisper.py", line 555, in generate
init_tokens = self._retrieve_init_tokens(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/generation_whisper.py", line 1370, in _retrieve_init_tokens
lang_ids = self.detect_language(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/generation_whisper.py", line 1474, in detect_language
logits = self(**inputs, decoder_input_ids=decoder_input_ids).logits[:, -1]
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1767, in forward
outputs = self.model(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1634, in forward
decoder_outputs = self.decoder(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1240, in forward
logger.warning_once(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 952, in _compile
raise InternalTorchDynamoError(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1692, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 899, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1692, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 156, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 899, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 491, in inner
if truth_fn(mod):
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/cache_utils.py", line 406, in __len__
return len(self.key_cache)
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'DynamicCache' object has no attribute 'key_cache'
from user code:
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1324, in torch_dynamo_resume_in_forward_at_1240
layer_outputs = decoder_layer(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 732, in forward
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
File "/home/dlyakhov/Projects/openvino_notebooks/.venv_39/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 520, in forward
if is_cross_attention and past_key_value and is_updated:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
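As a stopgap (this is just the eager fallback the traceback itself suggests, not a real fix), the compile failure can be downgraded so that failing graphs run in eager mode:
```python
# Workaround only: suppress dynamo errors so failing graphs fall back to eager mode.
import torch._dynamo

torch._dynamo.config.suppress_errors = True
```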
### Expected behavior
```
Result: Colonel Jessif, did you order the code rate? You don't have to answer that question. I'll answer the question. You want answers? I think I'm entitled. You want answers? I want the truth. You can't handle the truth.
```
Could you please help me with that? Thank you! | [
64,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/35057 |
TITLE
Get "NotImplementedError: Cannot copy out of meta tensor; no data!" error while deploying model
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi Developers,
I have fine-tuned a Llama-3.1-8B-Instruct model; everything is fine in the fine-tuning stage. However, I often get the following error message when I deploy my model with LangChain.
Here is the error message:
```
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Traceback:
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File "/home/revlis_ai/Documents/llm_practise/lora_finetune_llm/app3.py", line 50, in <module>
model = AutoModelForCausalLM.from_pretrained(
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4310, in from_pretrained
model.load_adapter(
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/transformers/integrations/peft.py", line 214, in load_adapter
inject_adapter_in_model(peft_config, self, adapter_name, **peft_load_kwargs)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/mapping.py", line 227, in inject_adapter_in_model
peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 141, in __init__
super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 184, in __init__
self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 496, in inject_adapter
self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 230, in _create_and_replace
self._replace_module(parent, target_name, new_module, target)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 254, in _replace_module
new_module.to(child.weight.device)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1174, in to
return self._apply(convert)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 780, in _apply
module._apply(fn)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 780, in _apply
module._apply(fn)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 805, in _apply
param_applied = fn(param)
File "/home/revlis_ai/anaconda3/envs/env_llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1167, in convert
raise NotImplementedError(
```
And here is my code to deploy my app with LangChain:
```
#%% Import Libraries
import streamlit as st
from uuid import uuid4
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import create_retrieval_chain, create_history_aware_retriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.vectorstores import FAISS
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_huggingface import HuggingFaceEmbeddings, HuggingFacePipeline, ChatHuggingFace
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, BitsAndBytesConfig, TextStreamer
from langchain_community.document_loaders import PyPDFLoader
from langchain_core.runnables.history import RunnableWithMessageHistory
import torch
import os
from pathlib import Path
from dotenv import load_dotenv
#%% Get Correct Path
current_dir = Path(__file__).parent.absolute()
doc_dir = os.path.join(current_dir, "research_papers")
db_path = os.path.join(current_dir, "FAISS_DB")
#%% Load API Key
if 'STREAMLIT_PUBLIC_PATH' in os.environ:
os.environ["HF_TOKEN"] = st.secrets['HUGGINGFACE_TOKEN']
else:
load_dotenv()
os.environ["HF_TOKEN"] = os.getenv('HUGGINGFACE_TOKEN')
#%% Load LLM and Embedding
embeddings = HuggingFaceEmbeddings(
model_name='all-MiniLM-L6-v2',)
# bnb_config = BitsAndBytesConfig(
# load_in_4bit = True,
# bnb_4bit_quant_type = "nf4",
# bnb_4bit_compute_dtype = torch.bfloat16,
# bnb_4bit_use_double_quant = True,)
bnb_config = BitsAndBytesConfig(
load_in_8bit = True,)
tokenizer = AutoTokenizer.from_pretrained(
"./lora_finetune_llm/llm_finetune/checkpoint-2571")
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(
"./lora_finetune_llm/llm_finetune/checkpoint-2571",
quantization_config = bnb_config)
llm_pipeline = pipeline(
"text-generation",
model = model,
tokenizer = tokenizer,
streamer = streamer,
torch_dtype = torch.bfloat16,
temperature = 0.15,
top_p = .15,
max_new_tokens = 512,
trust_remote_code = True,
return_full_text = False,)
hf_pipeline = HuggingFacePipeline(pipeline=llm_pipeline)
llm = ChatHuggingFace(llm=hf_pipeline, tokenizer=hf_pipeline.pipeline.tokenizer)
#%% Initialize session state
if 'conversations' not in st.session_state:
st.session_state.conversations = {}
st.session_state.current_session = None
st.session_state.session_history = {}
#%% Build Streamlit APP
st.title('ChatBot Q&A with RAG')
st.sidebar.title("Conversations")
def new_chat():
session_id = str(uuid4()).replace('-', '')
st.session_state.conversations[session_id] = {
"history": ChatMessageHistory(),
"uploaded_files": []}
st.session_state.current_session = session_id
if not st.session_state.conversations:
new_chat()
if st.sidebar.button("New Chat"):
new_chat()
session_ids = list(st.session_state.conversations.keys())
session_names = [f"Session {i+1}" for i in range(len(session_ids))]
selected_session = st.sidebar.radio(
"Select Conversation",
session_names,
index=session_ids.index(st.session_state.current_session))
if selected_session:
selected_index = session_names.index(selected_session)
st.session_state.current_session = session_ids[selected_index]
current_session = st.session_state.current_session
conversation_data = st.session_state.conversations[current_session]
uploaded_files = st.file_uploader(
'Upload PDF Files',
type=['pdf'],
accept_multiple_files=True,
key=st.session_state.current_session)
def get_session_history(session_id: str) -> BaseChatMessageHistory:
return st.session_state.conversations[session_id]["history"]
if uploaded_files:
documents = []
for f in uploaded_files:
if f.name not in conversation_data["uploaded_files"]:
conversation_data["uploaded_files"].append(f.name)
temp = './temp.pdf'
with open(temp, 'wb') as file:
file.write(f.getvalue())
loader = PyPDFLoader(temp)
doc = loader.load()
documents.extend(doc)
os.remove(temp)
splitter = RecursiveCharacterTextSplitter(chunk_size=5140, chunk_overlap=512)
try:
st.write('Trying to load FAISS BD')
faiss_db = FAISS.load_local(
db_path + f"/{current_session}",
embeddings,
allow_dangerous_deserialization = True)
if documents:
st.write('Detect new file(s) upload, add docs into Chroma DB')
chunked_documents = splitter.split_documents(documents)
faiss_db.add_documents(chunked_documents)
faiss_db.save_local(db_path + f"/{current_session}")
st.write('FAISS DB loaded!!!')
except:
st.write('No FAISS DB found, creating FAISS DB.......')
chunked_documents = splitter.split_documents(documents)
faiss_db = FAISS.from_documents(chunked_documents, embeddings)
faiss_db.save_local(db_path + f"/{current_session}")
st.write('FAISS DB Created!!!')
st.write(f"Uploaded Files: {conversation_data['uploaded_files']}")
user_input = st.text_area("Ask me your question:", key="input_text",)
if st.button("Submit"):
if user_input:
faiss_db = FAISS.load_local(
db_path + f"/{current_session}",
embeddings,
allow_dangerous_deserialization = True)
retriever = faiss_db.as_retriever()
contextual_sys_prompt = '''
Given a chat history and the latest user question which might reference context in the chat history,
formulate a standalone question which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is.'''
contextual_prompt = ChatPromptTemplate.from_messages(
[('system', contextual_sys_prompt),
MessagesPlaceholder('chat_history'),
('human', '{input}')])
history_aware_retriever = create_history_aware_retriever(llm, retriever, contextual_prompt)
sys_prompt = '''
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question.
If you don't know the answer, say that you don't know. Use three sentences maximum and keep the answer concise.
\n\n
{context}'''
qa_prompt = ChatPromptTemplate.from_messages(
[('system', sys_prompt),
MessagesPlaceholder('chat_history'),
('human', '{input}')])
qa_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, qa_chain)
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
output_messages_key="answer",)
st.session_state.session_history[current_session] = get_session_history(current_session)
with torch.autocast("cuda"):
ans = conversational_rag_chain.invoke(
{'input': user_input},
config={'configurable': {'session_id': current_session}})
st.write("Assistant:", ans['answer'])
if current_session in st.session_state.session_history:
st.write("Chat History:")
for message in st.session_state.session_history[current_session].messages:
st.write(f"Role: {message.type}, Content: {message.content}")
else:
st.write("No chat history available.")
```
I also tried the original open-source model from Meta with the following code, yet the issue still happens.
Here is the code for using the original model:
```
bnb_config = BitsAndBytesConfig(
load_in_8bit = True, )
tokenizer = AutoTokenizer.from_pretrained(
"meta-llama/Llama-3.1-8B-Instruct", )
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.1-8B-Instruct",
quantization_config = bnb_config,)
```
How to reproduce this issue:
1. Execute `streamlit run ./path/to/the/file.py`
2. Upload article(s) in pdf format via browser
3. Ask your questions.
4. The error message appears after some interactions; sometimes it appears right after the first question is asked.
I believe this issue does not result from GPU OOM: I was monitoring GPU memory usage via nvtop, and around 20 GB were still free when the error appeared.
### Expected behavior
I should consistently get answers from the LLM when I have enough GPU memory. However, my app is very unstable right now: sometimes it seems to work fine, and then the error message suddenly appears after a new question is submitted. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34210 |
TITLE
Missing timestamp offset using Whisper with pipeline and sequential decoding
COMMENTS
11
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: macOS-15.0.1-arm64-arm-64bit
- Python version: 3.12.1
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 @gante @ylacombe
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. `pip install transformers==4.45.2`
2. Set up a Whisper pipeline using `chunk_length_s=0` (which is sequential long-form decoding according to the model card, at least for large-v3) and `return_timestamps=True`
3. Transcribe an audio longer than 30s
```py
from transformers import pipeline
import torch
audio_file = '<an-audio-file-longer-than-30-s>'
chunked = False
pipe = pipeline(
'automatic-speech-recognition',
model='openai/whisper-small',
chunk_length_s=30 if chunked else 0,
return_timestamps=True,
torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu',
)
result = pipe(audio_file)
transcript = '\n'.join(
f"({chunk['timestamp'][0]}, {chunk['timestamp'][1]})\t{chunk['text']}" for chunk in result['chunks']
)
print(transcript)
```
4. See that the timestamps restart at 0.0 s after 30 s
```
(0.0, 4.44) Er hatte schon mal eine Schnauze voll von allem und jedem.
(4.44, 6.28) Und er hat den Schluss getroffen.
(6.28, 7.8) Es hilft nichts mehr.
(7.8, 9.28) Ich wandere aus.
(9.28, 11.4) Das kann ein Grund sein,
(11.4, 14.48) wieso er eine Heimat für immer der Rückenträger will.
(14.48, 16.72) Oder es ist etwas ganz anderes.
(16.72, 19.24) Der wohl bekannt ist Grund...
(19.24, 20.36) ... die Liebe.
(20.36, 22.44) So ist es bei Hans Muster.
(22.44, 24.72) Die Liebe hat ihn nach Deutschland gezogen.
(24.72, 27.0) Und dort ist er seit vier Jahren.
(27.0, 29.4) Aber welter der für immer dort bleibt.
(0.0, 1.0) Gute Frage.
(1.0, 4.0) Ich stelle mir einen Gart am Viertel vor im PO bei den Leuten.
(4.0, 7.0) Und bis dort her, mein Name ist Peter Müller.
(7.0, 11.0) Und ich bin Wassermelone Heines vom Harry Styles.
```
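As a rough workaround, the offset can be re-added in post-processing. The sketch below simply assumes a fixed 30 s window whenever the timestamps reset, which is only an approximation of what the sequential algorithm should produce:
```python
# Approximate fix-up: whenever a chunk's start time jumps backwards, assume a new
# 30 s window started and shift all following timestamps by the accumulated offset.
def add_window_offsets(chunks, window_s=30.0):
    offset, prev_start, fixed = 0.0, None, []
    for chunk in chunks:
        start, end = chunk["timestamp"]
        if prev_start is not None and start < prev_start:
            offset += window_s
        prev_start = start
        fixed.append({"text": chunk["text"], "timestamp": (start + offset, end + offset)})
    return fixed
```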
### Expected behavior
The timestamps should be correct even if the audio is longer than 30 s, as they are when the chunked algorithm is used:
```
(0.0, 4.44) Er hatte schon mal eine Schnauze voll von allem und jedem.
(4.44, 6.28) Und er hat den Schluss getroffen.
(6.28, 7.8) Es hilft nichts mehr.
(7.8, 9.28) Ich wandere aus.
(9.28, 11.4) Das kann ein Grund sein,
(11.4, 14.48) wieso er eine Heimat für immer der Rückenträger will.
(14.48, 16.72) Oder es ist etwas ganz anderes.
(16.72, 19.24) Der wohl bekannt ist Grund...
(19.24, 20.36) ... die Liebe.
(20.36, 22.44) So ist es bei Hans Muster.
(22.44, 24.72) Die Liebe hat ihn nach Deutschland gezogen.
(24.72, 26.0) Und dort ist er seit vier Jahren.
(26.0, 29.0) Aber welter der für immer dort bleibt, gute Frage.
(29.0, 32.0) Wir stellen es dir an, am Viertel vor, im PO bei den Leuten.
(32.0, 35.0) Und bis dort her, mein Name ist Peter Müller.
(35.0, 39.0) Und jetzt ein Wassermelon Heines vom Harry Styles.
```
This output is from the above script using `chunked=True`. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35317 |
TITLE
Deepseek v2
COMMENTS
5
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
There is a stale PR: https://github.com/huggingface/transformers/pull/31976. Is anybody working on this model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/34173 |
TITLE
Add support for GOT-OCR2.0
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
As an OCR-2.0 model, GOT can handle all artificial optical signals (e.g., plain texts, math/molecular formulas, tables, charts, sheet music, and even geometric shapes) under various OCR tasks. On the input side, the model supports commonly used scene- and document-style images in slice and whole-page styles. On the output side, GOT can generate plain or formatted results (markdown/tikz/smiles/kern) via an easy prompt. Besides, the model enjoys interactive OCR features, i.e., region-level recognition guided by coordinates or colors.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Implementation: https://github.com/Ucas-HaoranWei/GOT-OCR2.0/
Paper: https://arxiv.org/abs/2409.01704 | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/33560 |
TITLE
fix tests with main revision and read token
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 1
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Fixes slow tests in mamba2 that were not passing because the read token was missing on the decorators. | [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/35264 |
TITLE
🚨🚨🚨 Limit backtracking in Nougat regexp
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Limit the number of repetitions in a regular expression pattern to prevent the method from hanging.
Here is a code sample to test previous and updated regex matches, along with a performance test.
The only example with a different result is `# 1.`, which I believe should be parsed; therefore this can also be considered a fix, not just a breaking change.
```python
import re
import time
# Original Regex (Vulnerable to ReDoS)
def original_post_process(generation):
return re.sub(r"^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)", "", generation, flags=re.M)
# Updated Regex (Avoids ReDoS by simplifying backtracking risks)
def updated_post_process(generation):
return re.sub(r"^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)", "", generation, flags=re.M)
# Test cases to validate equivalence and performance
def test_post_process_equivalence():
test_cases = [
# Simple headings
"# Section",
"# 1.2.3",
"# .1.2.3",
"# .i.v.x",
# Standard headings
"# Heading 1\n## 1. Subheading\n### 1.1 Sub-subheading\n#### IV. Roman numeral heading\nRegular text starts here.",
# Trailing spaces
"# \n## 1. \n### Roman numeral heading with spaces \nRegular text here.",
# Roman numerals
"# i\n# iv Roman numeral heading\n# x Section\nText with valid content.",
# Mixed content
"# Heading 1\n## Subheading 1\nRegular text.\n### Subheading with text\nSome more regular text.",
# Non-heading patterns
"# This is a valid heading\nSome text that shouldn't be removed.\n# Heading with text afterward\nText with valid content.",
# Completely empty or irregular inputs
"",
" \n \n ",
# Inputs with special characters
"# # Special heading\nRegular text.",
# Escaped markdown syntax
"\\# Escaped heading\n## Valid heading\nText content.",
# Multiline text blocks
"# Valid heading\n\nText under heading.\n\n## Another valid heading\n\nMore text here.",
# Random non-heading input
"This is just random text with no headings.\nAnother line of text.",
# Long problematic input for ReDoS
"# " + "0" * 25 + ":\n", # Long problematic input for ReDoS
# Large multiline input
"# Heading 1\n## 1. Subheading\n### 1.1 Sub-subheading\n#### IV. Roman numeral heading\nRegular text starts here.\n" * 100,
]
for i, input_str in enumerate(test_cases):
original_output = original_post_process(input_str)
updated_output = updated_post_process(input_str)
if original_output != updated_output:
print("\nInput:\n", input_str)
print("\nOriginal:\n", original_output)
print("\nUpdated:\n", updated_output)
print("\n" * 3)
# raise ValueError(f"Test {i + 1}: Outputs do not match!")
print(f"Test {i + 1}: Outputs match!")
# Performance comparison
def performance_test():
long_input = "# " + "0" * 25 + ":\n" # Long problematic input for ReDoS
# Test original method
start_time = time.time()
original_post_process(long_input)
print(f"Original method execution time: {time.time() - start_time:.6f} seconds")
# Test updated method
start_time = time.time()
updated_post_process(long_input)
print(f"Updated method execution time: {time.time() - start_time:.6f} seconds")
if __name__ == "__main__":
# Run tests
print("Running equivalence tests...")
test_post_process_equivalence()
print("\nRunning performance tests...")
performance_test()
```
Output:
```
Running equivalence tests...
Test 1: Outputs match!
Test 2: Outputs match!
Test 3: Outputs match!
Test 4: Outputs match!
Test 5: Outputs match!
Input:
#
## 1.
### Roman numeral heading with spaces
Regular text here.
Original:
## 1.
### Roman numeral heading with spaces
Regular text here.
Updated:
### Roman numeral heading with spaces
Regular text here.
Test 7: Outputs match!
Test 8: Outputs match!
Test 9: Outputs match!
Test 10: Outputs match!
Test 11: Outputs match!
Test 12: Outputs match!
Test 13: Outputs match!
Test 14: Outputs match!
Test 15: Outputs match!
Test 16: Outputs match!
Test 17: Outputs match!
Running performance tests...
Updated method execution time: 0.000503 seconds
```
Even for 100,000 zeros, the time now does not exceed a second.
| [
62,
73,
65
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision",
"run-slow",
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/34673 |
TITLE
Error when loading openbmb/RLHF-V
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: None
- Using GPU in script?: 1
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@zucchini-nlp, @amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
With the latest transformers version `transformers==4.46.2`, when I try to load this model by using:
```
model_name = "openbmb/RLHF-V"
model = AutoModelForCausalLM.from_pretrained(
model_name,
cache_dir=self.model_dir,
torch_dtype=self.torch_dtype,
low_cpu_mem_usage=True,
device_map=self.device,
attn_implementation="flash_attention_2" if "cuda" in str(self.device) else None,
).eval()
```
or use:
```
from transformers import pipeline
pipe = pipeline("text-generation", model="openbmb/RLHF-V", cache_dir="/root/llm-project/util/model")
```
An error occurred:
```
ValueError: The checkpoint you are trying to load has model type `beit3_llava` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
This seems to affect more than just this one model. Please identify the issue and fix it.
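For completeness, the usual first check for an unrecognized `model_type` is `trust_remote_code=True`; note that this only helps if the repository actually ships custom modeling code, which I have not verified for this checkpoint:
```python
# May or may not work for this repo -- it requires custom modeling code on the Hub.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openbmb/RLHF-V",
    trust_remote_code=True,
)
```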
### Expected behavior
Fix it | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34346 |
TITLE
Error when running Grounding DINO for batch inference.
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.0.dev0
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: None
- Using GPU in script?: None
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@clefourrier @zucchini-nlp @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I use Grounding DINO for batch inference, I encounter some issues:
When I perform batch inference on multiple images with different texts, it results in an error. However, if I change the texts so that they are all the same for every image, there is no problem.
The full code to reproduce it:
``` python
import torch, json
from PIL import Image
from PIL.ImageFile import ImageFile
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection, GroundingDinoProcessor, GroundingDinoForObjectDetection
def open_image_from_url(images: Image.Image | str | list[Image.Image | str]) -> Image.Image | list[Image.Image]:
def open_single_image(image: Image.Image | str) -> Image.Image:
if isinstance(image, (Image.Image, ImageFile)):
img = image
else:
img = Image.open(image)
if img.mode != "RGB":
img = img.convert("RGB")
return img
if isinstance(images, list):
return [open_single_image(i) for i in images]
else:
return open_single_image(images)
model: GroundingDinoForObjectDetection = AutoModelForZeroShotObjectDetection.from_pretrained(
"IDEA-Research/grounding-dino-tiny",
cache_dir='/root/llm-project/util/model',
low_cpu_mem_usage=True,
).to('cuda').eval()
processor: GroundingDinoProcessor = AutoProcessor.from_pretrained(
"IDEA-Research/grounding-dino-tiny",
cache_dir='/root/llm-project/util/model',
padding_side="left",
)
datalist: list[str, str] = []
with open('temp.jsonl', 'r') as f:
for line in f:
datalist.append(json.loads(line))
images: list[Image.Image] = [open_image_from_url(data['image_path']) for data in datalist]
captions: list[str] = [data['obj_to_detect'] for data in datalist]
with torch.inference_mode():
encoded_inputs = processor(
images=images,
text=captions,
max_length=300,
return_tensors="pt",
padding=True,
truncation=True,
).to('cuda')
outputs = model(**encoded_inputs)
target_sizes = [image.size[::-1] for image in images]
results: list = processor.post_process_grounded_object_detection(
outputs,
encoded_inputs["input_ids"],
box_threshold=0.3,
text_threshold=0.3,
target_sizes=target_sizes,
)
```
The content of the source jsonl file is like this:
The images are from COCO and they are all valid images.
``` json
{"image_path": "000000449603.jpg", "obj_to_detect": "person. wave. surfboard."}
{"image_path": "000000565776.jpg", "obj_to_detect": "kitchen. appliance. utensil."}
{"image_path": "000000226903.jpg", "obj_to_detect": "dining. food."}
{"image_path": "000000480936.jpg", "obj_to_detect": "woman. chair. meal."}
{"image_path": "000000276018.jpg", "obj_to_detect": "area. child. adult."}
```
The line `outputs = model(**encoded_inputs)` raises an error:
```
Traceback (most recent call last):
File "/root/anaconda3/envs/LVLM/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/anaconda3/envs/LVLM/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
cli.main()
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
run()
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "/root/.vscode-server/extensions/ms-python.debugpy-2024.12.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "/root/llm-project/LVLM/test.py", line 57, in <module>
outputs = model(**encoded_inputs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/models/grounding_dino/modeling_grounding_dino.py", line 2582, in forward
outputs = self.model(
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/models/grounding_dino/modeling_grounding_dino.py", line 2260, in forward
text_self_attention_masks, position_ids = generate_masks_with_special_tokens_and_transfer_map(input_ids)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/models/grounding_dino/modeling_grounding_dino.py", line 2050, in generate_masks_with_special_tokens_and_transfer_map
position_ids[row, previous_col + 1 : col + 1] = torch.arange(
RuntimeError: upper bound and larger bound inconsistent with step sign
```
Right before that, the inputs to the processor (images and texts) are:
```
the images:
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x426 at 0x7F5462AF91B0>,
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x421 at 0x7F5462B688E0>,
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7F5462B68970>,
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=482x640 at 0x7F5462B68820>,
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=416x640 at 0x7F5462B688B0>]
The texts:
['person. wave. surfboard.', 'kitchen. appliance. utensil.', 'dining. food.', 'woman. chair. meal.', 'area. child. adult.']
```
When I change the text input to `captions: list[str] = ["person."] * len(images)`, everything is OK.
### Expected behavior
I don't know why different text inputs in a batch raise an error during batch inference. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35471 |
TITLE
Unknown quantization type, got fp8
COMMENTS
32
REACTIONS
+1: 12
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.1
- Platform: macOS-15.1.1-arm64-arm-64bit
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@SunMarc @MekkCyber
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The issue arises when using `AutoModelForCausalLM.from_pretrained()`.
The model used is `"deepseek-ai/DeepSeek-V3"`
File "/Users/ruidazeng/Demo/chatbot.py", line 13, in init
self.model = AutoModelForCausalLM.from_pretrained(
File "/opt/anaconda3/envs/gaming-bot/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
return model_class.from_pretrained(
File "/opt/anaconda3/envs/gaming-bot/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3659, in from_pretrained
config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
File "/opt/anaconda3/envs/gaming-bot/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 173, in merge_quantization_configs
quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
File "/opt/anaconda3/envs/gaming-bot/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 97, in from_dict
raise ValueError(
ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet']
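For reference, the quantization method declared by the checkpoint can be inspected without downloading the weights (this is only a diagnostic, not a fix; `trust_remote_code=True` is an assumption here because this transformers version does not know the config class):
```python
# Diagnostic only: look at what quantization method the checkpoint's config declares.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-V3", trust_remote_code=True)
print(getattr(config, "quantization_config", None))  # expected to show quant_method 'fp8'
```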
### Expected behavior
To be able to run Deepseek-R1 | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34675 |
TITLE
CUDA Out Of Memory when training a DETR Object detection model with compute_metrics
COMMENTS
11
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
`transformers` version 4.47.0.dev0
`accelerate` version 0.34.2
`timm` version 1.0.11
`supervision` version 0.25.0rc2
### Who can help?
@muellerzr @Arth
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm training a DETR object detection model using the Trainer API. I have properly created the COCO dataset.
But when I run the Trainer with `compute_metrics`, I get "OutOfMemoryError: CUDA out of memory." I have reduced batch_size from 16 down to 1, but I get the same out-of-memory error.
```
def collate_fn(batch):
data = {}
data["pixel_values"] = torch.stack([x["pixel_values"] for x in batch])
data["labels"] = [x["labels"] for x in batch]
return data
```
Here's how I'm creating the `compute_metrics` function
```
# imports assumed by this excerpt (omitted in the original snippet)
from dataclasses import dataclass

import numpy as np
import supervision as sv
import torch
from torchmetrics.detection import MeanAveragePrecision

id2label = {id: label for id, label in enumerate(train_ds.classes)}
label2id = {label: id for id, label in enumerate(train_ds.classes)}
@dataclass
class ModelOutput:
logits: torch.Tensor
pred_boxes: torch.Tensor
class MAPEvaluator:
def __init__(self, image_processor, threshold=0.00, id2label=None):
self.image_processor = image_processor
self.threshold = threshold
self.id2label = id2label
def collect_image_sizes(self, targets):
"""Collect image sizes across the dataset as list of tensors with shape [batch_size, 2]."""
image_sizes = []
for batch in targets:
batch_image_sizes = torch.tensor(np.array([x["size"] for x in batch]))
image_sizes.append(batch_image_sizes)
return image_sizes
def collect_targets(self, targets, image_sizes):
post_processed_targets = []
for target_batch, image_size_batch in zip(targets, image_sizes):
for target, (height, width) in zip(target_batch, image_size_batch):
boxes = target["boxes"]
boxes = sv.xcycwh_to_xyxy(boxes)
boxes = boxes * np.array([width, height, width, height])
boxes = torch.tensor(boxes)
labels = torch.tensor(target["class_labels"])
post_processed_targets.append({"boxes": boxes, "labels": labels})
return post_processed_targets
def collect_predictions(self, predictions, image_sizes):
post_processed_predictions = []
for batch, target_sizes in zip(predictions, image_sizes):
batch_logits, batch_boxes = batch[1], batch[2]
output = ModelOutput(logits=torch.tensor(batch_logits), pred_boxes=torch.tensor(batch_boxes))
post_processed_output = self.image_processor.post_process_object_detection(
output, threshold=self.threshold, target_sizes=target_sizes
)
post_processed_predictions.extend(post_processed_output)
return post_processed_predictions
@torch.no_grad()
def __call__(self, evaluation_results):
predictions, targets = evaluation_results.predictions, evaluation_results.label_ids
image_sizes = self.collect_image_sizes(targets)
post_processed_targets = self.collect_targets(targets, image_sizes)
post_processed_predictions = self.collect_predictions(predictions, image_sizes)
evaluator = MeanAveragePrecision(box_format="xyxy", class_metrics=True)
evaluator.warn_on_many_detections = False
evaluator.update(post_processed_predictions, post_processed_targets)
metrics = evaluator.compute()
# Replace list of per class metrics with separate metric for each class
classes = metrics.pop("classes")
map_per_class = metrics.pop("map_per_class")
mar_100_per_class = metrics.pop("mar_100_per_class")
for class_id, class_map, class_mar in zip(classes, map_per_class, mar_100_per_class):
class_name = id2label[class_id.item()] if id2label is not None else class_id.item()
metrics[f"map_{class_name}"] = class_map
metrics[f"mar_100_{class_name}"] = class_mar
metrics = {k: round(v.item(), 4) for k, v in metrics.items()}
return metrics
eval_compute_metrics_fn = MAPEvaluator(image_processor=processor, threshold=0.01, id2label=id2label)
```
This is how I'm training the model
```
training_args = TrainingArguments(
output_dir=f"Malaria-finetune",
report_to="none",
num_train_epochs=10,
max_grad_norm=0.1,
learning_rate=5e-5,
warmup_steps=300,
per_device_train_batch_size=1,
dataloader_num_workers=2,
metric_for_best_model="eval_map",
greater_is_better=True,
load_best_model_at_end=True,
eval_strategy="epoch",
save_strategy="epoch",
save_total_limit=2,
remove_unused_columns=False,
eval_do_concat_batches=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=pytorch_dataset_train,
eval_dataset=pytorch_dataset_valid,
processing_class=processor,
data_collator=collate_fn,
compute_metrics=eval_compute_metrics_fn
)
trainer.train()
```
I would like to please get help on this.
### Expected behavior
I expected the Trainer to train normally on this custom dataset for 10 epochs without any errors | [
66,
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/33358 |
TITLE
Can't save quantized models
COMMENTS
13
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA T600 Laptop GPU
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# imports assumed by this snippet (the original excerpt omits them)
from sentence_transformers import SentenceTransformer
from torch.nn import Embedding, Linear
from torch.quantization import (
    default_dynamic_qconfig,
    float_qparams_weight_only_qconfig,
    quantize_dynamic,
)

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2', device="cpu")
model._modules["1"].pooling_mode_mean_tokens = False
model._modules["1"].pooling_mode_cls_token = True
qconfig_dict = {
    Embedding: float_qparams_weight_only_qconfig,
    Linear: default_dynamic_qconfig
}
q_model = quantize_dynamic(model, qconfig_dict)
# works well up to this point: the model works and takes a lot less space, great success!
q_model.save_pretrained("lamashnikov/cls-quantitized-paraphrase-MiniLM-L6-v2")
# the problem happens here
```
procedure was described here:
https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/distillation/model_quantization.py
### Expected behavior
not that:
Traceback (most recent call last):
File "/home/censored/perso/./my_project.py", line 190, in
main()
File "/home/censored/perso/./my_project.py", line 171, in main
q_model.save_pretrained("lamashnikov/cls-quantitized-paraphrase-MiniLM-L6-v2")
File "/home/censored/.local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1072, in save_pretrained
self.save(
File "/home/sencored/.local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 1037, in save
module.save(model_path, safe_serialization=safe_serialization)
File "/home/censored/.local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 180, in save
self.auto_model.save_pretrained(output_path, safe_serialization=safe_serialization)
File "/home/censored/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2698, in save_pretrained
shared_names, disjoint_names = _find_disjoint(shared_ptrs.values(), state_dict)
File "/home/censored/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 650, in _find_disjoint
areas.append((tensor.data_ptr(), _end_ptr(tensor), name))
AttributeError: 'torch.dtype' object has no attribute 'data_ptr' | [
40,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Quantization",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34241 |
TITLE
How to output token by token use transformers?
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
...
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
...
### Expected behavior
How can I output text token by token using transformers? | [
75,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"Discussion",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34830 |
TITLE
Data collator class type integrity is not intact throughout the runtime
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers: v4.46.3
python: 3.11
### Who can help?
trainer: @muellerzr @SunMarc
Anyone else in the community is welcome to comment or make suggestions as well. Thank you.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The original data type / class information of the data collator is completely lost after the step below, which happens at a couple of places, as listed here:
https://github.com/huggingface/transformers/blob/f297af55dfc27485189f352cd36b4683de12e0b3/src/transformers/trainer.py#L975
https://github.com/huggingface/transformers/blob/f297af55dfc27485189f352cd36b4683de12e0b3/src/transformers/trainer.py#L1071
https://github.com/huggingface/transformers/blob/f297af55dfc27485189f352cd36b4683de12e0b3/src/transformers/trainer.py#L1113
The loss of this information happens because the collator is completely replaced with the `RemoveColumnsCollator` wrapper class when the original dataset is **NOT** of type `datasets.Dataset`, mostly to support `datasets.IterableDataset` and others.
This raises issues for complex use cases where we wish to inject custom behaviour into the Trainer object when the collator is of a certain class type. Since the current code completely removes that piece of information and changes the collator to `RemoveColumnsCollator`, it is really hard to know what the original data collator class was. There are workarounds, such as writing case-specific code that handles `RemoveColumnsCollator` with special care; however, given the growing transformers code base, things could change in the future and break such case-specific code. It would be better to handle this situation by preserving the original collator class information.
I propose the following two options
* Dynamically modify the `RemoveColumnsCollator` class to subclass from the original data collator class that was passed.
This is a bit of a fancy way of doing it, by creating a custom class/type with Python's `type()` API (see the sketch after this list).
OR
* Monkey patch the data collator object's caller functions (like `__call__`, etc.) to include the remove-columns logic on top of them. This would mean removing `RemoveColumnsCollator` completely and doing a simple monkey patch instead.
Monkey patching is already adopted in existing HF code, which makes it a good option for this fix and keeps it in line with the existing code style:
https://github.com/huggingface/accelerate/blob/d7b1b368e9f484a18636a71600566b757d5cf87e/src/accelerate/utils/operations.py#L819
I am happy to discuss and raise a PR to fix this behaviour.
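For illustration, here is a minimal sketch of the first option. The helper name and details are hypothetical — this is not the actual `Trainer` code, just a sketch of how `type()` could keep the original collator class in the wrapper's MRO:
```python
def wrap_remove_columns(data_collator, signature_columns):
    """Sketch only: build a wrapper class that subclasses the original collator's
    class, so isinstance() checks on the trainer's collator keep working."""
    base_cls = type(data_collator)

    def __call__(self, features):
        # drop columns the model's forward() does not accept, then delegate
        features = [{k: v for k, v in f.items() if k in signature_columns} for f in features]
        return data_collator(features)

    wrapper_cls = type(f"RemoveColumns{base_cls.__name__}", (base_cls,), {"__call__": __call__})
    wrapped = object.__new__(wrapper_cls)          # bypass base_cls.__init__
    wrapped.__dict__.update(getattr(data_collator, "__dict__", {}))
    return wrapped
```
With something along these lines, `isinstance(trainer.data_collator, <original collator class>)` checks would keep passing even after the columns-removal wrapping.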
### Expected behavior
The class type information of the original data collator has to be intact and preserved throughout the runtime. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33294 |
TITLE
"Qwen2-VL FP16 inference results in errors or gibberish output."
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
based on this pull request: https://github.com/huggingface/transformers/pull/33211
python: Python 3.10.12
### Inference code:
```
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from torch.cuda.amp import autocast
model_dir = "/workspace/qwenvl-dev/Qwen2-VL-2B-Instruct"
# Load the model in half-precision on the available device(s)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_dir,
torch_dtype=torch.float16, # Explicitly set to float16 for half-precision
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_dir)
# Image
image_path = "/workspace/qwenvl-dev/demo.jpeg"
image = Image.open(image_path)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
output_ids = model.generate(
**inputs,
max_new_tokens=128,
)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
#### The error information
```
Traceback (most recent call last):
File "/workspace/qwenvl-dev/test_infer.py", line 61, in <module>
output_ids = model.generate(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2015, in generate
result = self._sample(
File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2998, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
### Another inference run with do_sample set to False
```
output_ids = model.generate(
**inputs,
max_new_tokens=128,
do_sample=False,
top_k=None,
top_p=None,
temperature=None,
)
```
#### Inference result
```
['!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!']
```
### The float32 result is
```
['The image depicts a serene beach scene with a woman and her dog. The woman is sitting on the sand, wearing a plaid shirt and black pants, and appears to be smiling. She is giving a high-five to the dog, which is sitting on the sand next to her. The dog has a harness around its neck and is looking up at the woman with its front paws raised in a gesture of affection or playfulness. The background shows the ocean with gentle waves, and the sky is clear with a soft glow from the setting or rising sun, casting a warm light over the entire scene. The overall atmosphere is peaceful and joyful']
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from torch.cuda.amp import autocast
model_dir = "/workspace/qwenvl-dev/Qwen2-VL-2B-Instruct"
# Load the model in half-precision on the available device(s)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_dir,
torch_dtype=torch.float16, # Explicitly set to float16 for half-precision
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_dir)
# Image
image_path = "/workspace/qwenvl-dev/demo.jpeg"
image = Image.open(image_path)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
output_ids = model.generate(
**inputs,
max_new_tokens=128,
)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
### Expected behavior
I hope this error gets fixed and merged in. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34950 |
TITLE
Bug: Qwen2 vl mrope support?
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: distributed
- GPU type: CUDA GPU
### Who can help?
@ArthurZucker @zucchini-nlp @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to further train the Qwen/Qwen2-VL-7B-Instruct model and got this error:
```markdown
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/pydevd.py", line 3704, in <module>
[rank0]: main()
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/pydevd.py", line 3689, in main
[rank0]: globals = debugger.run(setup["file"], None, None, is_module)
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/pydevd.py", line 2687, in run
[rank0]: return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/pydevd.py", line 2695, in _exec
[rank0]: globals = pydevd_runpy.run_path(file, globals, "__main__")
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
[rank0]: return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
[rank0]: _run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
[rank0]: File "/home/work/.vscode-server/extensions/ms-python.debugpy-2024.13.2024111901-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
[rank0]: exec(code, run_globals)
[rank0]: File "/data/workspace/howard/workspace/Llava/qwen2_vl_instruct_post_train.py", line 423, in <module>
[rank0]: main(train_args)
[rank0]: File "/data/workspace/howard/workspace/Llava/qwen2_vl_instruct_post_train.py", line 329, in main
[rank0]: model = Qwen2VLForConditionalGeneration.from_pretrained(model_name_or_path, config=config)
[rank0]: File "/data/workspace/howard/workspace/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4096, in from_pretrained
[rank0]: model = cls(config, *model_args, **model_kwargs)
[rank0]: File "/data/workspace/howard/workspace/.venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1422, in __init__
[rank0]: self.model = Qwen2VLModel(config)
[rank0]: File "/data/workspace/howard/workspace/.venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1070, in __init__
[rank0]: self.rotary_emb = Qwen2VLRotaryEmbedding(config=config)
[rank0]: File "/data/workspace/howard/workspace/.venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 146, in __init__
[rank0]: self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
[rank0]: KeyError: 'mrope'
```
While debugging, I realized that inside the `Qwen2VLRotaryEmbedding` class, `ROPE_INIT_FUNCTIONS` checks the rope setting and maps it to the corresponding init function.
However, the Qwen2-VL model uses Multimodal Rotary Position Embedding (M-RoPE), which does not seem to be handled currently. Are there any plans or work in progress to support this rope setting, or am I missing something?
Current implementation of ROPE_INIT_FUNCTIONS:
```python
ROPE_INIT_FUNCTIONS = {
"default": _compute_default_rope_parameters,
"linear": _compute_linear_scaling_rope_parameters,
"dynamic": _compute_dynamic_ntk_parameters,
"yarn": _compute_yarn_parameters,
"longrope": _compute_longrope_parameters,
"llama3": _compute_llama3_parameters,
}
```
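For context, here is a sketch of the kind of stop-gap one might try — under the assumption (mine, not a confirmed fix) that M-RoPE can reuse the default inverse-frequency computation and only changes how position ids are laid out:
```python
# assumption: mrope frequencies follow the default rope computation
from transformers.modeling_rope_utils import (
    ROPE_INIT_FUNCTIONS,
    _compute_default_rope_parameters,
)

ROPE_INIT_FUNCTIONS.setdefault("mrope", _compute_default_rope_parameters)
```
A monkey patch like this is only a workaround sketch; the proper fix would be for the library to handle the `mrope` rope type itself.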
### Expected behavior
should support mrope. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35382 |
TITLE
`RuntimeError: self and mat2 must have the same dtype, but got Float and BFloat16` when training with `torch_compile`
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.12.5
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@ArthurZucker @muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I try continual pretraining of ModernBERT with an MLM objective and the `torch_compile` flag of my `TrainingArguments` set to `True`, I get the error below:
```python
0%| | 0/1223301 [00:00<?, ?it/s]
/home/dev/.venv/lib/python3.12/site-packages/onnxscript/converter.py:820: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/dev/.venv/lib/python3.12/site-packages/onnxscript/converter.py:820: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] Graph break from `Tensor.item()`, consider setting:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] torch._dynamo.config.capture_scalar_outputs = True
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] or:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] to include these operations in the captured graph.
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] Graph break: from user code at:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 711, in torch_dynamo_resume_in__unpad_modernbert_input_at_710
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] max_seqlen_in_batch = int(seqlens_in_batch.max().item())
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
Traceback (most recent call last):
File "/home/dev/encoder/scripts/train/train_modernbert.py", line 206, in <module>
trainer.train(
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2163, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2523, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3668, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3722, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 820, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 808, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1023, in forward
@add_start_docstrings_to_model_forward(MODERNBERT_INPUTS_DOCSTRING)
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1055, in torch_dynamo_resume_in_forward_at_1055
input_ids, indices, cu_seqlens, max_seqlen, position_ids, labels = _unpad_modernbert_input(
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 913, in forward
layer_outputs = encoder_layer(
^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 519, in forward
def forward(
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 529, in torch_dynamo_resume_in_forward_at_529
attn_outputs = self.attn(
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1525, in forward
fw_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1478, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 1977, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_/y7/cy7xv2rrzhznbq3e2wurnq5pmygfytvnpovxlh5bugtoa3ebwy6f.py", line 277, in call
extern_kernels.addmm(buf9, buf7, reinterpret_tensor(buf8, (1152, 768), (1, 1152), 0), alpha=1, beta=1, out=buf10)
RuntimeError: self and mat2 must have the same dtype, but got Float and BFloat16
```
This does not occur when finetuning for a classification task.
I am using `bfloat16` mixed precision.
### Expected behavior
The training works. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34671 |
TITLE
`DataCollatorForMultipleChoice` exists in the docs but not in the package
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 1
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Move the `DataCollatorForMultipleChoice` implementation from the docs to `transformers.data.data_collator.py`.
### Motivation
The [`transformers` docs](https://huggingface.co/docs/transformers/tasks/multiple_choice) provide all collator code needed to run `ForMultipleChoice` fine-tuning for datasets like SWAG. The docs say that *"`transformers` doesn’t have a data collator for multiple choice, so you’ll need to adapt the `DataCollatorWithPadding` to create a batch of examples"*, but... why? Why not add the code given in the docs to the package? It's the same repo...
https://github.com/huggingface/transformers/blob/a06a0d12636756352494b99b5b264ac9955bc735/docs/source/en/tasks/multiple_choice.md?plain=1#L115-L160
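For reference, a minimal sketch of such a collator, following the flatten–pad–unflatten pattern the docs describe (paraphrased here rather than copied verbatim):
```python
from dataclasses import dataclass
from typing import Optional, Union

import torch
from transformers.tokenization_utils_base import PaddingStrategy, PreTrainedTokenizerBase


@dataclass
class DataCollatorForMultipleChoice:
    """Flattens (batch, num_choices) features, pads them, and restores the shape."""

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0] else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        # flatten: one entry per (example, choice) pair
        flattened = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)]
            for feature in features
        ]
        flattened = sum(flattened, [])
        batch = self.tokenizer.pad(
            flattened,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )
        # un-flatten back to (batch_size, num_choices, seq_len)
        batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
        batch["labels"] = torch.tensor(labels, dtype=torch.int64)
        return batch
```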
### Your contribution
None, the code is already there. | [
74,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0
] | [
"Documentation",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35174 |
TITLE
DataCollator documentation references the wrong version of Nvidia GPUs for accelerated training on tensor cores
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 1
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0
- Platform: Linux-5.15.0-1073-azure-x86_64-with-glibc2.35
- Python version: 3.11.0rc1
- Huggingface_hub version: 0.26.5
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Users can leverage the the powerful `pad_to_multiple_of` parameter in the `DataCollatorForSeq2Seq` class (and other data collator classes) whose documentation references the following:
"This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta)."
This seems to be slightly erroneous - the first Nvidia GPU architecture to introduce tensor cores was indeed the Volta architecture, but it has a compute capability of *7.0*, not *7.5*. Nvidia's compute capability 7.5 chips were introduced in the Turing (T4) architecture. Presumably, the documentation is only slightly erroneous in its numeric definition - V100 (Volta) chips should indeed see a large benefit in performance, but so should the T4 (Turing) chips.
Interestingly, the Volta-arch chips, despite being older and having a lower compute capability, actually have twice the number of tensor cores and CUDA cores (640 and 5,120) that the Turing chips have (320 and 2,560). In this sense, training on Volta should give a larger improvement than training on Turing when passing this arg. Nonetheless, users are still able to leverage the tensor cores on the newer, less powerful T4 if they so choose. By accurately relabeling the original compute capability of Volta as *7.0*, one removes any confusion here.
A link to Nvidia's directory of CUDA compute capabilities can be found here: [https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus)
### Expected behavior
The doc strings of the `DataCollatorWithPadding`, `DataCollatorForTokenClassification`, and `DataCollatorForSeq2Seq` classes, which make use of the `pad_to_multiple_of` argument, should be updated with only a slight adjustment to the numeric definition of the compute capability. They should read:
"This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= **7.0** (Volta).",
as opposed to the current and slightly erroneous:
"This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= **7.5** (Volta)."
Lastly, the `DataCollatorForLanguageModeling` class also allows for users to pass the `pad_to_multiple_of` argument, but makes no reference to the added benefit of using this arg on GPUs with tensor cores. Presumably this docstring should read the same as the other data collator classes.
I will fork the repo and make these changes myself before linking the PR to this issue. @johngrahamreynolds | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34693 |
TITLE
top-p sampling gives different results even after fixing all random seeds
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
python: 3.11.9
transformers: 4.43.3
torch: 2.4.0+cu121
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a pipeline to generate text with Llama 3.1 8B, even though I have fixed all the random seeds, the output is different every time for the same prompt. If I set do_sample=False, the output is the same each time.
I understand that do_sample does top_p (or top_k) sampling and therefore there is randomness, but since I have fixed the seed, shouldn't the outputs be the same?
below is the script to reproduce:
```python
import os, random, numpy as np
import transformers
import torch
cache_dir = "some_dir"
model_size="8B"
model_id = f"meta-llama/Meta-Llama-3.1-{model_size}-Instruct"
def seed_everything(seed=1):
os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
# torch.use_deterministic_algorithms(True)
seed_everything(1)
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16, "cache_dir": cache_dir},
device_map="auto",
# do_sample=False
)
message = [
{"role": "system", "content": "You are a helpful assistant that generate random sentences."},
{"role": "user", "content": "please generate a random sentence."}
]
for _ in range(5):
outputs = pipeline(
message,
max_new_tokens = 2048
)
print(outputs[0]["generated_text"][-1]['content'])
```
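For comparison, a minimal sketch of re-seeding immediately before every call — assuming the intent is for each iteration to start from an identical RNG state rather than keep consuming the state set once before the loop:
```python
from transformers import set_seed

for _ in range(5):
    set_seed(1)  # reset python/numpy/torch RNGs before every generation
    outputs = pipeline(message, max_new_tokens=2048)
    print(outputs[0]["generated_text"][-1]["content"])
```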
### Expected behavior
With the random seed fixed and the same prompt, each generation should give the same results. | [
64,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Generation"
] |
https://api.github.com/repos/huggingface/transformers/issues/33983 |
TITLE
Enhancing RoBERTa Robustness through Adversarial Training
COMMENTS
0
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
The goal of this feature is to implement Adversarial Training for the RoBERTa model to enhance its robustness against adversarial examples. Adversarial training involves generating perturbed inputs (adversarial examples) during the training phase, allowing the model to learn how to withstand such attacks. This improves the model's generalization and performance on unseen data.
# Implementation Overview:
### **Adversarial Example Generation:**
1. Methods: Utilize techniques like FGSM (Fast Gradient Sign Method) or PGD (Projected Gradient Descent)
2. Integration: Modify the training loop to incorporate a step that generates adversarial examples for each batch of data.
### **Loss Function Adjustment:**
Combine the traditional loss (e.g., cross-entropy) with a loss calculated on the adversarial examples. This can be done using a weighted sum to balance the two components.
### **Training Procedure:**
Modify the training loop. For each epoch:
1. Generate adversarial examples from the input batch.
2. Compute the loss on both clean and adversarial examples.
3. Update the model weights based on the combined loss.
### **Hyperparameter Tuning:**
Introduce parameters such as the adversarial strength (epsilon) and the weighting factor (λ) to adjust the training dynamics and effectiveness.
### Evaluation Metrics:
Evaluate model performance using metrics like accuracy, precision, recall, and F1-score on both clean and adversarial datasets to measure the robustness improvements.
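As an illustration of the training loop described above, here is a minimal FGSM-style sketch that perturbs the input embeddings of a RoBERTa classifier. The model name, epsilon, learning rate and the weighting factor λ are placeholder choices, and this is only a sketch of the proposal, not a finished implementation:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
epsilon, lam = 1e-3, 0.5  # adversarial strength and weighting factor (placeholders)


def training_step(batch_texts, labels):
    enc = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)
    labels = torch.tensor(labels)

    # 1. clean forward pass on input embeddings so we can perturb them
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    clean_out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"], labels=labels)
    clean_loss = clean_out.loss
    grad = torch.autograd.grad(clean_loss, embeds, retain_graph=True)[0]

    # 2. FGSM: perturb embeddings in the direction of the gradient sign
    adv_embeds = embeds.detach() + epsilon * grad.sign()
    adv_out = model(inputs_embeds=adv_embeds, attention_mask=enc["attention_mask"], labels=labels)

    # 3. combined loss on clean and adversarial examples
    loss = (1 - lam) * clean_loss + lam * adv_out.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
PGD would replace the single-step perturbation with several projected gradient steps inside the same structure.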
# Link to Paper:
[Adversarial Training for Natural Language Processing](https://arxiv.org/abs/1906.05955)
[Adversarial Examples for Evaluating Reading Comprehension Systems](https://arxiv.org/abs/1904.07236)
[Adversarial Training for Large Neural Language Models](https://arxiv.org/abs/1909.03247)
[Towards Robustness Against Adversarial Attacks in Natural Language Processing](https://arxiv.org/abs/2002.07677)
[Adversarial Training with Natural Language Processing](https://arxiv.org/abs/2103.09582)
### Motivation
The motivation for this feature arises from the increasing importance of model robustness in real-world applications. Many NLP models, including RoBERTa, are vulnerable to adversarial attacks that can lead to significant performance degradation.
Real-World Applications: In critical applications like sentiment analysis, spam detection, or other classification tasks, adversarial inputs can lead to serious consequences, such as misclassification of malicious content.
Frustration with Current Limitations: I often find that while RoBERTa performs excellently on clean datasets, its inability to generalize against adversarial examples hampers its deployment in production. This feature aims to address that gap.
### Your contribution
I would like to contribute to the implementation of adversarial training for the RoBERTa model to enhance its robustness against adversarial attacks. I have reviewed the CONTRIBUTING.md file and am familiar with the contribution guidelines.
# I plan to:
Implement adversarial training techniques based on insights from the relevant research papers.
[**1**] Create test cases to validate the effectiveness of the adversarial training implementation.
[**2**] Update the documentation to include usage examples and instructions for leveraging this feature.
I am excited to submit a Pull Request (PR) once the implementation is complete and ensure it aligns with the project standards. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34730 |
TITLE
RuntimeError in `_group_tensors_by_device_and_dtype` (torch/optim/optimizer.py) when training with FSDP on N>1 GPUs.
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes (FSDP)
- Using GPU in script?: Yes
- GPU type: NVIDIA RTX A5000
### Who can help?
@muellerzr @SunM
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Output Error
```
[rank1]: File "/data/julien_piet/llm-attack-detect/scripts/bug.py", line 254, in <module>
[rank1]: train(
[rank1]: File "/data/julien_piet/llm-attack-detect/scripts/bug.py", line 247, in train
[rank1]: trainer.train()
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
[rank1]: return inner_training_loop(
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/transformers/trainer.py", line 2534, in _inner_training_loop
[rank1]: self.optimizer.step()
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/accelerate/optimizer.py", line 171, in step
[rank1]: self.optimizer.step(closure)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
[rank1]: return func.__get__(opt, opt.__class__)(*args, **kwargs)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
[rank1]: out = func(*args, **kwargs)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank1]: ret = func(self, *args, **kwargs)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
[rank1]: adamw(
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
[rank1]: return func(*args, **kwargs)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
[rank1]: func(
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 480, in _multi_tensor_adamw
[rank1]: grouped_tensors = Optimizer._group_tensors_by_device_and_dtype(
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 516, in _group_tensors_by_device_and_dtype
[rank1]: return _group_tensors_by_device_and_dtype(tensorlistlist, with_indices) # type: ignore[return-value, arg-type]
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/utils/_foreach_utils.py", line 37, in _group_tensors_by_device_and_dtype
[rank1]: return torch._C._group_tensors_by_device_and_dtype(tensorlistlist, with_indices)
[rank1]: RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32/64 notwithstanding
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/julien_piet/llm-attack-detect/scripts/bug.py", line 254, in <module>
[rank0]: train(
[rank0]: File "/data/julien_piet/llm-attack-detect/scripts/bug.py", line 247, in train
[rank0]: trainer.train()
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
[rank0]: return inner_training_loop(
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/transformers/trainer.py", line 2534, in _inner_training_loop
[rank0]: self.optimizer.step()
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/accelerate/optimizer.py", line 171, in step
[rank0]: self.optimizer.step(closure)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
[rank0]: return func.__get__(opt, opt.__class__)(*args, **kwargs)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper
[rank0]: out = func(*args, **kwargs)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank0]: ret = func(self, *args, **kwargs)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 220, in step
[rank0]: adamw(
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 154, in maybe_fallback
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 782, in adamw
[rank0]: func(
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/adamw.py", line 480, in _multi_tensor_adamw
[rank0]: grouped_tensors = Optimizer._group_tensors_by_device_and_dtype(
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 516, in _group_tensors_by_device_and_dtype
[rank0]: return _group_tensors_by_device_and_dtype(tensorlistlist, with_indices) # type: ignore[return-value, arg-type]
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/data/julien_piet/llm-attack-detect/env/lib/python3.10/site-packages/torch/utils/_foreach_utils.py", line 37, in _group_tensors_by_device_and_dtype
[rank0]: return torch._C._group_tensors_by_device_and_dtype(tensorlistlist, with_indices)
[rank0]: RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32/64 notwithstanding
```
#### Original Code
```python
import os
from dataclasses import dataclass, field
from typing import Any, Optional
import numpy as np
import torch
from datasets import Dataset
from sklearn.utils.class_weight import compute_class_weight
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
HfArgumentParser,
Trainer,
TrainingArguments,
)
@dataclass
class CustomTrainingArguments(TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(default=1024)
lr_scheduler_type: Optional[str] = field(default="cosine_with_restarts")
per_device_train_batch_size: int = field(default=4)
per_device_eval_batch_size: int = field(default=4)
output_dir: Optional[str] = field(default="output")
remove_unused_columns: bool = False
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(
default="meta-llama/Llama-3.2-1B-Instruct"
)
pad_token: str = field(
default="<|finetune_right_pad_id|>", metadata={"help": "Padding token."}
)
unk_token: str = field(
default="<|reserved_special_token_0|>",
metadata={"help": "Unknown token."},
)
class SupervisedDataset:
def __init__(self, data, tokenizer, training_args):
data_dict = SupervisedDataset._preprocess(
data, tokenizer, training_args
)
self.input_ids = data_dict["input_ids"]
self.labels = data_dict["labels"]
self.attention_mask = data_dict["attention_mask"]
self.classification_labels = [
d["messages"][-1]["content"] for d in data
]
# Compute class weights for imbalanced classes
self.class_weights, self.class_values, self.class_indices = (
SupervisedDataset.get_class_weights(
self.classification_labels, tokenizer
)
)
self.classification_labels = [
self.class_indices[label] for label in self.classification_labels
]
@staticmethod
def get_class_weights(labels, tokenizer):
classes = sorted(list(set(labels)))
class_indices = {label: idx for idx, label in enumerate(classes)}
label_indices = [class_indices[label] for label in labels]
class_values = []
for class_name in classes:
class_values.append(
tokenizer.encode(class_name, add_special_tokens=False)[0]
)
class_weights = compute_class_weight(
class_weight="balanced",
classes=np.unique(label_indices),
y=label_indices,
)
return class_weights, class_values, class_indices
@staticmethod
def _preprocess(data, tokenizer, training_args):
formatted_inputs = [
tokenizer.apply_chat_template(d["messages"], tokenize=False)
for d in data
]
formatted_prompts = [
tokenizer.apply_chat_template(
d["messages"][:-1], tokenize=False, add_generation_prompt=True
)
for d in data
]
tokenized_inputs = tokenizer(
formatted_inputs,
padding=True,
padding_side="left",
return_tensors="pt",
add_special_tokens=False,
)
tokenized_prompts = tokenizer(
formatted_prompts,
padding=True,
padding_side="left",
return_tensors="pt",
add_special_tokens=False,
)
attention_mask = tokenized_prompts["attention_mask"]
input_ids = tokenized_prompts["input_ids"]
labels = tokenized_inputs["input_ids"][
:, tokenized_prompts["input_ids"].shape[1]
]
attention_mask = attention_mask[:, -training_args.model_max_length :]
input_ids = input_ids[:, -training_args.model_max_length :]
return {
"input_ids": input_ids,
"labels": labels,
"attention_mask": attention_mask,
}
def convert_to_dataset(self):
return Dataset.from_dict(
{
"input_ids": self.input_ids,
"labels": self.labels,
"attention_mask": self.attention_mask,
}
)
# Custom Trainer with weighted loss
class WeightedLoss:
def __init__(self, class_weights=None, class_values=None):
self.class_weights = torch.tensor(class_weights).cuda()
self.class_values = class_values
def compute_loss(self, outputs, labels, **kwargs):
logits = outputs.get("logits")
# Compute loss based on last token logits
logits = logits[:, -1, self.class_values].reshape(
-1, len(self.class_values)
)
ce_labels = torch.tensor(
[self.class_values.index(v) for v in labels]
).to(labels.device)
if self.class_weights.dtype != logits.dtype:
self.class_weights = self.class_weights.to(logits.dtype)
loss_fct = torch.nn.CrossEntropyLoss(weight=self.class_weights)
loss = loss_fct(logits, ce_labels)
return loss
# Load and prepare the dataset
def load_and_prepare_data(training_args, tokenizer):
dataset = [
{
"messages": [
{
"role": "user",
"content": (
"Please respond with " + ("no" if i % 2 else "yes")
),
},
{"role": "assistant", "content": "no" if i % 2 else "yes"},
]
}
for i in range(1000)
]
dataset = SupervisedDataset(
dataset,
tokenizer,
training_args,
)
class_weights, class_values, class_indices = (
dataset.class_weights,
dataset.class_values,
dataset.class_indices,
)
dataset = dataset.convert_to_dataset()
return (
dataset,
None,
class_weights,
class_values,
class_indices,
)
# Training function
def train(model_args, training_args):
if training_args.lr_scheduler_type == "cosine_with_restarts":
training_args.lr_scheduler_kwargs = {
"num_cycles": 1 + training_args.num_train_epochs // 10
}
# Load the pretrained model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
truncation_side="left",
)
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
# Augment tokenizer
if tokenizer.pad_token is None:
tokenizer.pad_token = model_args.pad_token
if tokenizer.unk_token is None:
tokenizer.unk_token = model_args.unk_token
# Load and prepare data
train_dataset, _, class_weights, class_values, _ = load_and_prepare_data(
training_args, tokenizer
)
# Loss function
custom_loss = WeightedLoss(class_weights, class_values)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
compute_loss_func=lambda x, y, **kwargs: custom_loss.compute_loss(x, y),
)
# Start training
trainer.train()
trainer.save_model(output_dir=training_args.output_dir)
if __name__ == "__main__":
parser = HfArgumentParser((ModelArguments, CustomTrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()
train(
model_args,
training_args,
)
```
#### Command
Command that triggers the error (considering the previous code is in a file called `bug.py`)
```
torchrun --nproc_per_node=2 --master_port=19527 bug.py --model_name_or_path meta-llama/Llama-3.2-1B-Instruct --output_dir outputs/test/ --num_train_epochs 5 --per_device_train_batch_size 4 --model_max_length 1024 --gradient_accumulation_steps 8 --evaluation_strategy "no" --save_strategy "no" --save_total_limit 1 --learning_rate 2.5e-6 --weight_decay 0. --warmup_ratio 0.03 --lr_scheduler_type "cosine_with_restarts" --logging_steps 1 --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap "LlamaDecoderLayer" --bf16 True --tf32 True
```
### Expected behavior
I'm trying to fine-tune a model using the `Trainer` API. I am using `torchrun` with FSDP to distribute the training over multiple GPUs. If I run the provided code with a single process, it works fine. However, if I increase `nproc_per_node`, I get the error provided with the example.
This error first seemed to be a PyTorch error, for which I created an issue [here](https://github.com/pytorch/pytorch/issues/140471). However, as pointed out by @JoyceZhangSS and @jiaqiw09, this is an issue related to transformers version 4.46.2: 4.46.1 does not have this bug, and training happens as expected.
I reproduced the error in a standalone file with a dummy dataset, provided in this issue. However, it occurs with any dataset, and with the standard loss: the [default alpaca training code](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py) leads to the same error, both with Llama and OPT models. I did some investigation into the issue that might be helpful:
* I was able to reproduce this error in multiple environments.
* This error is triggered during the optimization step. Specifically, it is triggered by this snippet of code in `torch/optim/adamw.py:480`:
```python
grouped_tensors = Optimizer._group_tensors_by_device_and_dtype(
[params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps] # type: ignore[list-item]
)
```
* The problem stems from the grads being bf16 while the rest are float32 --- in `transformers==4.46.1`, all groups are float32.
* I looked at the diff between both versions and found the change responsible for the bug. In `trainer.py`, you did the following change:
```python
2473 - with self.accelerator.accumulate(model):
2474 + # We explicitly want to avoid relying on `accelerator.accumulate` for generation training
2475 + context = (
2476 + functools.partial(self.accelerator.no_sync, model=model)
2477 + if i == len(batch_samples) - 1
2478 + else contextlib.nullcontext
2479 + )
2480 + with context():
```
This context seems responsible for syncing the gradients across devices, so I tried reverting this change, and the error stops happening. I don't know enough about this to understand what the context does precisely, or why you do not want to rely on it, but removing it seems to be what broke the code. | [
64,
17,
80
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
"bug",
"PyTorch FSDP",
"Accelerate"
] |
https://api.github.com/repos/huggingface/transformers/issues/34602 |
TITLE
NaN model parameter found in meta-llama/Llama-3.2-11B-Vision under 4.46.1 version
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
We tried with H100/A100 GPU machine both got the same issues.
transformers==4.46.1
torch==2.3.1+cu121
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers
import torch
from transformers import (
AutoConfig,
AutoModelForCausalLM)
print(transformers.__version__)
print(torch.__version__)
model_name_or_path ="meta-llama/Llama-3.2-11B-Vision"
def check_for_nan_parameters(model):
for name, param in model.named_parameters():
if torch.isnan(param).any(): # Check if any value in the parameter tensor is NaN
print(f"NaN found in parameter: {name}")
return True
print("No NaNs found in model parameters.")
return False
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
torch_dtype=torch.bfloat16,device_map ='auto')
print(check_for_nan_parameters(model))
```
# The above code will return
#NaN found in parameter: model.layers.0.input_layernorm.weight
# True
### Expected behavior
We should not see NAN in model parameters. The above codes should return False.
We tried the above codes for transformers==4.45.2 and did not see any NaN in model parameters. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33719 |
TITLE
Misleading warning
COMMENTS
4
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1618C1-L1619C1
The removal of this warning did not happen in v4.45
cc: @ArthurZucker and @itazap | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33674 |
TITLE
ValueError during training with streaming dataset.
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0a0+81ea7a4 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H800
### Who can help?
@muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to train with a streaming dataset, as my dataset is very large. The code looks like the following:
```
dataset = load_dataset('data_path', streaming=True)
dataloader = DataCollatorForLanguageModeling(tokenizer,mlm=False)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=valid_dataset if training_args.do_eval else None,
data_collator=dataloader,
)
```
I met the following error:
```
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 611, in __init__
raise ValueError(
ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.
```
How can I solve this problem? Or is there another way to train with large datasets?
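For reference, the error message points at the usual workaround: pass `max_steps` explicitly so the scheduler knows the training horizon in advance. A minimal sketch (the step count is illustrative; the rest of the setup stays as above):
```
from transformers import TrainingArguments

# With streaming=True the dataset is an IterableDataset without __len__, so the
# total number of optimizer steps has to be provided up front.
training_args = TrainingArguments(
    output_dir="out",
    max_steps=100_000,  # illustrative value; the scheduler length is derived from this
    per_device_train_batch_size=8,
)
```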
### Expected behavior
- | [
66,
64,
7
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug",
"data"
] |
https://api.github.com/repos/huggingface/transformers/issues/35634 |
TITLE
ValueError: MllamaForConditionalGeneration does not support Flash Attention 2.0 yet
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.1
- Platform: Linux-4.18.0-553.22.1.el8_10.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.15
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H200
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
processor = AutoProcessor.from_pretrained(model_id)
messages = [
[
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What does the image show?"}
]
}
],
]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=25)
print(processor.decode(output[0]))
```
output:
Traceback (most recent call last):
File "/home/user/reasoning/test_mllama.py", line 8, in <module>
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4124, in from_pretrained
config = cls._autoset_attn_implementation(
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1586, in _autoset_attn_implementation
cls._check_and_enable_flash_attn_2(
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1707, in _check_and_enable_flash_attn_2
raise ValueError(
ValueError: MllamaForConditionalGeneration does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
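A possible interim workaround, assuming the checkpoint loads fine with the SDPA backend, is to not request flash_attention_2:
```
# Sketch: fall back to SDPA (or simply omit attn_implementation) until FA2 support lands.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)
```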
### Expected behavior
Support flash attention 2 | [
76,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34223 |
TITLE
Expanding inputs for image tokens in BLIP-2 breaks batch inference.
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: None
- Using GPU in script?: None
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I tried to use the model `Salesforce/blip2-flan-t5-xl`, but encountered a warning.
> Expanding inputs for image tokens in BLIP-2 should be done in processing. Please follow instruction here (https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042) to update your BLIP-2 model. Using processors without these attributes in the config is deprecated and will throw an error in v4.47.
The code mentioned there that need to be added is:
```
# Load your model and processor and run the following to update BLIP-2 model
# It will update file in your repo by adding new args in configs and resizing embedding layer
# Then you'll be able to run BLIP-2 without warnings/errors
from transformers import AddedToken
processor.num_query_tokens = model.config.num_query_tokens
image_token = AddedToken("<image>", normalized=False, special=True)
processor.tokenizer.add_tokens([image_token], special_tokens=True)
model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64) # pad for efficient computation
model.config.image_token_index = len(processor.tokenizer) - 1
model.push_to_hub("YOUR-REPO")
processor.push_to_hub("YOUR-REPO")
```
After adding the code mentioned there, I received an error when performing batch inference. What should I do?
```
import torch
from torch import device
from transformers import AutoProcessor, Blip2Processor, Blip2ForConditionalGeneration, AddedToken
from PIL import Image
from PIL.ImageFile import ImageFile
from model.utils.init_model import *
class BLIP2Generator:
def __init__(self, model_dir: str = '', device: str = 'cpu'):
self.model_dir = model_dir
self.device = device
self.model, self.processor = self._create()
def _create(self) -> tuple[Blip2ForConditionalGeneration, Blip2Processor]:
"""
Create a BLIP2 model and processor for text generation based on image input.
Returns:
A tuple containing the model and processor.
"""
model_name = f"Salesforce/blip2-flan-t5-xl"
processor = AutoProcessor.from_pretrained(
model_name,
cache_dir=self.model_dir,
)
model= Blip2ForConditionalGeneration.from_pretrained(
model_name,
cache_dir=self.model_dir,
).to(self.device)
# Following https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042#file-update_blip-py
processor.num_query_tokens = model.config.num_query_tokens
image_token = AddedToken("<image>", normalized=False, special=True)
processor.tokenizer.add_tokens([image_token], special_tokens=True)
# Resize token embeddings for efficient computation
model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64)
model.config.image_token_index = len(processor.tokenizer) - 1
return model, processor
def gen(
self,
images: ImageFile | list[ImageFile],
questions: str | list[str] = "Please describe this image in detail.",
max_new_tokens: int = 200,
) -> str:
if not isinstance(images, list):
images = [images]
if not isinstance(questions, list):
questions = [questions] * len(images)
assert len(images) == len(questions), "Number of images and questions must match."
print(f"Processing {len(images)} images and {len(questions)} questions.")
with torch.inference_mode():
inputs = self.processor(
images=images,
text=questions,
return_tensors="pt",
return_token_type_ids=False,
padding=True,
).to(self.device, torch.bfloat16)
# Debug: Print input shapes for checking
print(f"Pixel values shape: {inputs['pixel_values'].shape}")
print(f"Input IDs shape: {inputs['input_ids'].shape}")
output = self.model.generate(
**inputs,
max_new_tokens=max_new_tokens,
)
decoded_output = self.processor.batch_decode(
output,
skip_special_tokens=True,
clean_up_tokenization_spaces=False,
)
return decoded_output if len(decoded_output) > 1 else decoded_output[0]
def demo_blip2():
blip2 = BLIP2Generator(model_dir=model_dir, device=device)
image_1 = Image.open('./demo/material/1.jpg')
image_2 = Image.open('./demo/material/2.jpg')
question_1 = "How many birds in the picture?"
question_2 = "What is on the hand of the girl?"
result = blip2.gen([image_1, image_2], questions=[question_1, question_2])
print(result)
```
The output is:
```
Processing 2 images and 2 questions.
Pixel values shape: torch.Size([2, 3, 224, 224])
Input IDs shape: torch.Size([1, 42])
Traceback (most recent call last):
File "/root/llm-project/LVLM/demo.py", line 4, in <module>
demo_blip2()
File "/root/llm-project/LVLM/demo/blip2.py", line 16, in demo_blip2
result = blip2.gen([image_1, image_2], questions=[question_1, question_2])
File "/root/llm-project/LVLM/model/blip2_generator.py", line 102, in gen
output = self.model.generate(
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/root/anaconda3/envs/LVLM/lib/python3.10/site-packages/transformers/models/blip_2/modeling_blip_2.py", line 2324, in generate
inputs_embeds[special_image_mask] = language_model_inputs.flatten()
RuntimeError: shape mismatch: value tensor of shape [131072] cannot be broadcast to indexing result of shape [65536]
```
I provide the same number of images and texts (two of each), but the processor does not work as expected, since
`Pixel values shape: torch.Size([2, 3, 224, 224]) Input IDs shape: torch.Size([1, 42])`
After removing the code mentioned above, batch inference completes correctly, and I get
`Pixel values shape: torch.Size([2, 3, 224, 224]) Input IDs shape: torch.Size([2, 10])`
### Expected behavior
What should I do? | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/33525 |
TITLE
Multi-GPU Training Object Detection
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA TITAN RTX
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/main/examples/pytorch/object-detection/run_object_detection.py
Simply running this script with two or more GPUs
### Expected behavior
While investigating https://github.com/huggingface/transformers/issues/31677 (cc. @SunMarc )
Former issues:
https://github.com/huggingface/transformers/issues/28740
https://github.com/huggingface/transformers/issues/31461
https://github.com/huggingface/transformers/issues/13197
needed to be resolved, so I dug into it.
I found that the issue is no longer related to @NielsRogge's comments (which concerned normalizing num_boxes). It is now related to how `accelerate` and `Trainer` concatenate the targets in the multi-GPU setting, so the following error occurs.
```
indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))]
IndexError: index 2 is out of bounds for dimension 0 with size 2
```
If we do not use the `Trainer` class, the shapes are as follows (no bug):
```
cost_matrix torch.Size([2, 100, 34])
sizes [22, 12]
```
However, when we use the `Trainer` class with 3 GPUs, the individual targets are concatenated across devices and we get the following, where the six size entries no longer match the per-device cost matrix of batch size 2 (hence the index error at i=2):
```
cost_matrix torch.Size([2, 100, 17])
sizes [3, 2, 3, 1, 3, 5]
```
I am still investigating how to fix this problem fundamentally (without modifying model files, ideally by just adding some argument such as `do_train_concat`), but I am filing this issue for other people who might be interested. (cc. @qubvel ) | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/36025 |
TITLE
HIGGS Quantization not working properly
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
**Environment**
```
- `transformers` version: 4.48.2
- Platform: Linux-5.4.210-39.1.pagevecsize-x86_64-with-glibc2.27
- Python version: 3.11.10
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
- fast_hadamard_transform 1.0.4.post1
```
### Who can help?
@BlackSamorez
@SunMarc
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Recently, in the [PR](https://github.com/huggingface/transformers/pull/34997) HIGGS quantization from the paper [Pushing the Limits of Large Language Model Quantization via the Linearity Theorem](https://arxiv.org/abs/2411.17525) was introduced.
But when attempting to load the quantized `Llama-3.1-8B-Instruct` model in this format as follows:
```python
model_name = "meta-llama/Llama-3.1-8B-Instruct"
quantization_config = HiggsConfig(bits=4, p=2)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
quantization_config=quantization_config
)
model.config.use_cache = False
```
And doing a forward pass with dummy inputs:
```python
inputs = torch.randint(0, model.config.vocab_size, device="cuda", size=(8,))
with torch.no_grad():
outputs = model(inputs)
```
I get the following error in the RoPE:
```bash
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271, in LlamaAttention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs)
[268](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:268) value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
[270](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:270) cos, sin = position_embeddings
--> [271](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271) query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
[273](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:273) if past_key_value is not None:
[274](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:274) # sin and cos are specific to RoPE models; cache_position needed for the static cache
[275](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:275) cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169, in apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim)
[167](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:167) cos = cos.unsqueeze(unsqueeze_dim)
[168](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:168) sin = sin.unsqueeze(unsqueeze_dim)
--> [169](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169) q_embed = (q * cos) + (rotate_half(q) * sin)
[170](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:170) k_embed = (k * cos) + (rotate_half(k) * sin)
[171](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:171) return q_embed, k_embed
RuntimeError: The size of tensor a (32) must match the size of tensor b (128) at non-singleton dimension 3
```
### Expected behavior
I would expect a successful forward pass through the quantized model. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33413 |
TITLE
Exception raised with trainer + `accelerate launch` FSDP + large gradient accumulation steps + small dataset
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.29.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes (accelerate FSDP)
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A6000
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is a duplicate of #24098 and #25695, but I figured it'd still be useful to resubmit this issue since (1) I have a code example, and (2) I paste a different error message I get with mixed precision, which may increase visibility for other people who run into this problem and search for existing GitHub issues.
When I do multi-GPU training (launched with `accelerate launch --num_processes=2`) using `Trainer` with a small dataset size and `gradient_accumulation_steps > 2`, I often repeatedly get the following error:
```python-traceback
Traceback (most recent call last):
File "/workspace/program.py", line 34, in <module>
trainer.train()
File "/usr/local/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in train
return inner_training_loop(
File "/usr/local/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2341, in _inner_training_loop
self.optimizer.step()
File "/usr/local/venv/lib/python3.10/site-packages/accelerate/optimizer.py", line 150, in step
self.optimizer.step(closure)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 75, in wrapper
return wrapped(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 385, in wrapper
out = func(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
ret = func(self, *args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 187, in step
adamw(
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 339, in adamw
func(
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 549, in _multi_tensor_adamw
torch._foreach_lerp_(device_exp_avgs, device_grads, 1 - beta1)
RuntimeError: The size of tensor a (3219712) must match the size of tensor b (128) at non-singleton dimension 1
```
If FP16 mixed-precision is enabled then the error looks like this instead:
```python-traceback
Traceback (most recent call last):
File "/workspace/program.py", line 34, in <module>
trainer.train()
File "/usr/local/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in train
return inner_training_loop(
File "/usr/local/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2341, in _inner_training_loop
self.optimizer.step()
File "/usr/local/venv/lib/python3.10/site-packages/accelerate/optimizer.py", line 137, in step
self.scaler.step(self.optimizer, closure)
File "/usr/local/venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 457, in step
retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 352, in _maybe_opt_step
retval = optimizer.step(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/accelerate/optimizer.py", line 192, in patched_step
return method(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 75, in wrapper
return wrapped(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 385, in wrapper
out = func(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
ret = func(self, *args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 187, in step
adamw(
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 339, in adamw
func(
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 516, in _multi_tensor_adamw
grouped_tensors = Optimizer._group_tensors_by_device_and_dtype([
File "/usr/local/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 409, in _group_tensors_by_device_and_dtype
return _group_tensors_by_device_and_dtype(tensorlistlist, with_indices)
File "/usr/local/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/venv/lib/python3.10/site-packages/torch/utils/_foreach_utils.py", line 38, in _group_tensors_by_device_and_dtype
torch._C._group_tensors_by_device_and_dtype(tensorlistlist, with_indices).items()
RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32 notwithstanding
```
Here's a minimal example — run the following with `accelerate launch --config_file=accelerate_config.yaml --num_processes=2 program.py`
```python
# program.py
from datasets import Dataset
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
Trainer,
TrainingArguments,
)
dataset = Dataset.from_dict(
{"text": ["positive", "negative"], "label": [1, 0]}
) # tiny dataset of 2 examples
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")
tokenized_dataset = dataset.map(lambda x: tokenizer(x["text"]), batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
"EleutherAI/pythia-14m", num_labels=2
)
model.config.pad_token_id = tokenizer.eos_token_id
training_args = TrainingArguments(
output_dir="/tmp/results",
num_train_epochs=10,
per_device_train_batch_size=2,
gradient_accumulation_steps=16,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
)
trainer.train()
```
```yaml
# accelerate_config.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: "no"
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: "no" # change this to "fp16" to get the other error
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
My use case for this was that I had a codebase where we had added some end-to-end tests. We used a very small dataset size since we wanted the test to still be reasonably fast, but then we ran into these exceptions and were confused.
### Expected behavior
I think I expect this to just work without crashing.
But maybe it's not really a sensible setup to have such a small training set. In #24098 commenters suggested that the training set size
> has to be greater than gradient_accumulation_steps * num_GPUs * per_device_train_batch_size.
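Applied to the minimal example above, that threshold works out to 16 (gradient_accumulation_steps) * 2 (GPUs) * 2 (per_device_train_batch_size) = 64 examples, whereas the toy dataset has only 2, which is consistent with the crash.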
In that case it would be nice to have an error message saying that this is the problem.
| [
66,
64,
17,
80
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
"trainer",
"bug",
"PyTorch FSDP",
"Accelerate"
] |
https://api.github.com/repos/huggingface/transformers/issues/35764 |
TITLE
Add timm_wrapper support to AutoFeatureExtractor
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
A few days ago, the PR that adds timm_wrapper (#34564, [blog post](https://huggingface.co/blog/timm-transformers)) was merged, enabling the use of timm models directly with Hugging Face interfaces, especially the Auto* ones. However, the AutoFeatureExtractor interface currently doesn't work with these models. This PR addresses that gap.
This PR adds timm_wrapper compatibility to AutoFeatureExtractor.from_pretrained(), enabling it to work with fine-tuned/trained timm model checkpoints.
Currently, when using a checkpoint from a trained/fine-tuned timm model (e.g., using examples/pytorch/image-classification/run_image_classification.py), AutoFeatureExtractor.from_pretrained() fails because timm_wrapper is not included in the interface.
While there's a warning about missing preprocessor_config.json in checkpoints, users can manually add it to their checkpoint following examples like https://huggingface.co/Factral/vit_large-model/blob/main/preprocessor_config.json. This PR ensures AutoFeatureExtractor works properly when this file is present.
## Changes
- Added timm_wrapper to AutoFeatureExtractor interface
- Enables compatibility with timm model checkpoints when preprocessor_config.json is present
- Added is_timm kwarg in from_dict function
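For illustration, the intended usage after this change (the repo id below is a placeholder for any timm-wrapper checkpoint that ships a preprocessor_config.json):
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

checkpoint = "your-username/your-finetuned-timm-model"  # placeholder repo id
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
```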
## Before submitting
- [x] Read contributor guidelines
- [x] Updated documentation to reflect changes
- [x] Added necessary tests for timm_wrapper functionality
## Who can review?
@amyeroberts @qubvel - as this relates to vision models and timm integration | [
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34113 |
TITLE
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP is not working with the Trainer
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
<img width="896" alt="image" src="https://github.com/user-attachments/assets/67c71759-adae-46d9-aa41-3fecf8d28684">
acc_cfg.yml:
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: true
fsdp_config:
fsdp_activation_checkpointing: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: NO_PREFETCH
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: true
fsdp_offload_params: true
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_process_ip: 0.0.0.0
main_process_port: 0
main_training_function: main
mixed_precision: bf16
num_machines: 3
num_processes: 24
rdzv_backend: etcd-v2
same_network: false
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
accelerate launch --config_file acc_cfg.yml train.py $TRAINING_ARGS
train.py is any training script that trains using transformers.Trainer
$TRAINING_ARGS are the TrainingArguments plus some paths to the data
<img width="907" alt="fdsp_trans" src="https://github.com/user-attachments/assets/f9228d6c-3bfa-44b7-8c79-20f4793cfb5f">
### Expected behavior
Train Paligemma model with FSDP and have PaliGemmaMultiModalProjector wrapped. | [
66,
64,
17
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug",
"PyTorch FSDP"
] |
https://api.github.com/repos/huggingface/transformers/issues/35790 |
TITLE
Add StyleTTS 2
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Adds [StyleTTS 2](https://huggingface.co/papers/2306.07691) to support the original model but also other checkpoints like [Kokoro](https://huggingface.co/hexgrad/Kokoro-82M). | [
77,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/33246 |
TITLE
ValueError: Cannot use apply_chat_template() because tokenizer.chat_template is not set
COMMENTS
8
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Transformers v4.45.0.dev0
### Who can help?
@Rocketknight1
### Reproduction
The code snippet from [here](https://huggingface.co/docs/transformers/main/en/chat_templating#introduction) doesn't seem to work; I assume this is because models no longer have a default chat template if they don't have one set:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
tokenizer.apply_chat_template(chat, tokenize=False)
```
results in
```
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/tokenization_utils_base.py:1602: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Traceback (most recent call last):
File "/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/models/blenderbot/test.py", line 10, in <module>
tokenizer.apply_chat_template(chat, tokenize=False)
File "/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/tokenization_utils_base.py", line 1787, in apply_chat_template
chat_template = self.get_chat_template(chat_template, tools)
File "/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/tokenization_utils_base.py", line 1938, in get_chat_template
raise ValueError(
ValueError: Cannot use apply_chat_template() because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at https://huggingface.co/docs/transformers/main/en/chat_templating
```
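As a stopgap, a template can be supplied manually, reusing `tokenizer` and `chat` from the snippet above; the Jinja template below is a minimal illustration, not Blenderbot's official conversation format:
```python
# Workaround sketch: set the attribute (or pass chat_template=... to apply_chat_template).
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
)
print(tokenizer.apply_chat_template(chat, tokenize=False))
```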
### Expected behavior
A working code snippet | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34534 |
TITLE
Duplicate ZeRo 3 Global Step Checkpoint Saves
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 2
BODY
### System Info
- `transformers` version: 4.45.0
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.14
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a model with DeepSpeed ZeRO Stage 3, the trainer saves the contents of the global step twice, causing long checkpoint saving times.
It stems from the following code block in `trainer.py`
```
def _save_checkpoint(self, model, trial):
# In all cases, including ddp/dp/deepspeed, self.model is always a reference to the model we
# want to save except FullyShardedDDP.
# assert unwrap_model(model) is self.model, "internal model should be a reference to self.model"
# Save model checkpoint
checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}"
if self.hp_search_backend is None and trial is None:
self.store_flos()
run_dir = self._get_output_dir(trial=trial)
output_dir = os.path.join(run_dir, checkpoint_folder)
self.save_model(output_dir, _internal_call=True)
if not self.args.save_only_model:
# Save optimizer and scheduler
self._save_optimizer_and_scheduler(output_dir)
# Save RNG state
self._save_rng_state(output_dir)
```
It first calls `self.save_model(...)`, which eventually calls the DeepSpeed checkpointing function `self.model_wrapped.save_checkpoint(output_dir)`. The next line then calls `self._save_optimizer_and_scheduler(...)`, which ends up calling `self.model_wrapped.save_checkpoint(output_dir)` again.
This results in duplicate saving of the global step, and with optimizer states this means long checkpoint saving times.
### Expected behavior
A global step using ZeRO Stage 3 should be saved only once. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34164 |
TITLE
Inconsistent Output with and without Prompt Caching in Llama-3.1-8B-Instruct.
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.1
- Platform: Linux-6.6.35-amd64-x86_64-with-glibc2.35
- Python version: 3.11.6
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.3
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0,1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
- Using GPU in script?: YES
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@gante @ArthurZucker @itaza
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Generate Responses with Cache, following [Re-use Cache to continue generation](https://huggingface.co/docs/transformers/en/kv_cache#re-use-cache-to-continue-generation)
```python
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StaticCache
model_id = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Init StaticCache with big enough max-length
prompt_cache = StaticCache(config=model.config, max_batch_size=1, max_cache_len=1024, device="cuda", dtype=torch.bfloat16)
INITIAL_PROMPT = "You are a helpful assistant. "
inputs_initial_prompt = tokenizer(INITIAL_PROMPT, return_tensors="pt").to("cuda")
with torch.no_grad():
prompt_cache = model(**inputs_initial_prompt, past_key_values=prompt_cache).past_key_values
prompts = ["Help me to write a blogpost about travelling.", "What is the capital of France?"]
responses = []
for prompt in prompts:
new_inputs = tokenizer(INITIAL_PROMPT + prompt, return_tensors="pt").to("cuda")
past_key_values = copy.deepcopy(prompt_cache)
outputs = model.generate(**new_inputs, past_key_values=past_key_values, max_new_tokens=20)
response = tokenizer.batch_decode(outputs)[0]
responses.append(response)
print(responses)
```
2. Observed Output
```python
['<|begin_of_text|>You are a helpful assistant. Help me to write a blogpost about travelling. I have some ideas, but I’ts not clear how to structure the post. I',
'<|begin_of_text|>You are a helpful assistant. What is the capital of France? Paris. is the capital of the United States? Washington D.C. is the capital of']
```
3. Generate response without cache
```python
responses = []
for prompt in prompts:
new_inputs = tokenizer(INITIAL_PROMPT + prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**new_inputs, max_new_tokens=20, use_cache=False)
response = tokenizer.batch_decode(outputs)[0]
responses.append(response)
print(responses)
```
4. Observed Output
```python
['<|begin_of_text|>You are a helpful assistant. Help me to write a blogpost about travelling. Here’s what I need to write about:\nTitle: “The Magic of Exploring New Places:',
'<|begin_of_text|>You are a helpful assistant. What is the capital of France? Paris.\nWhat is the capital of Australia? Canberra.\nWhat is the capital of Brazil? Brasília']
```
### Expected behavior
The output without cache should be exactly the same as the one that uses the cache. | [
64,
18,
33
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Generation",
"Cache"
] |
https://api.github.com/repos/huggingface/transformers/issues/33296 |
TITLE
Decoder and cross-attention shape is different when obtained by model.generate() and model()
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.43.3
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.3
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA A100-PCIE-40GB
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi!
If you set `trigger_error` to `True`, you will see that the decoder-attention (and cross-attention) shapes differ depending on whether the translation is produced by model.generate() or model(). I don't know if this is a bug or just expected to be different. I have checked that the attention values are the same once the information is restructured the same way (there are small precision differences though, which I think is because model.generate() generates differently than model()).
```python
import torch
import transformers
trigger_error = True  # Change THIS
pretrained_model = "facebook/nllb-200-distilled-600M"
device = "cuda" if torch.cuda.is_available() else "cpu"
source_lang = "eng_Latn"
target_lang = "spa_Latn"
tokenizer = transformers.AutoTokenizer.from_pretrained(pretrained_model, src_lang=source_lang, tgt_lang=target_lang)
model = transformers.AutoModelForSeq2SeqLM.from_pretrained(pretrained_model).to(device)
source_text = ["Hello!", "Hello again!!!!!!"]
inputs = tokenizer(source_text, return_tensors="pt", add_special_tokens=True, max_length=1024, truncation=True, padding=True).to(device)
target_lang_id = tokenizer.convert_tokens_to_ids(target_lang)
translated_tokens = model.generate(**inputs, forced_bos_token_id=target_lang_id, max_new_tokens=200, num_return_sequences=1, num_beams=1, return_dict_in_generate=True, output_attentions=True)
translated_tokens_model = model(**inputs, decoder_input_ids=translated_tokens.sequences[:,:-1], output_attentions=True)
# Checks
num_hidden_layers = model.config.num_hidden_layers
num_attention_heads = model.config.num_attention_heads
batch_size = len(source_text)
# Encoder
assert len(translated_tokens.encoder_attentions) == num_hidden_layers # 12
assert len(translated_tokens_model.encoder_attentions) == num_hidden_layers # 12
for l in range(num_hidden_layers):
assert translated_tokens.encoder_attentions[l].shape == translated_tokens_model.encoder_attentions[l].shape
assert (translated_tokens.encoder_attentions[l] == translated_tokens_model.encoder_attentions[l]).all().cpu().item()
#####
def transformer_attention_to_common_structure(attention_ttg, attention_ttm):
## Transform attention from model.generate() to common structure with model()
decoded_tokens = attention_ttm[0].shape[-2:]
_decoder_attention = torch.zeros(num_hidden_layers, batch_size, num_attention_heads, *decoded_tokens).to(device)
for _decoded_tokens, t in enumerate(attention_ttg, 1):
# Causal mask
t = torch.stack(t, 0) # (num_hidden_layers, batch_size, num_attention_heads, 1, _decoded_tokens)
t = t.squeeze(-2)
_decoder_attention[:,:,:, _decoded_tokens - 1,:t.shape[-1]] = t
for l in range(num_hidden_layers):
assert _decoder_attention.shape == (len(attention_ttm), *attention_ttm[l].shape)
# assert (_decoder_attention[l] == attention_ttm[l]).all().cpu().item() # Differences due to precision (even when device="cpu")... model.generate() is generating different to model()?
assert torch.isclose(_decoder_attention[l], attention_ttm[l]).all().cpu().item()
# Decoder
assert len(translated_tokens.decoder_attentions) == num_hidden_layers if trigger_error else True
assert len(translated_tokens_model.decoder_attentions) == num_hidden_layers # 12
transformer_attention_to_common_structure(translated_tokens.decoder_attentions, translated_tokens_model.decoder_attentions)
# Cross
assert len(translated_tokens.cross_attentions) == num_hidden_layers if trigger_error else True
assert len(translated_tokens_model.cross_attentions) == num_hidden_layers # 12
transformer_attention_to_common_structure(translated_tokens.cross_attentions, translated_tokens_model.cross_attentions)
```
### Expected behavior
I would expect the decoder and cross-attention shapes to have the same format regardless of whether I use model.generate() or model(). Specifically, I would expect to obtain the result from model(), where for the decoder we obtain a matrix for each layer of shape (batch_size, attention_heads, generated_tokens - 1, generated_tokens - 1). | [
64,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Generation"
] |