url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28103/comments | https://api.github.com/repos/huggingface/transformers/issues/28103/events | https://github.com/huggingface/transformers/issues/28103 | 2,045,776,155 | I_kwDOCUB6oc558BEb | 28,103 | OWL-VIT Vision Foundation Model deployment in the edge cases - Need SDPA support for OWL-ViT Model optimization for Edge Deployment | {
"login": "solomonmanuelraj",
"id": 25194971,
"node_id": "MDQ6VXNlcjI1MTk0OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/25194971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solomonmanuelraj",
"html_url": "https://github.com/solomonmanuelraj",
"followers_url": "https://api.github.com/users/solomonmanuelraj/followers",
"following_url": "https://api.github.com/users/solomonmanuelraj/following{/other_user}",
"gists_url": "https://api.github.com/users/solomonmanuelraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solomonmanuelraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solomonmanuelraj/subscriptions",
"organizations_url": "https://api.github.com/users/solomonmanuelraj/orgs",
"repos_url": "https://api.github.com/users/solomonmanuelraj/repos",
"events_url": "https://api.github.com/users/solomonmanuelraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/solomonmanuelraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 4 | 2023-12-18T05:34:53 | 2024-01-24T15:42:32 | null | NONE | null | ### Feature request
Hi Team,
I am working with the OWL-ViT base model, which is around 611 MB in size (https://huggingface.co/google/owlvit-base-patch16).
I want to optimize this model and deploy it on an edge device for object detection.
I learned from the community that torch.nn.functional.scaled_dot_product_attention can be used for model optimization.
I would appreciate your feedback on how best to reduce the model's memory footprint so that it can be deployed on an edge device.
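By way of illustration (this is a standalone sketch, not OWL-ViT code; the tensor shapes and values here are made up), `torch.nn.functional.scaled_dot_product_attention` computes the same result as a manual attention implementation while being able to dispatch to fused, memory-efficient kernels:

```python
import torch
import torch.nn.functional as F

# Toy shapes: (batch, heads, seq_len, head_dim); values are random.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# Manual attention, roughly what an unoptimized attention layer does:
scores = q @ k.transpose(-2, -1) / (64 ** 0.5)
manual = torch.softmax(scores, dim=-1) @ v

# Fused equivalent; may use flash / memory-efficient kernels under the hood:
fused = F.scaled_dot_product_attention(q, k, v)

assert torch.allclose(manual, fused, atol=1e-4)
```

Note that SDPA mainly reduces attention-time memory and latency; shrinking the 611 MB checkpoint itself would still need something like fp16 export or quantization.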
Looking forward to your response.
Thanks!
### Motivation
It will help deploy models at the edge, so that more applications can use them.
### Your contribution
I would welcome your feedback. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28103/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28102/comments | https://api.github.com/repos/huggingface/transformers/issues/28102/events | https://github.com/huggingface/transformers/pull/28102 | 2,045,744,156 | PR_kwDOCUB6oc5iNxZH | 28,102 | fix bug: avoid divide by zero in _maybe_log_save_evaluate() | {
"login": "frankenliu",
"id": 7486431,
"node_id": "MDQ6VXNlcjc0ODY0MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7486431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankenliu",
"html_url": "https://github.com/frankenliu",
"followers_url": "https://api.github.com/users/frankenliu/followers",
"following_url": "https://api.github.com/users/frankenliu/following{/other_user}",
"gists_url": "https://api.github.com/users/frankenliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankenliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankenliu/subscriptions",
"organizations_url": "https://api.github.com/users/frankenliu/orgs",
"repos_url": "https://api.github.com/users/frankenliu/repos",
"events_url": "https://api.github.com/users/frankenliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankenliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-18T05:03:07 | 2023-12-26T06:15:10 | 2023-12-26T06:15:10 | CONTRIBUTOR | null | set logging_strategy="steps" and logging_steps=10,
when an epoch has 100 steps, should_log is set to True on the last step.
In _maybe_log_save_evaluate(), self._globalstep_last_logged is then assigned self.state.global_step (line 1917 of trainer.py).
At line 1933 of trainer.py, self.callback_handler.on_epoch_end() leaves should_log=True, so when _maybe_log_save_evaluate() runs again at line 1934, (self.state.global_step - self._globalstep_last_logged) is zero at line 2247, causing a division by zero.
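A minimal sketch of the guard this PR proposes (the variable names mirror trainer.py, but this is an illustration, not the actual diff):

```python
def average_loss_since_last_log(tr_loss_scalar, global_step, globalstep_last_logged):
    """Return the mean loss per step since the last log, or None when no
    steps have elapsed (which is what happens when on_epoch_end re-enters
    _maybe_log_save_evaluate with should_log still True)."""
    elapsed = global_step - globalstep_last_logged
    if elapsed <= 0:
        return None  # guard against ZeroDivisionError
    return tr_loss_scalar / elapsed
```

For example, `average_loss_since_last_log(5.0, 100, 100)` returns `None` instead of raising `ZeroDivisionError`.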
@muellerzr @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28102/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28102",
"html_url": "https://github.com/huggingface/transformers/pull/28102",
"diff_url": "https://github.com/huggingface/transformers/pull/28102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28102.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28101/comments | https://api.github.com/repos/huggingface/transformers/issues/28101/events | https://github.com/huggingface/transformers/issues/28101 | 2,045,680,594 | I_kwDOCUB6oc557pvS | 28,101 | Will deep StateSpace models add to this library? | {
"login": "ghosthamlet",
"id": 758325,
"node_id": "MDQ6VXNlcjc1ODMyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghosthamlet",
"html_url": "https://github.com/ghosthamlet",
"followers_url": "https://api.github.com/users/ghosthamlet/followers",
"following_url": "https://api.github.com/users/ghosthamlet/following{/other_user}",
"gists_url": "https://api.github.com/users/ghosthamlet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghosthamlet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghosthamlet/subscriptions",
"organizations_url": "https://api.github.com/users/ghosthamlet/orgs",
"repos_url": "https://api.github.com/users/ghosthamlet/repos",
"events_url": "https://api.github.com/users/ghosthamlet/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghosthamlet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 0 | 2023-12-18T03:52:34 | 2023-12-18T14:21:28 | null | NONE | null | ### Feature request
Will Hugging Face add deep state-space models to the Transformers library, or create a new repo like Diffusers?
### Motivation
Deep StateSpace models may become the next big thing.
### Your contribution
No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28101/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28100/comments | https://api.github.com/repos/huggingface/transformers/issues/28100/events | https://github.com/huggingface/transformers/issues/28100 | 2,045,679,616 | I_kwDOCUB6oc557pgA | 28,100 | QWenLMHeadModel does not support Flash Attention 2.0 yet. | {
"login": "zhangfan-algo",
"id": 47747764,
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangfan-algo",
"html_url": "https://github.com/zhangfan-algo",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-18T03:51:17 | 2024-01-25T08:04:06 | 2024-01-25T08:04:06 | NONE | null | 
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28100/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28099/comments | https://api.github.com/repos/huggingface/transformers/issues/28099/events | https://github.com/huggingface/transformers/issues/28099 | 2,045,486,232 | I_kwDOCUB6oc5566SY | 28,099 | Dataset not loading successfully. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-18T01:06:52 | 2024-01-18T04:50:04 | 2024-01-18T04:50:04 | CONTRIBUTOR | null | ### System Info
* transformers -> 4.36.1
* datasets -> 2.15.0
* huggingface_hub -> 0.19.4
* python -> 3.8.10
* accelerate -> 0.25.0
* pytorch -> 2.0.1+cpu
* Using GPU in Script -> No
### Who can help?
@patrickvonplaten , @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, please check the following code; when I run it, it raises an AttributeError.
```
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Use the model and processor to transcribe the audio:
input_features = processor(
    waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
***AttributeError:***
```
AttributeError Traceback (most recent call last)
Cell In[9], line 6
4 # Select an audio file and read it:
5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
----> 6 audio_sample = ds[0]["audio"]
7 waveform = audio_sample["array"]
8 sampling_rate = audio_sample["sampling_rate"]
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key)
2793 def __getitem__(self, key): # noqa: F811
2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2795 return self._getitem(key)
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs)
2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2780 formatted_output = format_table(
2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2782 )
2783 return formatted_output
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns)
627 python_formatter = PythonFormatter(features=formatter.features)
628 if format_columns is None:
--> 629 return formatter(pa_table, query_type=query_type)
630 elif query_type == "column":
631 if key in format_columns:
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type)
394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
395 if query_type == "row":
--> 396 return self.format_row(pa_table)
397 elif query_type == "column":
398 return self.format_column(pa_table)
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table)
435 return LazyRow(pa_table, self)
436 row = self.python_arrow_extractor().extract_row(pa_table)
--> 437 row = self.python_features_decoder.decode_row(row)
438 return row
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row)
214 def decode_row(self, row: dict) -> dict:
--> 215 return self.features.decode_example(row) if self.features else row
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
-> 1917 return {
1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
1917 return {
-> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id)
189 array = array.T
190 if self.mono:
--> 191 array = librosa.to_mono(array)
192 if self.sampling_rate and self.sampling_rate != sampling_rate:
193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name)
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
77 submod = importlib.import_module(submod_path)
---> 78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
83 if name == attr_to_modules[name]:
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name)
75 elif name in attr_to_modules:
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
---> 77 submod = importlib.import_module(submod_path)
78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:848, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13
11 import audioread
12 import numpy as np
---> 13 import scipy.signal
14 import soxr
15 import lazy_loader as lazy
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323
314 from ._spline import ( # noqa: F401
315 cspline2d,
316 qspline2d,
(...)
319 symiirorder2,
320 )
322 from ._bsplines import *
--> 323 from ._filter_design import *
324 from ._fir_filter_design import *
325 from ._ltisys import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16
13 from numpy.polynomial.polynomial import polyval as npp_polyval
14 from numpy.polynomial.polynomial import polyvalfromroots
---> 16 from scipy import special, optimize, fft as sp_fft
17 from scipy.special import comb
18 from scipy._lib._util import float_factorial
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405
1 """
2 =====================================================
3 Optimization and root finding (:mod:`scipy.optimize`)
(...)
401
402 """
404 from ._optimize import *
--> 405 from ._minimize import *
406 from ._root import *
407 from ._root_scalar import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26
24 from ._trustregion_krylov import _minimize_trust_krylov
25 from ._trustregion_exact import _minimize_trustregion_exact
---> 26 from ._trustregion_constr import _minimize_trustregion_constr
28 # constrained minimization
29 from ._lbfgsb_py import _minimize_lbfgsb
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4
1 """This module contains the equality constrained SQP solver."""
----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr
6 __all__ = ['_minimize_trustregion_constr']
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5
3 from scipy.sparse.linalg import LinearOperator
4 from .._differentiable_functions import VectorFunction
----> 5 from .._constraints import (
6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds)
7 from .._hessian_update_strategy import BFGS
8 from .._optimize import OptimizeResult
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8
6 from ._optimize import OptimizeWarning
7 from warnings import warn, catch_warnings, simplefilter
----> 8 from numpy.testing import suppress_warnings
9 from scipy.sparse import issparse
12 def _arr_to_scalar(x):
13 # If x is a numpy array, return x.item(). This will
14 # fail if the array has more than one element.
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11
8 from unittest import TestCase
10 from . import _private
---> 11 from ._private.utils import *
12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data)
13 from ._private import extbuild, decorators as dec
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480
476 pprint.pprint(desired, msg)
477 raise AssertionError(msg.getvalue())
--> 480 @np._no_nep50_warning()
481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True):
482 """
483 Raises an AssertionError if two items are not equal up to desired
484 precision.
(...)
548
549 """
550 __tracebackhide__ = True # Hide traceback for py.test
File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr)
305 raise AttributeError(__former_attrs__[attr])
307 # Importing Tester requires importing all of UnitTest which is not a
308 # cheap import Since it is mainly used in test suits, we lazy import it
309 # here to save on the order of 10 ms of import time for most users
310 #
311 # The previous way Tester was imported also had a side effect of adding
312 # the full `numpy.testing` namespace
--> 313 if attr == 'testing':
314 import numpy.testing as testing
315 return testing
AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
```
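For what it's worth, the bottom of the traceback points into scipy/numpy rather than transformers or datasets: scipy is reaching for numpy's private `_no_nep50_warning` helper and not finding it, which usually indicates incompatible numpy and scipy versions. A small stdlib-only sketch for comparing the installed versions (the assumed fix is upgrading numpy and scipy to a mutually compatible pair):

```python
import importlib.metadata as md

def installed_versions(packages):
    """Return {package: version or None} for quick mismatch diagnosis."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions(("numpy", "scipy", "librosa")))
```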
### Expected behavior
```
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```
Also, this script is provided on your official website, so please update it there as well:
[script](https://huggingface.co/docs/transformers/model_doc/whisper) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28099/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28098/comments | https://api.github.com/repos/huggingface/transformers/issues/28098/events | https://github.com/huggingface/transformers/issues/28098 | 2,045,287,243 | I_kwDOCUB6oc556JtL | 28,098 | Create create_token_type_ids_from_sequences for CodeGenTokenizer | {
"login": "cridin1",
"id": 73068277,
"node_id": "MDQ6VXNlcjczMDY4Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/73068277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cridin1",
"html_url": "https://github.com/cridin1",
"followers_url": "https://api.github.com/users/cridin1/followers",
"following_url": "https://api.github.com/users/cridin1/following{/other_user}",
"gists_url": "https://api.github.com/users/cridin1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cridin1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cridin1/subscriptions",
"organizations_url": "https://api.github.com/users/cridin1/orgs",
"repos_url": "https://api.github.com/users/cridin1/repos",
"events_url": "https://api.github.com/users/cridin1/events{/privacy}",
"received_events_url": "https://api.github.com/users/cridin1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 4 | 2023-12-17T16:40:47 | 2024-01-20T10:49:16 | null | NONE | null | ### Feature request
In CodeGenTokenizer [here](src/transformers/models/codegen/tokenization_codegen.py), there is no implementation for create_token_type_ids_from_sequences.
I was looking at the tutorial on token_type_ids as a reference: [here](https://huggingface.co/docs/transformers/glossary#token-type-ids).
### Motivation
The model can take token_type_ids as input ([see here](https://huggingface.co/docs/transformers/model_doc/codegen#transformers.CodeGenForCausalLM)), so it would be useful to add this feature.
### Your contribution
I think the implementation from BERT [here](src/transformers/models/bert/tokenization_bert.py) can be used directly:
````
def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
    pair mask has the following format:

    ```
    0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
    | first sequence    | second sequence |
    ```

    If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
    """
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
```` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28098/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28097/comments | https://api.github.com/repos/huggingface/transformers/issues/28097/events | https://github.com/huggingface/transformers/issues/28097 | 2,045,272,598 | I_kwDOCUB6oc556GIW | 28,097 | WhisperProcessor doesn't copy output tensor to CPU for `decode(output_offsets=True)` | {
"login": "rklasen",
"id": 13201731,
"node_id": "MDQ6VXNlcjEzMjAxNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/13201731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rklasen",
"html_url": "https://github.com/rklasen",
"followers_url": "https://api.github.com/users/rklasen/followers",
"following_url": "https://api.github.com/users/rklasen/following{/other_user}",
"gists_url": "https://api.github.com/users/rklasen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rklasen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rklasen/subscriptions",
"organizations_url": "https://api.github.com/users/rklasen/orgs",
"repos_url": "https://api.github.com/users/rklasen/repos",
"events_url": "https://api.github.com/users/rklasen/events{/privacy}",
"received_events_url": "https://api.github.com/users/rklasen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-17T15:58:47 | 2024-01-18T16:12:16 | 2024-01-18T16:12:16 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.6.7-arch1-1-x86_64-with-glibc2.38
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES (this is the cause of the bug)
- Using distributed or parallel set-up in script?: no, one local GPU
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I create a WhisperProcessor, copy the input features and model to the GPU, and run `generate()` with `return_timestamps=True`.
```python
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = model.to("cuda")
input_features = input_features.to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True):
output_with_prompt = model.generate(input_features, return_timestamps=True)
result = processor.decode(output_with_prompt[0], skip_special_tokens=True, decode_with_timestamps=False, output_offsets=True)
print(result)
```
Causes the error:
```
File /media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1030, in Tensor.__array__(self, dtype)
   1028     return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
   1029 if dtype is None:
-> 1030     return self.numpy()
   1031 else:
   1032     return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
Copying the output tensor to the CPU beforehand fixes that:
```python
output_with_prompt2 = output_with_prompt.cpu()
# timestamps must be on to skip the initial prompt
result = processor.decode(output_with_prompt2[0], skip_special_tokens=True, decode_with_timestamps=False, output_offsets=True)
print(result)
```
However, that **only** happens when `output_offsets=True` is enabled. When the flag is disabled, the decode works fine on the GPU (but we don't get the timestamps).
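A small defensive wrapper works around it for now. This is an illustrative sketch only; `safe_decode` is a made-up name, not part of the library, and it assumes `processor` is set up as in the snippets above:

```python
def safe_decode(processor, token_ids, **decode_kwargs):
    # Move generated ids to the CPU before the processor tries to turn
    # them into a numpy array for offset computation.
    if hasattr(token_ids, "cpu"):
        token_ids = token_ids.cpu()
    return processor.decode(token_ids, **decode_kwargs)
```

With that, `safe_decode(processor, output_with_prompt[0], skip_special_tokens=True, output_offsets=True)` behaves the same whether or not the ids live on the GPU.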
I'm also seeing that the decode function is called about 11 times for a single `processor.decode` call; is that by design?
### Expected behavior
Automatically copy the tensor to CPU for the decode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28097/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28096/comments | https://api.github.com/repos/huggingface/transformers/issues/28096/events | https://github.com/huggingface/transformers/issues/28096 | 2,045,262,269 | I_kwDOCUB6oc556Dm9 | 28,096 | [LlamaTokenizer] Inconsistent slow vs. fast tokenization when dealing with unknown tokens | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-17T15:30:04 | 2024-01-12T10:08:50 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.2.0-1018-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker (maybe @Narsil for tokenizers?)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoTokenizer
text = "\n\t\n"
tokenizer_fast = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=True)
tokenizer_slow = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False)
tokenizer_slow_non_legacy = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False, legacy=False)
tokenizer_slow_legacy = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False, legacy=True)
print(tokenizer_fast.tokenize(text)) # ['▁', '\n', '<unk>', '\n']
print(tokenizer_fast.encode(text)) # [1, 31654, 5, 0, 5]
print()
print(tokenizer_slow.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow.encode(text)) # [1, 31654, 5, 31654, 5]
print()
print(tokenizer_slow_non_legacy.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow_non_legacy.encode(text)) # [1, 31654, 5, 31654, 5]
print()
print(tokenizer_slow_legacy.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow_legacy.encode(text)) # [1, 31654, 5, 31654, 5]
```
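For reference, here is a minimal sketch of the byte-fallback convention at play (illustration only; the `<0xNN>` token names follow the usual sentencepiece convention, and whether they exist depends on the vocab in question):

```python
def byte_fallback(piece, vocab):
    # Encode the piece as UTF-8 bytes and map each byte to its
    # sentencepiece-style byte token, falling back to <unk> when the
    # byte token is missing from the vocabulary.
    tokens = []
    for b in piece.encode("utf-8"):
        token = f"<0x{b:02X}>"
        tokens.append(token if token in vocab else "<unk>")
    return tokens

print(byte_fallback("\t", {"<0x09>"}))  # ['<0x09>']
print(byte_fallback("\t", set()))       # ['<unk>']
```

The question in this issue is exactly what the second case should produce when the vocab has no byte tokens: `<unk>` (as the fast tokenizer does) or something else.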
### Expected behavior
I'm not quite sure which is the correct behaviour, since this is a custom-built llama tokenizer and does not include "byte fallback" tokens in the vocabulary. Intuitively, it would make sense to fall back to the unknown token if the byte fallback fails, but I assume we should follow how the sentencepiece implementation does it (which seems to be excluding it?) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28096/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28095/comments | https://api.github.com/repos/huggingface/transformers/issues/28095/events | https://github.com/huggingface/transformers/issues/28095 | 2,044,930,519 | I_kwDOCUB6oc554ynX | 28,095 | logits squeezing causes an error during the inference time (If the last epoch contains only one sample) | {
"login": "fadiabdulf",
"id": 81809527,
"node_id": "MDQ6VXNlcjgxODA5NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/81809527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fadiabdulf",
"html_url": "https://github.com/fadiabdulf",
"followers_url": "https://api.github.com/users/fadiabdulf/followers",
"following_url": "https://api.github.com/users/fadiabdulf/following{/other_user}",
"gists_url": "https://api.github.com/users/fadiabdulf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fadiabdulf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fadiabdulf/subscriptions",
"organizations_url": "https://api.github.com/users/fadiabdulf/orgs",
"repos_url": "https://api.github.com/users/fadiabdulf/repos",
"events_url": "https://api.github.com/users/fadiabdulf/events{/privacy}",
"received_events_url": "https://api.github.com/users/fadiabdulf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-16T19:48:46 | 2024-01-24T08:03:53 | 2024-01-24T08:03:53 | NONE | null | https://github.com/huggingface/transformers/blob/238d2e3c44366aba9dc5c770c95475765a6725cb/src/transformers/trainer.py#L3452 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28095/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28094/comments | https://api.github.com/repos/huggingface/transformers/issues/28094/events | https://github.com/huggingface/transformers/pull/28094 | 2,044,897,781 | PR_kwDOCUB6oc5iLBlE | 28,094 | [`Add Mamba`] Adds support for the `Mamba` models | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-12-16T18:35:52 | 2024-02-01T01:18:13 | null | COLLABORATOR | null | # What does this PR do?
- [ ] Implement cpu ops
- [ ] Add integration tests
fixes #28086 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28094/reactions",
"total_count": 10,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28094/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28094",
"html_url": "https://github.com/huggingface/transformers/pull/28094",
"diff_url": "https://github.com/huggingface/transformers/pull/28094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28094.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28093/comments | https://api.github.com/repos/huggingface/transformers/issues/28093/events | https://github.com/huggingface/transformers/issues/28093 | 2,044,645,421 | I_kwDOCUB6oc553tAt | 28,093 | load_balancing_loss in mixtral model | {
"login": "1773226512",
"id": 82659526,
"node_id": "MDQ6VXNlcjgyNjU5NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/82659526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1773226512",
"html_url": "https://github.com/1773226512",
"followers_url": "https://api.github.com/users/1773226512/followers",
"following_url": "https://api.github.com/users/1773226512/following{/other_user}",
"gists_url": "https://api.github.com/users/1773226512/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1773226512/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1773226512/subscriptions",
"organizations_url": "https://api.github.com/users/1773226512/orgs",
"repos_url": "https://api.github.com/users/1773226512/repos",
"events_url": "https://api.github.com/users/1773226512/events{/privacy}",
"received_events_url": "https://api.github.com/users/1773226512/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-12-16T07:25:14 | 2023-12-22T13:27:17 | 2023-12-19T16:31:56 | NONE | null | ### System Info
torch '1.13.0+cu117'
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The balancing loss function always returns a constant.
Here is the official code:
```
def load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:
    r"""
    Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.

    See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
    function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between
    experts is too unbalanced.

    Args:
        gate_logits (Union[`torch.Tensor`, Tuple[torch.Tensor]):
            Logits from the `gate`, should be a tuple of tensors. Shape: [batch_size, sequence_length, num_experts].
        num_experts (`int`, *optional*):
            Number of experts

    Returns:
        The auxiliary loss.
    """
    if gate_logits is None:
        return 0

    if isinstance(gate_logits, tuple):
        # cat along the layers?
        gate_logits = torch.cat(gate_logits, dim=0)

    routing_weights, selected_experts = torch.topk(gate_logits, top_k, dim=-1)
    routing_weights = routing_weights.softmax(dim=-1)

    # cast the expert indices to int64, otherwise one-hot encoding will fail
    if selected_experts.dtype != torch.int64:
        selected_experts = selected_experts.to(torch.int64)

    if len(selected_experts.shape) == 2:
        selected_experts = selected_experts.unsqueeze(2)

    expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)

    # For a given token, determine if it was routed to a given expert.
    expert_mask = torch.max(expert_mask, axis=-2).values

    # cast to float32 otherwise mean will fail
    expert_mask = expert_mask.to(torch.float32)
    tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2)

    router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-1)
    return torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)
```
Here is my code:
```
import torch

num_hidden_layers = 30
batch_size = 16
seq_len = 32
num_experts = 8

gate_logits = tuple(torch.randn(batch_size * seq_len, num_experts) for _ in range(num_hidden_layers))
load_balancing_loss_func(gate_logits=gate_logits, num_experts=num_experts)
```
It always returns 4.
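The constant falls out of the math, not the data: because the softmax is applied to the logits *after* the top-k selection, the per-token routing weights always sum to 1, so their mean over the last axis is 1/top_k for every token regardless of the logits. The sketch below mirrors the computation in plain Python (illustration only, no torch) to make this visible:

```python
import math
import random

def toy_balancing_loss(gate_logits, num_experts, top_k=2):
    # Plain-Python mirror of the computation above: top-k selection,
    # softmax over the kept logits, expert-mask mean, product, then
    # scale by num_experts**2.
    total, count = 0.0, 0
    for row in gate_logits:
        top = sorted(range(num_experts), key=lambda e: row[e], reverse=True)[:top_k]
        kept = [row[e] for e in top]
        m = max(kept)
        exps = [math.exp(v - m) for v in kept]
        s = sum(exps)
        weights = [v / s for v in exps]     # always sums to 1
        router_prob = sum(weights) / top_k  # == 1/top_k for every row
        for e in range(num_experts):
            hits = sum(1 for t in top if t == e)
            total += (hits / top_k) * router_prob
            count += 1
    return (total / count) * num_experts**2

random.seed(0)
logits = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(256)]
print(round(toy_balancing_loss(logits, num_experts=8), 6))  # 4.0
```

Per token, the mask term averages to 1/num_experts and the routing term contributes 1/top_k, so the result is always num_experts / top_k, here 8 / 2 = 4.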
### Expected behavior
Please answer this question. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28093/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/28093/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28092/comments | https://api.github.com/repos/huggingface/transformers/issues/28092/events | https://github.com/huggingface/transformers/pull/28092 | 2,044,627,356 | PR_kwDOCUB6oc5iKJ3W | 28,092 | Mixtral: Reduce and Increase Expert Models | {
"login": "minato-ellie",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minato-ellie",
"html_url": "https://github.com/minato-ellie",
"followers_url": "https://api.github.com/users/minato-ellie/followers",
"following_url": "https://api.github.com/users/minato-ellie/following{/other_user}",
"gists_url": "https://api.github.com/users/minato-ellie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minato-ellie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minato-ellie/subscriptions",
"organizations_url": "https://api.github.com/users/minato-ellie/orgs",
"repos_url": "https://api.github.com/users/minato-ellie/repos",
"events_url": "https://api.github.com/users/minato-ellie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minato-ellie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-16T06:15:29 | 2024-01-23T08:03:48 | 2024-01-23T08:03:48 | NONE | null | This pr adds a method to MixtralSparseMoeBlock, MixtralDecoderLayer and MixtralModel that removes one or more experts from MixtralModel. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28092/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28092",
"html_url": "https://github.com/huggingface/transformers/pull/28092",
"diff_url": "https://github.com/huggingface/transformers/pull/28092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28092.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28091/comments | https://api.github.com/repos/huggingface/transformers/issues/28091/events | https://github.com/huggingface/transformers/pull/28091 | 2,044,572,059 | PR_kwDOCUB6oc5iJ-w6 | 28,091 | fix ConversationalPipeline docstring | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-16T02:33:44 | 2023-12-18T15:08:37 | 2023-12-18T15:08:37 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #28090
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28091/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28091",
"html_url": "https://github.com/huggingface/transformers/pull/28091",
"diff_url": "https://github.com/huggingface/transformers/pull/28091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28091.patch",
"merged_at": "2023-12-18T15:08:37"
} |
https://api.github.com/repos/huggingface/transformers/issues/28090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28090/comments | https://api.github.com/repos/huggingface/transformers/issues/28090/events | https://github.com/huggingface/transformers/issues/28090 | 2,044,571,444 | I_kwDOCUB6oc553a80 | 28,090 | fix documentation docstring | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-16T02:31:47 | 2023-12-19T10:44:49 | 2023-12-19T10:44:49 | CONTRIBUTOR | null | ### System Info
Following the documentation on Hugging Face about [ConversationalPipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ConversationalPipeline.example), the example shown there seems to be broken:

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
checkout the link above
### Expected behavior
a parsable example | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28090/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28089/comments | https://api.github.com/repos/huggingface/transformers/issues/28089/events | https://github.com/huggingface/transformers/issues/28089 | 2,044,319,489 | I_kwDOCUB6oc552dcB | 28,089 | Error in pipeline while inferencing Llama2, colab link below | {
"login": "goblinvalo",
"id": 153084421,
"node_id": "U_kgDOCR_iBQ",
"avatar_url": "https://avatars.githubusercontent.com/u/153084421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goblinvalo",
"html_url": "https://github.com/goblinvalo",
"followers_url": "https://api.github.com/users/goblinvalo/followers",
"following_url": "https://api.github.com/users/goblinvalo/following{/other_user}",
"gists_url": "https://api.github.com/users/goblinvalo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goblinvalo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goblinvalo/subscriptions",
"organizations_url": "https://api.github.com/users/goblinvalo/orgs",
"repos_url": "https://api.github.com/users/goblinvalo/repos",
"events_url": "https://api.github.com/users/goblinvalo/events{/privacy}",
"received_events_url": "https://api.github.com/users/goblinvalo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-15T20:16:23 | 2023-12-16T06:28:53 | 2023-12-16T06:28:53 | NONE | null | ### System Info
Here is the Colab notebook link:
https://colab.research.google.com/drive/1rjDR7i9MWkTmOhsZEg3oU-AHntQhBFCD?usp=sharing
Until yesterday it was working fine; I got this error today.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1rjDR7i9MWkTmOhsZEg3oU-AHntQhBFCD?usp=sharing
### Expected behavior
Until yesterday it was working fine. It looks like some update was made by the maintainers today, and I got this error all of a sudden. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28089/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28088/comments | https://api.github.com/repos/huggingface/transformers/issues/28088/events | https://github.com/huggingface/transformers/pull/28088 | 2,044,314,999 | PR_kwDOCUB6oc5iJHBv | 28,088 | Support `DeepSpeed` when using auto find batch size | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T20:11:44 | 2024-01-16T05:27:40 | 2024-01-10T11:03:13 | CONTRIBUTOR | null | # What does this PR do?
This PR addresses https://github.com/huggingface/transformers/issues/24558 by letting the `Trainer` modify the deepspeed plugin *specifically when using the auto batch size finder*.
It refactors the propagation of the deepspeed arguments into its own function so that anything related to the train batch size can be modified on the fly if needed.
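The shape of the idea, as a hedged sketch (the helper name and plumbing are invented for illustration; only the `train_batch_size` and `train_micro_batch_size_per_gpu` keys are standard DeepSpeed config fields):

```python
def repropagate_batch_size(ds_config, per_device_bs, world_size, grad_accum_steps):
    # Re-derive the DeepSpeed batch-size fields after the auto batch
    # size finder shrinks the per-device batch size, so the config does
    # not keep the stale value computed at init time.
    cfg = dict(ds_config)
    cfg["train_micro_batch_size_per_gpu"] = per_device_bs
    cfg["train_batch_size"] = per_device_bs * world_size * grad_accum_steps
    return cfg
```

Calling something like this inside the training-loop retry keeps the batch-size fields mutually consistent each time the finder halves the batch size.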
Fixes # (issue)
Fixes https://github.com/huggingface/transformers/issues/24558
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28088/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28088/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28088",
"html_url": "https://github.com/huggingface/transformers/pull/28088",
"diff_url": "https://github.com/huggingface/transformers/pull/28088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28088.patch",
"merged_at": "2024-01-10T11:03:13"
} |
https://api.github.com/repos/huggingface/transformers/issues/28087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28087/comments | https://api.github.com/repos/huggingface/transformers/issues/28087/events | https://github.com/huggingface/transformers/pull/28087 | 2,044,310,026 | PR_kwDOCUB6oc5iJF7s | 28,087 | [docs] General doc fixes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-15T20:06:33 | 2023-12-18T18:44:13 | 2023-12-18T18:44:09 | MEMBER | null | Cleans up a few things:
- Fused AWQ benchmark [table](https://huggingface.co/docs/transformers/main/en/quantization#fused-awq-modules) broken because there wasn't a blankline between the title and table
- tidies up the new sections added in the GPU inference doc
- removes the `NerPipeline` from internal discussion [here](https://huggingface.slack.com/archives/C02GLJ5S0E9/p1702480089513399) (basically it's identical to the `TokenClassification` pipeline and it may cause confusion)
- removes `perf_train_tpu.md` because it is empty and doesn't add any value to the docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28087/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28087",
"html_url": "https://github.com/huggingface/transformers/pull/28087",
"diff_url": "https://github.com/huggingface/transformers/pull/28087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28087.patch",
"merged_at": "2023-12-18T18:44:09"
} |
https://api.github.com/repos/huggingface/transformers/issues/28086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28086/comments | https://api.github.com/repos/huggingface/transformers/issues/28086/events | https://github.com/huggingface/transformers/issues/28086 | 2,044,202,742 | I_kwDOCUB6oc552A72 | 28,086 | Add [`Mamba`] model | {
"login": "JLTastet",
"id": 8004066,
"node_id": "MDQ6VXNlcjgwMDQwNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8004066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JLTastet",
"html_url": "https://github.com/JLTastet",
"followers_url": "https://api.github.com/users/JLTastet/followers",
"following_url": "https://api.github.com/users/JLTastet/following{/other_user}",
"gists_url": "https://api.github.com/users/JLTastet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JLTastet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JLTastet/subscriptions",
"organizations_url": "https://api.github.com/users/JLTastet/orgs",
"repos_url": "https://api.github.com/users/JLTastet/repos",
"events_url": "https://api.github.com/users/JLTastet/events{/privacy}",
"received_events_url": "https://api.github.com/users/JLTastet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "... | null | 4 | 2023-12-15T18:43:49 | 2024-01-16T08:21:01 | null | NONE | null | ### Model description
Mamba is a new architecture proposed in [arXiv:2312.00752](https://arxiv.org/abs/2312.00752) by Albert Gu (CMU) and Tri Dao (Princeton).
It is inspired by structured state space models (SSMs), but with the addition of a selection mechanism that allows it to combine the ability of transformers to perform content-based reasoning with the performance of SSMs on long sequences. Mamba can be efficiently trained in parallel while also enjoying efficient inference by running recurrently.
The paper claims SoTA performance on various modalities, with performance tested up to 2.8B parameters. Crucially, the model cannot be implemented efficiently using only PyTorch operations; instead, it relies on optimised CUDA and `triton` kernels.
The original implementation by the authors is available at https://github.com/state-spaces/mamba/tree/main under an Apache 2.0 license.
Building on their implementation, I have started porting the model to 🤗 Transformers. This is **work in progress** 🚧, and can be found in my fork at https://github.com/JLTastet/transformers/tree/mamba.
I can open a PR, but in its current state my branch is not ready to be merged. I will also open an issue in the original repo to let the authors know about this, in case they want to chime in.
What I got working:
- Forward and backward passes.
- Loading checkpoints from the Hub using `AutoModel`.
What still needs some work:
- Even though backprop itself works, I get some CUDA errors when using `Trainer`, and I still don’t understand what causes them.
- Compiling the CUDA kernels takes ~1 hour. This does not happen with the original package, so I think they are using prebuilt binaries. I didn’t manage to port that part so far.
- I don’t think there is any non-CUDA fallback path, so this model probably cannot run without CUDA in its current form.
- When using `generate`, we should check that the optimised recurrent inference is used instead of the slower autoregressive inference.
- Tests, tests and moar tests.
- Most of the documentation needs to be written.
- Add the relevant dependencies.
- The code could certainly benefit from some cleanup (remove dead code, many TODO’s, update copyright notices, ...).
I am opening this issue to avoid duplicating work, since I saw [some mention](https://github.com/huggingface/transformers/issues/28049#issuecomment-1857574924) of Mamba today by @ArthurZucker.
My main motivation for porting this model is to learn a bit more about it (and about the internals of 🤗 Transformers) and to run more evals. Some of you probably know this library much better than me, so feel free to write your own implementation if you can do it better or quicker. Otherwise, don’t hesitate to build on top of my fork.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Paper: https://arxiv.org/abs/2312.00752 by @albertfgu and @tridao.
- Original repo by the authors: https://github.com/state-spaces/mamba/tree/main
- My WIP implementation in 🤗 Transformers: https://github.com/JLTastet/transformers/tree/mamba | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28086/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28086/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28085/comments | https://api.github.com/repos/huggingface/transformers/issues/28085/events | https://github.com/huggingface/transformers/pull/28085 | 2,044,194,088 | PR_kwDOCUB6oc5iIsbN | 28,085 | Fix Vip-llava docs | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T18:37:07 | 2023-12-21T08:26:17 | 2023-12-15T19:16:47 | CONTRIBUTOR | null | # What does this PR do?
Fixes some nits in the Vip-Llava docs; in particular, users should be aware that the correct prompt format differs from the Llava one, as stated on the model card: https://huggingface.co/llava-hf/vip-llava-7b-hf#how-to-use-the-model
Also updated the docs in the modeling code.
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28085/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28085",
"html_url": "https://github.com/huggingface/transformers/pull/28085",
"diff_url": "https://github.com/huggingface/transformers/pull/28085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28085.patch",
"merged_at": "2023-12-15T19:16:47"
} |
https://api.github.com/repos/huggingface/transformers/issues/28084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28084/comments | https://api.github.com/repos/huggingface/transformers/issues/28084/events | https://github.com/huggingface/transformers/pull/28084 | 2,044,189,740 | PR_kwDOCUB6oc5iIrf6 | 28,084 | Misc updates to CPU Dockerfiles | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-15T18:33:10 | 2023-12-20T11:12:11 | 2023-12-20T11:12:11 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes #28082
Currently, the `cpu` containers (and possibly the GPU ones as well) under the `docker` folder are out of date. The reason is that the [ubuntu:18.04](https://hub.docker.com/_/ubuntu/tags?page=1&name=18.04) base image was last updated over 6 months ago and most likely won't receive any more updates, which causes the containers to fail during the build.
I also noticed that even after updating to newer base images, the Torch CPU containers install the `GPU` distribution as well, which is not only unnecessary but also leads to large final containers.
```
$ docker images | grep transformers-pytorch-cpu
transformers-pytorch-cpu-pr latest 9e377aa5efd9 20 minutes ago 867MB
transformers-pytorch-cpu-main latest 9a88121eeb33 20 hours ago 1.61GB
```
This PR:
- Sets the default base to [ubuntu:22.04](https://hub.docker.com/_/ubuntu/tags?page=1&name=22.04) which should be supported for a couple of years
- Adds appropriate license headers to the files
- Removes unnecessary `Torch GPU` bits from `CPU` containers | {
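The CPU-only slimming in the last bullet could be sketched roughly as below. This is a hypothetical Dockerfile fragment for illustration, not the actual diff from the PR; the pinned index URL is PyTorch's official CPU wheel index.

```dockerfile
# Hypothetical sketch of a slim CPU image (not the actual PR diff).
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
# Install the CPU-only PyTorch wheel so the CUDA libraries are never pulled in.
RUN python3 -m pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
```

Pointing pip at the CPU wheel index is what keeps the final image well under the size of the default (CUDA-enabled) distribution.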
"url": "https://api.github.com/repos/huggingface/transformers/issues/28084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28084/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28084",
"html_url": "https://github.com/huggingface/transformers/pull/28084",
"diff_url": "https://github.com/huggingface/transformers/pull/28084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28084.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28083/comments | https://api.github.com/repos/huggingface/transformers/issues/28083/events | https://github.com/huggingface/transformers/pull/28083 | 2,044,157,025 | PR_kwDOCUB6oc5iIkVh | 28,083 | PatchtTST and PatchTSMixer fixes | {
"login": "wgifford",
"id": 79663411,
"node_id": "MDQ6VXNlcjc5NjYzNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/79663411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgifford",
"html_url": "https://github.com/wgifford",
"followers_url": "https://api.github.com/users/wgifford/followers",
"following_url": "https://api.github.com/users/wgifford/following{/other_user}",
"gists_url": "https://api.github.com/users/wgifford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgifford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgifford/subscriptions",
"organizations_url": "https://api.github.com/users/wgifford/orgs",
"repos_url": "https://api.github.com/users/wgifford/repos",
"events_url": "https://api.github.com/users/wgifford/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgifford/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/k... | null | 8 | 2023-12-15T18:04:41 | 2024-01-29T13:51:38 | 2024-01-29T10:09:27 | CONTRIBUTOR | null | # What does this PR do?
Makes PatchTST and PatchTSMixer interfaces more consistent -- using similar parameter names for method arguments and returned data objects.
Fixes a few minor bugs in PatchTST implementation.
Ensures more consistent output shapes with regression when an output_distribution is chosen (in both forward and generate methods).
Fixes slow tests for PatchTST. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28083/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28083",
"html_url": "https://github.com/huggingface/transformers/pull/28083",
"diff_url": "https://github.com/huggingface/transformers/pull/28083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28083.patch",
"merged_at": "2024-01-29T10:09:27"
} |
https://api.github.com/repos/huggingface/transformers/issues/28082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28082/comments | https://api.github.com/repos/huggingface/transformers/issues/28082/events | https://github.com/huggingface/transformers/issues/28082 | 2,044,146,496 | I_kwDOCUB6oc551zNA | 28,082 | Dockerfiles under "docker" folder specially CPU ones fail to build. | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T17:58:50 | 2023-12-20T11:12:28 | 2023-12-20T11:12:28 | CONTRIBUTOR | null | ### System Info
- tip of `main` hence `v4.36.1`
- Runing Docker version 24.0.7 on Linux Ubuntu 22.04
### Reproduction
Running:
```
docker build -f docker/transformers-pytorch-cpu/Dockerfile . --tag transformers-pytorch-cpu
```
from tip of `main` branch results in:
```
=> [5/6] COPY . transformers/ 1.5s
=> ERROR [6/6] RUN cd transformers/ && python3 -m pip install --no-cache-dir . 6.8s
------
> [6/6] RUN cd transformers/ && python3 -m pip install --no-cache-dir .:
0.939 Processing /workspace/transformers
0.942 Installing build dependencies: started
3.357 Installing build dependencies: finished with status 'done'
3.358 Getting requirements to build wheel: started
4.225 Getting requirements to build wheel: finished with status 'done'
4.228 Preparing metadata (pyproject.toml): started
5.367 Preparing metadata (pyproject.toml): finished with status 'done'
6.195 Collecting pyyaml>=5.1
6.237 Downloading PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (677 kB)
6.365 Collecting requests
6.375 Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
6.462 ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.19.3 (from transformers) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0)
6.462 ERROR: No matching distribution found for huggingface-hub<1.0,>=0.19.3
------
Dockerfile:22
--------------------
21 | COPY . transformers/
22 | >>> RUN cd transformers/ && \
23 | >>> python3 -m pip install --no-cache-dir .
24 |
--------------------
ERROR: failed to solve: process "/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir ." did not complete successfully: exit code: 1
```
### Expected behavior
Container builds, especially on recent Docker versions, should be successful.
I have fixes I can supply for the build issues; I'm going to submit the first set in a subsequent PR for your review, along with looking into the root cause.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28082/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28081/comments | https://api.github.com/repos/huggingface/transformers/issues/28081/events | https://github.com/huggingface/transformers/pull/28081 | 2,044,120,456 | PR_kwDOCUB6oc5iIcBv | 28,081 | More TF fixes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T17:45:30 | 2023-12-18T15:26:05 | 2023-12-18T15:26:04 | MEMBER | null | The TF `build()` PR brought back an old issue where TF would latch onto the first concrete shape it saw, which would then become the model's save signature. We avoid it by hitting `self._set_save_spec()` with flexible shapes ASAP when models are created.
This PR also replaces a few more instances of `build()` with `build_in_name_scope()` in our tests. This should hopefully fix the CI issues (cc @ydshieh) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28081/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28081",
"html_url": "https://github.com/huggingface/transformers/pull/28081",
"diff_url": "https://github.com/huggingface/transformers/pull/28081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28081.patch",
"merged_at": "2023-12-18T15:26:04"
} |
https://api.github.com/repos/huggingface/transformers/issues/28080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28080/comments | https://api.github.com/repos/huggingface/transformers/issues/28080/events | https://github.com/huggingface/transformers/pull/28080 | 2,044,009,323 | PR_kwDOCUB6oc5iID1Q | 28,080 | Update fixtures-image-utils | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-15T16:24:38 | 2023-12-15T16:58:38 | 2023-12-15T16:58:37 | MEMBER | null | The [hf-internal-testing/fixtures_image_utils](https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils) dataset fixture will break with the next release of datasets .
This dataset has a script that writes cache image files that are used in tests.
But in the next release the dataset is loaded from the Parquet files (so there is no local cache image file anymore).
FYI, the issue appears because of new security features: `datasets` now loads the datasets' Parquet exports by default, to avoid running dataset scripts where possible.
To fix this I opened a PR to remove the dataset script here: https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils/discussions/1
And in this PR I pass `revision="refs/pr/1"` in the tests to use the fixed dataset fixture and update the tests that rely on it.
IMO later we can merge the PR on HF and remove the `revision` argument (if we do this right now it will break tests in the other PRs on github)
cc @NielsRogge I think it's impacting tests you implemented | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28080/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28080",
"html_url": "https://github.com/huggingface/transformers/pull/28080",
"diff_url": "https://github.com/huggingface/transformers/pull/28080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28080.patch",
"merged_at": "2023-12-15T16:58:37"
} |
https://api.github.com/repos/huggingface/transformers/issues/28079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28079/comments | https://api.github.com/repos/huggingface/transformers/issues/28079/events | https://github.com/huggingface/transformers/issues/28079 | 2,043,961,955 | I_kwDOCUB6oc551GJj | 28,079 | Expose `gradient_as_bucket_view` as training argument for `DDP` | {
"login": "chiragjn",
"id": 10295418,
"node_id": "MDQ6VXNlcjEwMjk1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragjn",
"html_url": "https://github.com/chiragjn",
"followers_url": "https://api.github.com/users/chiragjn/followers",
"following_url": "https://api.github.com/users/chiragjn/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions",
"organizations_url": "https://api.github.com/users/chiragjn/orgs",
"repos_url": "https://api.github.com/users/chiragjn/repos",
"events_url": "https://api.github.com/users/chiragjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragjn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 2 | 2023-12-15T15:50:51 | 2023-12-15T16:18:06 | null | NONE | null | ### Feature request
As the title says, add `gradient_as_bucket_view` as a training argument (default `False`).
### Motivation
I have been experimenting with QLoRA fine-tuning of LLMs on multiple A10 GPUs, leveraging DDP. I was going through the torch docs at https://pytorch.org/docs/2.1/generated/torch.nn.parallel.DistributedDataParallel.html, and it seems the `gradient_as_bucket_view` argument can save a bit of memory. It would be great to have it added, since Accelerate's DDP plugin already supports it.
I am already experimenting with it to test it out
```python
class HFTrainer(Trainer):
def _wrap_model(self, model, training=True, dataloader=None):
outputs = super()._wrap_model(model, training, dataloader)
if self.args.parallel_mode == ParallelMode.DISTRIBUTED and self.accelerator.ddp_handler:
self.accelerator.ddp_handler.gradient_as_bucket_view = True
return outputs
```
### Your contribution
Let me know, I can also work on a PR for this as the change is relatively small | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28079/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28078/comments | https://api.github.com/repos/huggingface/transformers/issues/28078/events | https://github.com/huggingface/transformers/pull/28078 | 2,043,945,589 | PR_kwDOCUB6oc5iH14z | 28,078 | Fix bug for checkpoint saving on multi node training setting | {
"login": "dumpmemory",
"id": 64742282,
"node_id": "MDQ6VXNlcjY0NzQyMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/64742282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumpmemory",
"html_url": "https://github.com/dumpmemory",
"followers_url": "https://api.github.com/users/dumpmemory/followers",
"following_url": "https://api.github.com/users/dumpmemory/following{/other_user}",
"gists_url": "https://api.github.com/users/dumpmemory/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumpmemory/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumpmemory/subscriptions",
"organizations_url": "https://api.github.com/users/dumpmemory/orgs",
"repos_url": "https://api.github.com/users/dumpmemory/repos",
"events_url": "https://api.github.com/users/dumpmemory/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumpmemory/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-15T15:41:06 | 2023-12-15T16:39:22 | 2023-12-15T16:18:57 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
This fixes a checkpoint-saving bug in the multi-node training setting with a shared file system.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28078/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28078",
"html_url": "https://github.com/huggingface/transformers/pull/28078",
"diff_url": "https://github.com/huggingface/transformers/pull/28078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28078.patch",
"merged_at": "2023-12-15T16:18:56"
} |
https://api.github.com/repos/huggingface/transformers/issues/28077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28077/comments | https://api.github.com/repos/huggingface/transformers/issues/28077/events | https://github.com/huggingface/transformers/pull/28077 | 2,043,933,828 | PR_kwDOCUB6oc5iHzTT | 28,077 | Disable jitter noise during evaluation in SwitchTransformers | {
"login": "DaizeDong",
"id": 113810510,
"node_id": "U_kgDOBsicTg",
"avatar_url": "https://avatars.githubusercontent.com/u/113810510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaizeDong",
"html_url": "https://github.com/DaizeDong",
"followers_url": "https://api.github.com/users/DaizeDong/followers",
"following_url": "https://api.github.com/users/DaizeDong/following{/other_user}",
"gists_url": "https://api.github.com/users/DaizeDong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaizeDong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaizeDong/subscriptions",
"organizations_url": "https://api.github.com/users/DaizeDong/orgs",
"repos_url": "https://api.github.com/users/DaizeDong/repos",
"events_url": "https://api.github.com/users/DaizeDong/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaizeDong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T15:33:33 | 2023-12-18T15:08:56 | 2023-12-18T15:08:56 | CONTRIBUTOR | null | # What does this PR do?
<!-- Remove if not applicable -->
The jitter noise was mistakenly added during evaluation in GPTSanJapanese and SwitchTransformers, which would bring uncertainty to the evaluation results. Now the bug is fixed, and the implementation is the same as the [native code](https://github.com/tensorflow/mesh/blob/e6798a2610a2c2f4c4cd236d8214422cb1ecc00a/mesh_tensorflow/transformer/moe.py#L903-L905) of switch transformers.
The former implementation is:
```python
if self.jitter_noise > 0:
# Multiply the token inputs by the uniform distribution - adding some noise
hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
```
The fixed implementation is:
```python
if self.training and self.jitter_noise > 0:
# Multiply the token inputs by the uniform distribution - adding some noise
hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
```
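As a small, self-contained sketch (hypothetical module, not the actual SwitchTransformers code) of the `self.training` gating used in the fix:

```python
import torch
import torch.nn as nn

class JitterDemo(nn.Module):
    """Toy module that only applies jitter noise in training mode."""
    def __init__(self, jitter_noise: float = 0.1):
        super().__init__()
        self.jitter_noise = jitter_noise

    def forward(self, hidden_states):
        # nn.Module.train()/eval() toggles self.training, so the noise
        # is only multiplied in during training
        if self.training and self.jitter_noise > 0:
            hidden_states = hidden_states * torch.empty_like(hidden_states).uniform_(
                1.0 - self.jitter_noise, 1.0 + self.jitter_noise
            )
        return hidden_states

demo = JitterDemo().eval()  # eval mode: noise is skipped
x = torch.ones(2, 3)
```

In `eval()` mode the input passes through unchanged, so repeated evaluations give identical results.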
This PR also updates the outdated annotations in `configuration_switch_transformers.py`. Now the default values in the annotation area are the same as the values in the `__init__` for SwitchTransformersConfig.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28077/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28077",
"html_url": "https://github.com/huggingface/transformers/pull/28077",
"diff_url": "https://github.com/huggingface/transformers/pull/28077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28077.patch",
"merged_at": "2023-12-18T15:08:56"
} |
https://api.github.com/repos/huggingface/transformers/issues/28076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28076/comments | https://api.github.com/repos/huggingface/transformers/issues/28076/events | https://github.com/huggingface/transformers/issues/28076 | 2,043,873,647 | I_kwDOCUB6oc550wlv | 28,076 | The model's name is saved as model.safetensors while the logger reported its name as pytorch_model.bin, which is quite weird. | {
"login": "izyForever",
"id": 43177954,
"node_id": "MDQ6VXNlcjQzMTc3OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/43177954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izyForever",
"html_url": "https://github.com/izyForever",
"followers_url": "https://api.github.com/users/izyForever/followers",
"following_url": "https://api.github.com/users/izyForever/following{/other_user}",
"gists_url": "https://api.github.com/users/izyForever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izyForever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izyForever/subscriptions",
"organizations_url": "https://api.github.com/users/izyForever/orgs",
"repos_url": "https://api.github.com/users/izyForever/repos",
"events_url": "https://api.github.com/users/izyForever/events{/privacy}",
"received_events_url": "https://api.github.com/users/izyForever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-15T14:57:13 | 2024-01-24T13:49:25 | 2023-12-22T15:16:48 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?


When I was training my model, I found there was no pytorch_model.bin but a model.safetensors,
which I think is misleading in some ways.
The logger (screenshots not preserved) reported the name as `pytorch_model.bin`.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. use [the p-tuning script](https://github.com/THUDM/ChatGLM2-6B/blob/main/ptuning/main.py) to fine tuning chatglm2-6b
2. On Windows 10, I use a .bat file to run the script, as below, which works well but produces inconsistent logger output.
```
set PRE_SEQ_LEN=128
set LR=2e-2
python main.py ^
--do_train ^
--train_file AdvertiseGen/train.json ^
--validation_file AdvertiseGen/dev.json ^
--preprocessing_num_workers 10 ^
--prompt_column content ^
--response_column summary ^
--overwrite_cache ^
--model_name_or_path E://ChatGLM2-6B ^
--output_dir output/adgen-chatglm2-6b-pt-%PRE_SEQ_LEN%-%LR% ^
--overwrite_output_dir ^
--max_source_length 64 ^
--max_target_length 128 ^
--per_device_train_batch_size 1 ^
--per_device_eval_batch_size 1 ^
--gradient_accumulation_steps 16 ^
--predict_with_generate ^
--max_steps 3 ^
--logging_steps 1 ^
--save_steps 1 ^
--learning_rate %LR% ^
--pre_seq_len %PRE_SEQ_LEN% ^
--quantization_bit 4
```
### Expected behavior
I think it would be better if the file name matched what the logger shows. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28076/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28076/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28075/comments | https://api.github.com/repos/huggingface/transformers/issues/28075/events | https://github.com/huggingface/transformers/issues/28075 | 2,043,781,068 | I_kwDOCUB6oc550Z_M | 28,075 | Adding support for a static shape `generate` | {
"login": "alessandropalla",
"id": 28634533,
"node_id": "MDQ6VXNlcjI4NjM0NTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/28634533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alessandropalla",
"html_url": "https://github.com/alessandropalla",
"followers_url": "https://api.github.com/users/alessandropalla/followers",
"following_url": "https://api.github.com/users/alessandropalla/following{/other_user}",
"gists_url": "https://api.github.com/users/alessandropalla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alessandropalla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alessandropalla/subscriptions",
"organizations_url": "https://api.github.com/users/alessandropalla/orgs",
"repos_url": "https://api.github.com/users/alessandropalla/repos",
"events_url": "https://api.github.com/users/alessandropalla/events{/privacy}",
"received_events_url": "https://api.github.com/users/alessandropalla/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-12-15T14:00:40 | 2024-01-30T10:10:20 | null | NONE | null | ### Feature request
Many AI inference accelerators (Intel NPU, IPU, TPU, etc.) require static shapes to reach maximum performance. Static shapes allow the NN graph compiler to improve memory management, scheduling, and overall network performance.
However, in transformers the `generate` function uses dynamic shapes and increases the size of the input (and KV cache) at every successive step. I opened this issue to implement a way to do LLM generation inference with the transformers API while maintaining static shapes:
The trick is to use left padding and to shift the KV-cached values left during inference. By setting the `position_ids` correctly we get a correct inference. Attached is a GIF that hopefully explains how it works:

For the first inference you pad left and run as usual. It is important to set the `attention_mask` and `position_ids` accordingly. In the KV-cached part you only need to pass the new token and the proper `position_ids` and `attention_mask`, while the cached values are shifted left. This works because in the MHA block the cached keys and values are concatenated to the left of the new ones, and having left padding makes the new token's key and value tensors adjacent to the cached values.
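As a toy, plain-Python illustration (hypothetical 1-D attention mask) of the shift-left bookkeeping:

```python
# Shift everything one slot to the left and append the new entry,
# keeping the total (static) length unchanged.
def lshift_insert(seq, value):
    return seq[1:] + [value]

mask = [0, 0, 0, 1, 1, 1]      # three left-padding slots, three real tokens
mask = lshift_insert(mask, 1)  # one new token was generated
print(mask)                    # [0, 0, 1, 1, 1, 1] -- length is still 6
```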
Here is a snippet for a function that implements this. The code is not production-ready, but it is a POC that shows how this is supposed to work, both with and without KV caching:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Llama-2-7b-chat-hf"
device = "cpu" # use the accelerator that you have or use "cpu"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
# Load model
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
# Utility function to compute shift left and insert a value into a tensor
def lshift_insert(tensor, value):
tensor = torch.roll(tensor, shifts=-1, dims=-1)
tensor[0, -1] = value
return tensor
# Generate function
@torch.no_grad()
def generate_with_static_shape(model, input_ids, attention_mask=None, max_length=None, use_past=True, pad_token_id=None, **kwargs):
    # Get the sequence length
    batch, seq_length = input_ids.shape
    if pad_token_id is None:
        raise RuntimeError("pad_token_id is not set but is needed for static-shape generation")
    # Pad the attention mask on the left
    if attention_mask is None:
        attention_mask = torch.ones_like(input_ids, dtype=torch.int32).to(model.device)
    attention_mask_padding = torch.zeros((batch, max_length - seq_length), dtype=input_ids.dtype, device=input_ids.device)
    attention_mask = torch.cat((attention_mask_padding, attention_mask), dim=-1)
    # Pad input_ids with left padding
    padding_input_ids = pad_token_id * torch.ones((batch, max_length - seq_length), dtype=input_ids.dtype, device=input_ids.device)
    input_ids = torch.cat((padding_input_ids, input_ids), dim=-1).to(model.device)
    # Set the proper position ids
    position_ids = kwargs.get("position_ids", None)
    if position_ids is None:
        position_ids = torch.tensor([[0] * (max_length - seq_length) + list(range(seq_length))], dtype=torch.int32).to(model.device)
    else:
        raise RuntimeError("Cannot set position_ids in static-shape generation")
    # past_key_values for the KV cache
    past_key_values = None
    for idx in range(seq_length, max_length):
        # Run the inference
        out = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=past_key_values)
        # Greedy search as an example; in general this is where you select the next token with your fancy decoding algorithm
        logits = out.logits
        new_token = torch.argmax(logits[0, -1, :])
        yield new_token
        if not use_past:
            # Shift input ids and position ids left and set the new token and idx to the proper values
            input_ids = lshift_insert(input_ids, new_token)
            position_ids = lshift_insert(position_ids, idx)
        else:
            # Set input_ids and position_ids to their new values
            input_ids = torch.tensor([[new_token]], dtype=input_ids.dtype).to(model.device)
            position_ids = torch.tensor([[idx]], dtype=input_ids.dtype).to(model.device)
            # Select the proper KV-cached keys and values for the next inference
            past_key_values = [[item[:, :, 1:, :] for item in layer_past] for layer_past in out.past_key_values]
        # Shift the attention mask left and set the last value to one
        attention_mask = lshift_insert(attention_mask, 1)
prompt = "List all numbers in the Fibonacci sequence: 1, 1, 2, 3, "
max_length = 512
# Tokenize
input_ids = tokenizer(prompt, return_tensors='pt')['input_ids'].to(device)
print(prompt, end="", flush=True)
results = generate_with_static_shape(model, input_ids=input_ids, max_length=max_length, use_past=True, pad_token_id=tokenizer.pad_token_id)
for new_token_id in results:
token = tokenizer.decode([new_token_id], skip_special_tokens=True)
# Not very good as depending on the tokenizer it might or might not add spaces
print(token , end="", flush=True)
```
### Motivation
Enabling AI inference accelerators to be used with the `generate` API.
### Your contribution
I'll be happy to help integrate the code into the `transformers` library. Let me know how I can help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28075/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28074/comments | https://api.github.com/repos/huggingface/transformers/issues/28074/events | https://github.com/huggingface/transformers/issues/28074 | 2,043,766,034 | I_kwDOCUB6oc550WUS | 28,074 | Inference speed becomes slower after quantization | {
"login": "xinyual",
"id": 74362153,
"node_id": "MDQ6VXNlcjc0MzYyMTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/74362153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinyual",
"html_url": "https://github.com/xinyual",
"followers_url": "https://api.github.com/users/xinyual/followers",
"following_url": "https://api.github.com/users/xinyual/following{/other_user}",
"gists_url": "https://api.github.com/users/xinyual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinyual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinyual/subscriptions",
"organizations_url": "https://api.github.com/users/xinyual/orgs",
"repos_url": "https://api.github.com/users/xinyual/repos",
"events_url": "https://api.github.com/users/xinyual/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinyual/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-12-15T13:51:27 | 2024-01-15T10:54:24 | 2024-01-14T08:51:51 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.10
- Python version: 3.8.17
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use the script from here to fine-tune my quantized Mistral-7B-Instruct model with LoRA: https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=Ybeyl20n3dYH
After training, I run inference as:
```
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(base_model,
quantization_config=bnb_config,
device_map="auto",
torch_dtype=torch.bfloat16,
trust_remote_code=True)
model = PeftModel.from_pretrained(
model,
lora_weights,
device_map ="auto"
)
model.half()
model.eval()
```
The bnb_config is the same as in the training process.
After that, I run inference like:
```
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
do_sample=True,
max_length=input_l + max_new_tokens + 100,
temperature=0.001
)
```
Compared with the model without quantization, the speed is slower. To my knowledge, low-bit computation should make it faster. My hardware is an AWS g5.12xlarge. Is that normal?
### Expected behavior
Please tell me whether it is normal for it to be slower, or whether I made some mistakes in my scripts. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28074/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28073/comments | https://api.github.com/repos/huggingface/transformers/issues/28073/events | https://github.com/huggingface/transformers/pull/28073 | 2,043,765,991 | PR_kwDOCUB6oc5iHOei | 28,073 | Fix weights not properly initialized due to shape mismatch | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-15T13:51:25 | 2023-12-18T20:08:05 | 2023-12-18T20:08:05 | COLLABORATOR | null | Currently, if there is a weight shape mismatch between the model and the checkpoint, and if `ignore_mismatched_sizes=True`, that/those weight(s) won't get initialized by the model's `_init_weights` method and can end up with crazy values like `1e37`.
This can make the training get a NaN loss value from the beginning and never make any progress.
One example is by running `src/transformers/modeling_utils.py` (add `ignore_mismatched_sizes=True`).
We usually set `ignore_mismatched_sizes=True` when we want to perform classification tasks using an existing model but for another task with a different number of targets.
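A toy sketch (plain PyTorch, hypothetical layer sizes) of what re-initializing a mismatched classification head looks like:

```python
import torch.nn as nn

# Hypothetical sizes: the checkpoint head had a different number of targets,
# so the new head must be explicitly (re-)initialized rather than being left
# with uninitialized memory.
head = nn.Linear(768, 10)               # new task: 10 targets
nn.init.normal_(head.weight, std=0.02)  # the kind of init _init_weights applies
nn.init.zeros_(head.bias)
```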
This PR aims to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28073/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28073",
"html_url": "https://github.com/huggingface/transformers/pull/28073",
"diff_url": "https://github.com/huggingface/transformers/pull/28073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28073.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28072/comments | https://api.github.com/repos/huggingface/transformers/issues/28072/events | https://github.com/huggingface/transformers/issues/28072 | 2,043,757,981 | I_kwDOCUB6oc550UWd | 28,072 | Can i convert open-clip trained models (.pt) using code “src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py” ? | {
"login": "jzssz",
"id": 112179055,
"node_id": "U_kgDOBq-3bw",
"avatar_url": "https://avatars.githubusercontent.com/u/112179055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzssz",
"html_url": "https://github.com/jzssz",
"followers_url": "https://api.github.com/users/jzssz/followers",
"following_url": "https://api.github.com/users/jzssz/following{/other_user}",
"gists_url": "https://api.github.com/users/jzssz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzssz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzssz/subscriptions",
"organizations_url": "https://api.github.com/users/jzssz/orgs",
"repos_url": "https://api.github.com/users/jzssz/repos",
"events_url": "https://api.github.com/users/jzssz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzssz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 2 | 2023-12-15T13:46:08 | 2023-12-16T10:46:50 | null | NONE | null | ### Model description
open_clip: https://github.com/mlfoundations/open_clip
I used open_clip to train a model and got "epoch_400.pt".
**I want to convert this "epoch_400.pt" to HF, so I run:**
python src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py --pytorch_dump_folder_path "./openclip_syf_hf" --checkpoint_path "/openclip_output/2023_12_07-15_24_24-model_ViT-B-32-lr_0.0005-b_256-j_8-p_amp/checkpoints/epoch_400.pt" --config_path "/open_clip-main/src/open_clip/model_configs/ViT-B-32.json"
**but get bug:**
Traceback (most recent call last):
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/clip/clip.py", line 130, in load
model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval()
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/jit/_serialization.py", line 164, in load
cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py", line 150, in <module>
convert_clip_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py", line 120, in convert_clip_checkpoint
pt_model, _ = load(checkpoint_path, device="cpu", jit=False)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/clip/clip.py", line 137, in load
state_dict = torch.load(opened_file, map_location="cpu")
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/serialization.py", line 795, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/serialization.py", line 1002, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
**So I am wondering: can open_clip-trained models (.pt) be converted using “src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py”?**
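A likely cause (an assumption, not confirmed in this thread): the conversion script expects an OpenAI CLIP checkpoint, while open_clip training checkpoints usually wrap the weights in a dict under a "state_dict" key, sometimes with a "module." prefix left over from DistributedDataParallel. A hedged sketch of unwrapping such a checkpoint before attempting any conversion:

```python
def unwrap_open_clip_checkpoint(ckpt):
    # open_clip training checkpoints are typically dicts like
    # {"epoch": ..., "state_dict": {...}}; plain weight dicts pass through.
    state_dict = ckpt.get("state_dict", ckpt)
    # Strip a possible DistributedDataParallel "module." prefix from keys.
    return {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in state_dict.items()
    }

ckpt = {"epoch": 400, "state_dict": {"module.visual.proj": [0.1], "logit_scale": [2.6]}}
weights = unwrap_open_clip_checkpoint(ckpt)
```

Even after unwrapping, the key layout of open_clip models may not match what this OpenAI-CLIP-specific script expects, so this only addresses the loading error, not the full conversion.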
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28072/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28071/comments | https://api.github.com/repos/huggingface/transformers/issues/28071/events | https://github.com/huggingface/transformers/pull/28071 | 2,043,757,468 | PR_kwDOCUB6oc5iHMoj | 28,071 | Fix SpeechT5 `decoder_attention_mask` shape | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-15T13:45:49 | 2024-01-10T18:00:37 | null | COLLABORATOR | null | # What does this PR do?
#26598 rightfully raised a warning when passing labels to `SpeechT5`. In that case, a reduction factor is applied to the `labels` but not to the accompanying `decoder_attention_mask`, resulting in a shape mismatch.
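As a hedged illustration (the PR diff itself is not shown here, and the helper name is mine): since the labels are subsampled by `reduction_factor`, the mask presumably needs the same treatment so the two stay aligned:

```python
def reduce_decoder_attention_mask(mask, reduction_factor):
    # Keep every reduction_factor-th position, mirroring how SpeechT5
    # subsamples labels: labels[:, reduction_factor - 1 :: reduction_factor].
    return [row[reduction_factor - 1 :: reduction_factor] for row in mask]

reduced = reduce_decoder_attention_mask([[1, 1, 1, 1, 0, 0]], 2)
```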
Fixes #26598
cc @amyeroberts @sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28071/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28071",
"html_url": "https://github.com/huggingface/transformers/pull/28071",
"diff_url": "https://github.com/huggingface/transformers/pull/28071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28071.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28070/comments | https://api.github.com/repos/huggingface/transformers/issues/28070/events | https://github.com/huggingface/transformers/issues/28070 | 2,043,706,277 | I_kwDOCUB6oc550Hul | 28,070 | TypeError: 'ModelMetaclass' object is not iterable | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-15T13:11:45 | 2024-01-23T08:03:53 | 2024-01-23T08:03:53 | NONE | null | ### System Info
RTX 3090
### Who can help?
@younesbelkada while working on:
```
import os
from dataclasses import dataclass, field
from typing import Optional
from datasets.arrow_dataset import Dataset
import torch
from datasets import load_dataset
from peft import LoraConfig
from peft import AutoPeftModelForCausalLM
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
AutoTokenizer,
TrainingArguments,
)
from pydantic_settings import BaseSettings
from trl import SFTTrainer
torch.manual_seed(42)
# @dataclass
class ScriptArguments(BaseSettings):
"""
These arguments vary depending on how many GPUs you have, what their capacity and features are, and what size model you want to train.
"""
local_rank: Optional[int] = field(default=-1, metadata={"help": "Used for multi-gpu"})
per_device_train_batch_size: Optional[int] = field(default=4)
per_device_eval_batch_size: Optional[int] = field(default=4)
gradient_accumulation_steps: Optional[int] = field(default=4)
learning_rate: Optional[float] = field(default=2e-5)
max_grad_norm: Optional[float] = field(default=0.3)
weight_decay: Optional[int] = field(default=0.01)
lora_alpha: Optional[int] = field(default=16)
lora_dropout: Optional[float] = field(default=0.1)
lora_r: Optional[int] = field(default=32)
max_seq_length: Optional[int] = field(default=512)
model_name: Optional[str] = field(
default="mistralai/Mistral-7B-Instruct-v0.1",
metadata={
"help": "The model that you want to train from the Hugging Face hub. E.g. gpt2, gpt2-xl, bert, etc."
}
)
dataset_name: Optional[str] = field(
default="iamtarun/python_code_instructions_18k_alpaca",
metadata={"help": "The preference dataset to use."},
)
use_4bit: Optional[bool] = field(
default=True,
metadata={"help": "Activate 4bit precision base model loading"},
)
use_nested_quant: Optional[bool] = field(
default=False,
metadata={"help": "Activate nested quantization for 4bit base models"},
)
bnb_4bit_compute_dtype: Optional[str] = field(
default="float16",
metadata={"help": "Compute dtype for 4bit base models"},
)
bnb_4bit_quant_type: Optional[str] = field(
default="nf4",
metadata={"help": "Quantization type fp4 or nf4"},
)
num_train_epochs: Optional[int] = field(
default=100,
metadata={"help": "The number of training epochs for the reward model."},
)
fp16: Optional[bool] = field(
default=False,
metadata={"help": "Enables fp16 training."},
)
bf16: Optional[bool] = field(
default=True,
metadata={"help": "Enables bf16 training."},
)
packing: Optional[bool] = field(
default=False,
metadata={"help": "Use packing dataset creating."},
)
gradient_checkpointing: Optional[bool] = field(
default=True,
metadata={"help": "Enables gradient checkpointing."},
)
optim: Optional[str] = field(
default="paged_adamw_32bit",
metadata={"help": "The optimizer to use."},
)
lr_scheduler_type: str = field(
default="constant",
metadata={"help": "Learning rate schedule. Constant a bit better than cosine, and has advantage for analysis"},
)
max_steps: int = field(default=1000000, metadata={"help": "How many optimizer update steps to take"})
warmup_ratio: float = field(default=0.03, metadata={"help": "Fraction of steps to do a warmup for"})
group_by_length: bool = field(
default=True,
metadata={
"help": "Group sequences into batches with same length. Saves memory and speeds up training considerably."
},
)
save_steps: int = field(default=50, metadata={"help": "Save checkpoint every X updates steps."})
logging_steps: int = field(default=50, metadata={"help": "Log every X updates steps."})
merge_and_push: Optional[bool] = field(
default=False,
metadata={"help": "Merge and push weights after training"},
)
output_dir: str = field(
default="./results_packing",
metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
)
parser = HfArgumentParser(ScriptArguments)
script_args = parser.parse_args_into_dataclasses()[0]
```
ERROR:
```
/usr/local/lib/python3.10/dist-packages/trl/trainer/ppo_config.py:141: UserWarning: The `optimize_cuda_cache` arguement will be deprecated soon, please use `optimize_device_cache` instead.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ('settings_',)`.
warnings.warn(
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 117
107 merge_and_push: Optional[bool] = field(
108 default=False,
109 metadata={"help": "Merge and push weights after training"},
110 )
111 output_dir: str = field(
112 default="./results_packing",
113 metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
114 )
--> 117 parser = HfArgumentParser(ScriptArguments)
118 script_args = parser.parse_args_into_dataclasses()[0]
File /usr/local/lib/python3.10/dist-packages/transformers/hf_argparser.py:134, in HfArgumentParser.__init__(self, dataclass_types, **kwargs)
132 if dataclasses.is_dataclass(dataclass_types):
133 dataclass_types = [dataclass_types]
--> 134 self.dataclass_types = list(dataclass_types)
135 for dtype in self.dataclass_types:
136 self._add_dataclass_arguments(dtype)
TypeError: 'ModelMetaclass' object is not iterable
```
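For context (a sketch, not part of the original report): `HfArgumentParser` only accepts `dataclasses` types. A pydantic `BaseSettings` subclass is an instance of `ModelMetaclass`, fails the `dataclasses.is_dataclass` check, and is then passed to `list(...)`, which raises the error above. Keeping `ScriptArguments` a plain `@dataclass` avoids it:

```python
import dataclasses
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScriptArguments:
    local_rank: Optional[int] = field(default=-1, metadata={"help": "Used for multi-gpu"})
    model_name: str = field(default="mistralai/Mistral-7B-Instruct-v0.1")

# This is effectively the check HfArgumentParser performs before wrapping
# a single dataclass type in a list; a pydantic class fails it.
assert dataclasses.is_dataclass(ScriptArguments)
```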
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
shared above
Additionally:
```
%pip install transformers peft bitsandbytes accelerate trl pydantic-settings --quiet
```
### Expected behavior
needs to run
check:
https://python.plainenglish.io/intruct-fine-tuning-mistral-7b-model-with-your-custom-data-7eb22921a483 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28070/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28069/comments | https://api.github.com/repos/huggingface/transformers/issues/28069/events | https://github.com/huggingface/transformers/issues/28069 | 2,043,705,426 | I_kwDOCUB6oc550HhS | 28,069 | Add time progress bar to track the group_by_length computation for bigger datasets on Trainer | {
"login": "T-Almeida",
"id": 19167453,
"node_id": "MDQ6VXNlcjE5MTY3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/19167453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/T-Almeida",
"html_url": "https://github.com/T-Almeida",
"followers_url": "https://api.github.com/users/T-Almeida/followers",
"following_url": "https://api.github.com/users/T-Almeida/following{/other_user}",
"gists_url": "https://api.github.com/users/T-Almeida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/T-Almeida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/T-Almeida/subscriptions",
"organizations_url": "https://api.github.com/users/T-Almeida/orgs",
"repos_url": "https://api.github.com/users/T-Almeida/repos",
"events_url": "https://api.github.com/users/T-Almeida/events{/privacy}",
"received_events_url": "https://api.github.com/users/T-Almeida/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 1 | 2023-12-15T13:11:08 | 2023-12-18T00:07:51 | null | NONE | null | ### Feature request
When setting the flag `group_by_length=True` on `TrainingArguments`, there is no user feedback about the operations running in the background, namely collecting the lengths of all samples and running the grouping algorithm. This can be frustrating with large datasets (millions of samples) on slow IO devices, since it appears that the Trainer is hanging and never starts!
More precisely, in my current setup, I found that the following line takes almost 2h to finish (due to my slow IO: reading from an NFS on an old machine).
https://github.com/huggingface/transformers/blob/c817c17dbe264329b9f9d227b48ce70edd9e3204/src/transformers/trainer_pt_utils.py#L585
NOTE 1): wouldn't using `.select_columns(model_input_name)` and then iterating be faster? This assumes the dataset has more features, like "attention_mask", for instance.
I believe more feedback could be given to the user, such as the estimated time to finish. (Also, the dataset lengths could be stored under `.cache`.)
NOTE 2): After realising this issue, I also noticed the `length_column_name` flag. Maybe raising a warning would let users know that, on larger datasets, they should precompute the lengths. By doing so, the time went from 2h to 15-20 min.
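As a hedged sketch of NOTE 2 (the function name is illustrative): precomputing a "length" column once, e.g. via `datasets.map(batched=True)`, lets `length_column_name="length"` skip the slow per-sample pass:

```python
def add_length_column(batch):
    # `batch` is a dict of lists, as datasets.map(batched=True) provides.
    batch["length"] = [len(ids) for ids in batch["input_ids"]]
    return batch

# Stand-in for a tokenized batch; in practice this runs inside dataset.map
# and the resulting column is cached with the dataset.
batch = {"input_ids": [[1, 2, 3], [4, 5], [6, 7, 8, 9]]}
out = add_length_column(batch)
```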
### Motivation
I was training a model on an LM task. My dataset has 22M samples with an average length of +/- 512. When I ran the model with `group_by_length=True`, I thought something was wrong because training was not starting (I was actually writing a bug report, because I thought it was an issue with the Trainer). After further inspection, I noticed that the main culprit was the length computation, which is really slow on my current setup.
### Your contribution
If you feel this issue is worth addressing, I am willing to open a PR under your guidance.
"url": "https://api.github.com/repos/huggingface/transformers/issues/28069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28069/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28068/comments | https://api.github.com/repos/huggingface/transformers/issues/28068/events | https://github.com/huggingface/transformers/pull/28068 | 2,043,661,913 | PR_kwDOCUB6oc5iG3yT | 28,068 | [`Mixtral`] update conversion script to reflect new changes | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-15T12:40:57 | 2023-12-15T13:05:21 | 2023-12-15T13:05:20 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/discussions/41
The sliding window has recently been removed from the Mixtral config, so we need to reflect this change in the conversion script.
I also wonder if we should ignore `sliding_window` in MixtralAttention & MixtralFlashAttention, as it will never be used.
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28068/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28068",
"html_url": "https://github.com/huggingface/transformers/pull/28068",
"diff_url": "https://github.com/huggingface/transformers/pull/28068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28068.patch",
"merged_at": "2023-12-15T13:05:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/28067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28067/comments | https://api.github.com/repos/huggingface/transformers/issues/28067/events | https://github.com/huggingface/transformers/issues/28067 | 2,043,646,283 | I_kwDOCUB6oc55z5FL | 28,067 | No module named 'clip' | {
"login": "jzssz",
"id": 112179055,
"node_id": "U_kgDOBq-3bw",
"avatar_url": "https://avatars.githubusercontent.com/u/112179055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzssz",
"html_url": "https://github.com/jzssz",
"followers_url": "https://api.github.com/users/jzssz/followers",
"following_url": "https://api.github.com/users/jzssz/following{/other_user}",
"gists_url": "https://api.github.com/users/jzssz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzssz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzssz/subscriptions",
"organizations_url": "https://api.github.com/users/jzssz/orgs",
"repos_url": "https://api.github.com/users/jzssz/repos",
"events_url": "https://api.github.com/users/jzssz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzssz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T12:30:25 | 2023-12-15T13:32:17 | 2023-12-15T13:32:17 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in> yes
- Using distributed or parallel set-up in script?: <fill in> yes
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
"src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py"
"from clip import load"
bug:**No module named 'clip'.**
So I used “pip install clip” to install clip, but got
another bug: **cannot import name 'load' from 'clip'**
So I'm wondering whether the way I installed the clip package is wrong. I'm hoping to find the correct way to install this package so I can run "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py".
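A likely fix (an assumption, not confirmed in this thread): the `clip` package on PyPI is unrelated to OpenAI's CLIP. The conversion script imports `load` from OpenAI's implementation, which is typically installed straight from GitHub:

```shell
# Remove the unrelated PyPI package, then install OpenAI's CLIP,
# which provides `clip.load`.
pip uninstall -y clip
pip install git+https://github.com/openai/CLIP.git
```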
### Expected behavior
I'm hoping to find the correct way to install the “clip” package so I can run "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py".
"url": "https://api.github.com/repos/huggingface/transformers/issues/28067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28067/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28066/comments | https://api.github.com/repos/huggingface/transformers/issues/28066/events | https://github.com/huggingface/transformers/issues/28066 | 2,043,567,873 | I_kwDOCUB6oc55zl8B | 28,066 | Tokenizer padding does not work when return_tensor="pt" | {
"login": "simeneide",
"id": 7136076,
"node_id": "MDQ6VXNlcjcxMzYwNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simeneide",
"html_url": "https://github.com/simeneide",
"followers_url": "https://api.github.com/users/simeneide/followers",
"following_url": "https://api.github.com/users/simeneide/following{/other_user}",
"gists_url": "https://api.github.com/users/simeneide/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simeneide/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simeneide/subscriptions",
"organizations_url": "https://api.github.com/users/simeneide/orgs",
"repos_url": "https://api.github.com/users/simeneide/repos",
"events_url": "https://api.github.com/users/simeneide/events{/privacy}",
"received_events_url": "https://api.github.com/users/simeneide/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2023-12-15T11:35:17 | 2024-01-26T11:50:43 | 2023-12-16T16:24:50 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from datasets import load_dataset
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
raw_datasets = load_dataset("glue", "mrpc")
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
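For context (a sketch, not from the original report): `return_tensors="pt"` inside a batched `map` fails because, without padding, the tokenized sequences are ragged and cannot be stacked into one rectangular tensor. The usual fix is to drop `return_tensors="pt"` here and let a collator such as `DataCollatorWithPadding` pad each batch at training time; the padding step amounts to:

```python
def pad_batch(sequences, pad_id=0):
    # Pad ragged token-id lists to the batch max length so they can be
    # stacked into one rectangular tensor.
    max_len = max(len(s) for s in sequences)
    return [s + [pad_id] * (max_len - len(s)) for s in sequences]

padded = pad_batch([[5, 6, 7], [8, 9]])
```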
Resulting traceback
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File [~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:748](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide%40schibsted.com/~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:748), in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis)
[747](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=746) if not is_tensor(value):
--> [748](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=747) tensor = as_tensor(value)
[750](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=749) # Removing this for now in favor of controlling the shape with `prepend_batch_axis`
[751](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=750) # # at-least2d
[752](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=751) # if tensor.ndim > 2:
[753](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=752) # tensor = tensor.squeeze(0)
[754](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=753) # elif tensor.ndim < 2:
[755](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=754) # tensor = tensor[None, :]
File [~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:720](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide%40schibsted.com/~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:720), in BatchEncoding.convert_to_tensors..as_tensor(value, dtype)
[719](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=718) return torch.tensor(np.array(value))
--> [720](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=719) return torch.tensor(value)
ValueError: expected sequence of length 52 at dim 1 (got 77)
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
[/home/simen.eide](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide)@schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py in line 8
[6](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=5) def tokenize_function(example):
[7](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=6) return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")
----> [8](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=7) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
File ~/.local/lib/python3.10/site-packages/datasets/dataset_dict.py:855, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
    852 if cache_file_names is None:
    853     cache_file_names = {k: None for k in self}
    854 return DatasetDict(
--> 855     {
    856         k: dataset.map(
    857             function=function,
    858             with_indices=with_indices,
    859             with_rank=with_rank,
    860             input_columns=input_columns,
    861             batched=batched,
    862             batch_size=batch_size,
    863             drop_last_batch=drop_last_batch,
    864             remove_columns=remove_columns,
    865             keep_in_memory=keep_in_memory,
    866             load_from_cache_file=load_from_cache_file,
    867             cache_file_name=cache_file_names[k],
    868             writer_batch_size=writer_batch_size,
    869             features=features,
    870             disable_nullable=disable_nullable,
    871             fn_kwargs=fn_kwargs,
    872             num_proc=num_proc,
    873             desc=desc,
    874         )
    875         for k, dataset in self.items()
    876     }
    877 )
File ~/.local/lib/python3.10/site-packages/datasets/dataset_dict.py:856, in <dictcomp>(.0)
    852 if cache_file_names is None:
    853     cache_file_names = {k: None for k in self}
    854 return DatasetDict(
    855     {
--> 856         k: dataset.map(
    857             function=function,
    858             with_indices=with_indices,
    859             with_rank=with_rank,
    860             input_columns=input_columns,
    861             batched=batched,
    862             batch_size=batch_size,
    863             drop_last_batch=drop_last_batch,
    864             remove_columns=remove_columns,
    865             keep_in_memory=keep_in_memory,
    866             load_from_cache_file=load_from_cache_file,
    867             cache_file_name=cache_file_names[k],
    868             writer_batch_size=writer_batch_size,
    869             features=features,
    870             disable_nullable=disable_nullable,
    871             fn_kwargs=fn_kwargs,
    872             num_proc=num_proc,
    873             desc=desc,
    874         )
    875         for k, dataset in self.items()
    876     }
    877 )
File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:591, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
    589 self: "Dataset" = kwargs.pop("self")
    590 # apply actual function
--> 591 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    592 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    593 for dataset in datasets:
    594     # Remove task templates if a column mapping of the template is no longer valid
File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:556, in transmit_format.<locals>.wrapper(*args, **kwargs)
    549 self_format = {
    550     "type": self._format_type,
    551     "format_kwargs": self._format_kwargs,
    552     "columns": self._format_columns,
    553     "output_all_columns": self._output_all_columns,
    554 }
    555 # apply actual function
--> 556 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    557 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    558 # re-apply format to the output
File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3089, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   3082 if transformed_dataset is None:
   3083     with logging.tqdm(
   3084         disable=not logging.is_progress_bar_enabled(),
   3085         unit=" examples",
   3086         total=pbar_total,
   3087         desc=desc or "Map",
   3088     ) as pbar:
-> 3089         for rank, done, content in Dataset._map_single(**dataset_kwargs):
   3090             if done:
   3091                 shards_done += 1
File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3466, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
   3462 indices = list(
   3463     range(*(slice(i, i + batch_size).indices(shard.num_rows)))
   3464 )  # Something simpler?
   3465 try:
-> 3466     batch = apply_function_on_filtered_inputs(
   3467         batch,
   3468         indices,
   3469         check_same_num_examples=len(shard.list_indexes()) > 0,
   3470         offset=offset,
   3471     )
   3472 except NumExamplesMismatchError:
   3473     raise DatasetTransformationNotAllowedError(
   3474         "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
   3475     ) from None
File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3345, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
   3343 if with_rank:
   3344     additional_args += (rank,)
-> 3345 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
   3346 if isinstance(processed_inputs, LazyDict):
   3347     processed_inputs = {
   3348         k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
   3349     }
File /home/simen.eide@schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py:7, in tokenize_function(example)
      6 def tokenize_function(example):
----> 7     return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2802, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   2800 if not self._in_target_context_manager:
   2801     self._switch_to_input_mode()
-> 2802 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
   2803 if text_target is not None:
   2804     self._switch_to_target_mode()
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2888, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   2883         raise ValueError(
   2884             f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:"
   2885             f" {len(text_pair)}."
   2886         )
   2887     batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
-> 2888     return self.batch_encode_plus(
   2889         batch_text_or_text_pairs=batch_text_or_text_pairs,
   2890         add_special_tokens=add_special_tokens,
   2891         padding=padding,
   2892         truncation=truncation,
   2893         max_length=max_length,
   2894         stride=stride,
   2895         is_split_into_words=is_split_into_words,
   2896         pad_to_multiple_of=pad_to_multiple_of,
   2897         return_tensors=return_tensors,
   2898         return_token_type_ids=return_token_type_ids,
   2899         return_attention_mask=return_attention_mask,
   2900         return_overflowing_tokens=return_overflowing_tokens,
   2901         return_special_tokens_mask=return_special_tokens_mask,
   2902         return_offsets_mapping=return_offsets_mapping,
   2903         return_length=return_length,
   2904         verbose=verbose,
   2905         **kwargs,
   2906     )
   2907 else:
   2908     return self.encode_plus(
   2909         text=text,
   2910         text_pair=text_pair,
   (...)
   2926         **kwargs,
   2927     )
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:3079, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   3069 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
   3070 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
   3071     padding=padding,
   3072     truncation=truncation,
   (...)
   3076     **kwargs,
   3077 )
-> 3079 return self._batch_encode_plus(
   3080     batch_text_or_text_pairs=batch_text_or_text_pairs,
   3081     add_special_tokens=add_special_tokens,
   3082     padding_strategy=padding_strategy,
   3083     truncation_strategy=truncation_strategy,
   3084     max_length=max_length,
   3085     stride=stride,
   3086     is_split_into_words=is_split_into_words,
   3087     pad_to_multiple_of=pad_to_multiple_of,
   3088     return_tensors=return_tensors,
   3089     return_token_type_ids=return_token_type_ids,
   3090     return_attention_mask=return_attention_mask,
   3091     return_overflowing_tokens=return_overflowing_tokens,
   3092     return_special_tokens_mask=return_special_tokens_mask,
   3093     return_offsets_mapping=return_offsets_mapping,
   3094     return_length=return_length,
   3095     verbose=verbose,
   3096     **kwargs,
   3097 )
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:552, in PreTrainedTokenizerFast._batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
    550 for input_ids in sanitized_tokens["input_ids"]:
    551     self._eventual_warn_about_too_long_sequence(input_ids, max_length, verbose)
--> 552 return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors)
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:223, in BatchEncoding.__init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences)
    219     n_sequences = encoding[0].n_sequences
    221 self._n_sequences = n_sequences
--> 223 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:764, in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis)
    759 if key == "overflowing_tokens":
    760     raise ValueError(
    761         "Unable to create tensor returning overflowing tokens of different lengths. "
    762         "Please see if a fast version of this tokenizer is available to have this feature available."
    763     ) from e
--> 764 raise ValueError(
    765     "Unable to create tensor, you should probably activate truncation and/or padding with"
    766     " 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your"
    767     f" features (`{key}` in this case) have excessive nesting (inputs type `list` where type `int` is"
    768     " expected)."
    769 ) from e
    771 return self
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
I am following the manual on dynamic padding here: https://huggingface.co/learn/nlp-course/chapter3/2?fw=pt#dynamic-padding
When it returns lists, all is fine, but the padding fails when I ask the tokenizer to return "pt". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28066/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28065/comments | https://api.github.com/repos/huggingface/transformers/issues/28065/events | https://github.com/huggingface/transformers/pull/28065 | 2,043,507,125 | PR_kwDOCUB6oc5iGVb6 | 28,065 | Cache: `Bart` and related architectures support `Cache` objects | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in pro... | open | false | null | [] | null | 4 | 2023-12-15T11:01:47 | 2024-01-16T14:49:26 | null | MEMBER | null | # What does this PR do?
This PR applies the changes to `Bart` so it supports the new `Cache` objects. In other words, it is akin to #26681 but for encoder-decoder models.
⚠️ This is a giant PR that can't be separated due to our copy mechanism (🙃), but the review process doesn't need to be daunting. Here's my suggested review order and high-level rationale:
1. Changes in `cache_utils.py`. I've introduced `DynamicCacheWithCrossAttention`, which expands `DynamicCache` [the cache object equivalent to the previous `past_key_values` input/output] with the ability to hold a cross-attention cache. This design was intentional: most LLMs (and now even multimodal models) tend to be decoder-only, so this separation will keep the cache class for decoder-only models simpler. It also enables us to be more strict -- I've caught an unintended cache deletion in Whisper thanks to the increased specificity!
2. Changes in `modeling_bart.py`. These changes are the equivalent of the modeling changes in #26681, but for encoder-decoder models.
3. Other changes, which can be reviewed more lightly. They are either related documentation fixes, minor corrections, propagation of bart's changes through `make fix-copies` (plus a few manual changes like adding imports or updating docstrings), or test upgrades for the new `DynamicCacheWithCrossAttention`.
___________________________________________________________________________
The following tests were run locally - includes FA2 and some pretty challenging tests to ensure nothing was broken in the process:
- [x] `RUN_SLOW=1 py.test tests/models/bart/test_modeling_bart.py -vv`
- [x] `RUN_SLOW=1 py.test tests/models/mbart/test_modeling_mbart.py -vv`
- [x] `RUN_SLOW=1 py.test tests/models/whisper/test_modeling_whisper.py -vv`
👉 In any case, we should run the slow CI before merging!
<details>
<summary>Note on Whisper: same failures as in `main`, i.e. (open me)</summary>

</details>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28065/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28065",
"html_url": "https://github.com/huggingface/transformers/pull/28065",
"diff_url": "https://github.com/huggingface/transformers/pull/28065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28065.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28064/comments | https://api.github.com/repos/huggingface/transformers/issues/28064/events | https://github.com/huggingface/transformers/pull/28064 | 2,043,489,693 | PR_kwDOCUB6oc5iGReQ | 28,064 | doc: Correct spelling mistake | {
"login": "caiyili",
"id": 4177513,
"node_id": "MDQ6VXNlcjQxNzc1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4177513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caiyili",
"html_url": "https://github.com/caiyili",
"followers_url": "https://api.github.com/users/caiyili/followers",
"following_url": "https://api.github.com/users/caiyili/following{/other_user}",
"gists_url": "https://api.github.com/users/caiyili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caiyili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caiyili/subscriptions",
"organizations_url": "https://api.github.com/users/caiyili/orgs",
"repos_url": "https://api.github.com/users/caiyili/repos",
"events_url": "https://api.github.com/users/caiyili/events{/privacy}",
"received_events_url": "https://api.github.com/users/caiyili/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-15T10:53:51 | 2023-12-15T13:01:39 | 2023-12-15T13:01:39 | CONTRIBUTOR | null | # What does this PR do?
Correct the word "toekn" to "token" in a document. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28064/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28064",
"html_url": "https://github.com/huggingface/transformers/pull/28064",
"diff_url": "https://github.com/huggingface/transformers/pull/28064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28064.patch",
"merged_at": "2023-12-15T13:01:39"
} |
https://api.github.com/repos/huggingface/transformers/issues/28063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28063/comments | https://api.github.com/repos/huggingface/transformers/issues/28063/events | https://github.com/huggingface/transformers/issues/28063 | 2,043,339,236 | I_kwDOCUB6oc55yuHk | 28,063 | can not find dataset_name transformersbook/codeparrot | {
"login": "sxsxsx",
"id": 16790259,
"node_id": "MDQ6VXNlcjE2NzkwMjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16790259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sxsxsx",
"html_url": "https://github.com/sxsxsx",
"followers_url": "https://api.github.com/users/sxsxsx/followers",
"following_url": "https://api.github.com/users/sxsxsx/following{/other_user}",
"gists_url": "https://api.github.com/users/sxsxsx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sxsxsx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sxsxsx/subscriptions",
"organizations_url": "https://api.github.com/users/sxsxsx/orgs",
"repos_url": "https://api.github.com/users/sxsxsx/repos",
"events_url": "https://api.github.com/users/sxsxsx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sxsxsx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-15T09:54:24 | 2024-01-23T08:03:56 | 2024-01-23T08:03:56 | NONE | null | ### System Info
python scripts/preprocessing.py \
--dataset_name transformersbook/codeparrot \
--output_dir codeparrot-clean
can not find dataset_name transformersbook/codeparrot
The error is:
Traceback (most recent call last):
File "scripts/preprocessing.py", line 171, in <module>
ds = load_dataset(args.dataset_name, split="train")
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1627, in load_dataset
builder_instance = load_dataset_builder(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1464, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1174, in dataset_module_factory
raise e1 from None
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1156, in dataset_module_factory
return CommunityDatasetModuleFactoryWithoutScript(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 801, in get_module
else get_patterns_in_dataset_repository(dataset_info)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 473, in get_patterns_in_dataset_repository
return _get_data_files_patterns(resolver)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 101, in _get_data_files_patterns
data_files = pattern_resolver(pattern)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 305, in _resolve_single_pattern_in_dataset_repository
glob_iter = [PurePath(filepath) for filepath in fs.glob(pattern) if fs.isfile(filepath)]
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/fsspec/spec.py", line 606, in glob
pattern = glob_translate(path + ("/" if ends_with_sep else ""))
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/fsspec/utils.py", line 734, in glob_translate
raise ValueError(
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
scripts/preprocessing.py
### Expected behavior
The script runs successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28063/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28062/comments | https://api.github.com/repos/huggingface/transformers/issues/28062/events | https://github.com/huggingface/transformers/pull/28062 | 2,043,336,160 | PR_kwDOCUB6oc5iFuY8 | 28,062 | Remove SpeechT5 deprecated argument | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-15T09:53:04 | 2023-12-15T12:15:06 | 2023-12-15T12:15:06 | COLLABORATOR | null | # What does this PR do?
`stop_labels` is an unused argument that was supposed to be removed in `4.30.0`, here I remove it!
cc @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28062/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28062",
"html_url": "https://github.com/huggingface/transformers/pull/28062",
"diff_url": "https://github.com/huggingface/transformers/pull/28062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28062.patch",
"merged_at": "2023-12-15T12:15:06"
} |
https://api.github.com/repos/huggingface/transformers/issues/28061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28061/comments | https://api.github.com/repos/huggingface/transformers/issues/28061/events | https://github.com/huggingface/transformers/pull/28061 | 2,043,330,623 | PR_kwDOCUB6oc5iFtJJ | 28,061 | [`Modeling` / `Mixtral`] Fix GC + PEFT issues with Mixtral | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-15T09:50:42 | 2023-12-15T10:34:58 | 2023-12-15T10:34:42 | CONTRIBUTOR | null | # What does this PR do?
Applies the same fix presented in #28031 for Mixtral, specifically addresses: https://github.com/huggingface/transformers/issues/28023#issuecomment-1856556941
cc @amyeroberts
Fixes: https://github.com/huggingface/trl/issues/1088 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28061/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28061",
"html_url": "https://github.com/huggingface/transformers/pull/28061",
"diff_url": "https://github.com/huggingface/transformers/pull/28061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28061.patch",
"merged_at": "2023-12-15T10:34:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/28060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28060/comments | https://api.github.com/repos/huggingface/transformers/issues/28060/events | https://github.com/huggingface/transformers/pull/28060 | 2,043,320,287 | PR_kwDOCUB6oc5iFq0H | 28,060 | Skip M4T `test_retain_grad_hidden_states_attentions` | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-15T09:46:02 | 2023-12-15T13:40:03 | 2023-12-15T13:39:16 | COLLABORATOR | null | # What does this PR do?
After investigating the reasons for the `test_retain_grad_hidden_states_attentions` flaky failure, I realized the speech encoder attentions can be `None` with a non-zero probability when `training=True`. Skipping the test is the fastest fix.
Fixes #28036
cc @gante @amyeroberts @ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28060/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28060/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28060",
"html_url": "https://github.com/huggingface/transformers/pull/28060",
"diff_url": "https://github.com/huggingface/transformers/pull/28060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28060.patch",
"merged_at": "2023-12-15T13:39:16"
} |
https://api.github.com/repos/huggingface/transformers/issues/28059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28059/comments | https://api.github.com/repos/huggingface/transformers/issues/28059/events | https://github.com/huggingface/transformers/pull/28059 | 2,043,291,530 | PR_kwDOCUB6oc5iFkT_ | 28,059 | [Flax LLaMA] Fix attn dropout | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-15T09:32:52 | 2023-12-15T13:46:06 | 2023-12-15T10:57:37 | CONTRIBUTOR | null | # What does this PR do?
Attention dropout was not activated in Flax LLaMA, despite it being so in PyTorch LLaMA: https://github.com/huggingface/transformers/blob/1a585c1222a56bcaecc070966d558d4a9d862e83/src/transformers/models/llama/modeling_llama.py#L430
=> this PR unifies the implementations across frameworks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28059/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28059",
"html_url": "https://github.com/huggingface/transformers/pull/28059",
"diff_url": "https://github.com/huggingface/transformers/pull/28059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28059.patch",
"merged_at": "2023-12-15T10:57:37"
} |
https://api.github.com/repos/huggingface/transformers/issues/28058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28058/comments | https://api.github.com/repos/huggingface/transformers/issues/28058/events | https://github.com/huggingface/transformers/issues/28058 | 2,043,256,422 | I_kwDOCUB6oc55yZ5m | 28,058 | Mixtral: Reduce and Increase Expert Models | {
"login": "minato-ellie",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minato-ellie",
"html_url": "https://github.com/minato-ellie",
"followers_url": "https://api.github.com/users/minato-ellie/followers",
"following_url": "https://api.github.com/users/minato-ellie/following{/other_user}",
"gists_url": "https://api.github.com/users/minato-ellie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minato-ellie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minato-ellie/subscriptions",
"organizations_url": "https://api.github.com/users/minato-ellie/orgs",
"repos_url": "https://api.github.com/users/minato-ellie/repos",
"events_url": "https://api.github.com/users/minato-ellie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minato-ellie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | 5 | 2023-12-15T09:16:40 | 2023-12-16T15:55:49 | null | NONE | null | ### Feature request
Add methods to MixtralSparseMoeBlock, MixtralDecoderLayer, and MixtralModel for reducing (and enlarging) the number of expert models.
Implement a mechanism to decrease the number of expert models by removing corresponding rows in gate weights. This should enable the removal expert models by id.
https://github.com/huggingface/transformers/blob/1a585c1222a56bcaecc070966d558d4a9d862e83/src/transformers/models/mixtral/modeling_mixtral.py#L688-L710
### Motivation
This will allow scaling the model size down or up from a pre-trained model while preserving existing weights, eliminating the need to retrain from scratch.
### Your contribution
I am willing to contribute a PR, but need some time.
I would like to know whether such a PR is likely to be accepted before I start working on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28058/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28057/comments | https://api.github.com/repos/huggingface/transformers/issues/28057/events | https://github.com/huggingface/transformers/pull/28057 | 2,042,915,548 | PR_kwDOCUB6oc5iEScS | 28,057 | fix ffmpeg_microphone under WSL2 (use pulseaudio) | {
"login": "jamon",
"id": 272949,
"node_id": "MDQ6VXNlcjI3Mjk0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/272949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamon",
"html_url": "https://github.com/jamon",
"followers_url": "https://api.github.com/users/jamon/followers",
"following_url": "https://api.github.com/users/jamon/following{/other_user}",
"gists_url": "https://api.github.com/users/jamon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamon/subscriptions",
"organizations_url": "https://api.github.com/users/jamon/orgs",
"repos_url": "https://api.github.com/users/jamon/repos",
"events_url": "https://api.github.com/users/jamon/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-15T05:17:14 | 2024-01-15T16:10:45 | null | NONE | null | # Fix ffmpeg_microphone under WSL2
This attempts to detect if it is running under WSL2 and defaults to using pulseaudio with an input of "RDPSource" in order to work with WSL2.
@Narsil - tagging you since you contributed most of this file. I'm also happy to update this to just accept the format and input as parameters if you prefer (and either keep or roll back the defaults for WSL2). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28057/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28057",
"html_url": "https://github.com/huggingface/transformers/pull/28057",
"diff_url": "https://github.com/huggingface/transformers/pull/28057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28057.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28056/comments | https://api.github.com/repos/huggingface/transformers/issues/28056/events | https://github.com/huggingface/transformers/issues/28056 | 2,042,880,695 | I_kwDOCUB6oc55w-K3 | 28,056 | Transformers 4.36 use_cache issue | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 17 | 2023-12-15T04:33:58 | 2024-01-19T12:23:44 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Sorry that I don't really have a minimal reproducer here as I'm in another training framework, but I still think this might be useful for you.
Running training on llama2 7b, with activation checkpointing, has some issues in 4.36. Comparing to training with 4.35.2
- if using flash attention, training produces higher loss, is slower, and uses more memory
- if not using flash attention, crashes with `ValueError: Attention mask should be of size (2, 1, 4096, 8192), but is torch.Size([2, 1, 4096, 4096])`
If I explicitly set `use_cache=False` (shouldn't have any impact during training because there is no cache), results with 4.36 are similar to 4.35.2.
### Expected behavior
No regression from 4.35.2 -> 4.36. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28056/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28055/comments | https://api.github.com/repos/huggingface/transformers/issues/28055/events | https://github.com/huggingface/transformers/issues/28055 | 2,042,841,032 | I_kwDOCUB6oc55w0fI | 28,055 | Llama-2-70b-chat-hf get worse result than Llama-2-70B-Chat-GPTQ | {
"login": "fancyerii",
"id": 5372812,
"node_id": "MDQ6VXNlcjUzNzI4MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5372812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fancyerii",
"html_url": "https://github.com/fancyerii",
"followers_url": "https://api.github.com/users/fancyerii/followers",
"following_url": "https://api.github.com/users/fancyerii/following{/other_user}",
"gists_url": "https://api.github.com/users/fancyerii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fancyerii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fancyerii/subscriptions",
"organizations_url": "https://api.github.com/users/fancyerii/orgs",
"repos_url": "https://api.github.com/users/fancyerii/repos",
"events_url": "https://api.github.com/users/fancyerii/events{/privacy}",
"received_events_url": "https://api.github.com/users/fancyerii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-15T03:45:40 | 2024-01-22T08:04:10 | 2024-01-22T08:04:10 | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.9.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to use Llama-2-70b-chat-hf as a zero-shot text classifier for my datasets. Here are my setups.
1. vLLM + Llama-2-70b-chat-hf
I used vLLM as my inference engine and ran it with:
```
python api_server.py --model /nas/lili/models_hf/70B-chat --tensor-parallel-size 8
```
api_server.py is the [example file](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/api_server.py) and I do not modify anything.
client code:
```
data = {
"prompt": prompt,
"use_beam_search": False,
"n": 1,
"temperature": 0.1,
"max_tokens": 128,
}
res = _post(data)
return eval(res.content)['text'][0].strip()
```
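As an aside on the client snippet above: parsing the HTTP response with `eval` works but is unsafe, and `json.loads` is a safer drop-in. A minimal sketch with a made-up payload (the response body shape is only assumed from the code above):

```python
import json

# Hypothetical response body, shaped like the api_server output used above.
content = b'{"text": ["Home & Kitchen"]}'

# json.loads accepts bytes directly and does not execute arbitrary code,
# unlike eval(res.content).
parsed = json.loads(content)
label = parsed["text"][0].strip()
print(label)  # Home & Kitchen
```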
And my prompt is:
```
You will be provided with a product name. The product name will be delimited by 3 backticks, i.e.```.
Classify the product into a primary category.
Primary categories:
Clothing, Shoes & Jewelry
Automotive
Home & Kitchen
Beauty & Personal Care
Electronics
Sports & Outdoors
Patio, Lawn & Garden
Handmade Products
Grocery & Gourmet Food
Health & Household
Musical Instruments
Toys & Games
Baby Products
Pet Supplies
Tools & Home Improvement
Appliances
Office Products
Cell Phones & Accessories
Product name:```Cambkatl Men's Funny 3D Fake Abs T-Shirts Casual Short Sleeve Chest Graphic Printed Crewneck Novelty Pullover Tee Tops```.
Only answer the category name, no other words.
```
The classification accuracy is 0.352. And I also tried to use the same prompt and parameters (temperature and max_tokens) to call ChatGPT and GPT-4; they got 0.68 and 0.72 respectively.
Llama 2 shouldn't be significantly worse than ChatGPT, so something must be wrong. I suspected it might be related to vLLM, so I tried the following method.
2. Transformers + flask
It's not a good serving method (maybe I should use TGI), but I think it makes the problem easier to locate.
```
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer_path = "/nas/lili/models_hf/70B-chat-hf/"
model_path = "/nas/lili/models_hf/70B-chat-hf/"
tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)
model = LlamaForCausalLM.from_pretrained(
model_path,
#load_in_8bit=True,
#torch_dtype=torch.float16,
device_map="auto",
)
from flask import Flask, request, jsonify
from flask_cors import CORS
from transformers.generation import GenerationConfig
app = Flask(__name__)
CORS(app)
@app.route('/generate', methods=['POST'])
def generate():
json = request.get_json(force=True)
prompt = json['prompt']
num_beams = json.get('num_beams')
temperature = json.get('temperature')
max_tokens = json.get('max_tokens')
do_sample = json.get('do_sample')
top_k = json.get('top_k') or 10
model_inputs = tokenizer(prompt, return_tensors='pt').to('cuda')
cfg = GenerationConfig(
num_beams = num_beams,
max_new_tokens = max_tokens,
temperature = temperature,
do_sample = do_sample,
top_k = top_k
)
output = model.generate(**model_inputs, generation_config=cfg, pad_token_id=tokenizer.eos_token_id)
input_length = model_inputs["input_ids"].shape[1]
output = tokenizer.decode(output[0][input_length:], skip_special_tokens=True)
output = output.strip()
return jsonify({'text': [output]})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
```
And the client code:
```
data = {
"prompt": prompt,
"do_sample": True,
"temperature": 0.1,
"max_tokens": 128,
"num_beams":5
}
res = _post(data, url=self.url)
return eval(res.content)['text'][0].strip()
```
This time I used a large num_beams=5 (I should have used 1, but I made a mistake).
I used the same prompt as before, and the accuracy is 0.368. It's not much better than using vLLM (the gain may come from the larger num_beams).
Now it seems the problem is not with vLLM. So what's wrong? Is Llama 2 70b a very bad model? I don't think so. So I tried a third method.
3. Transformers (using Llama-2-70B-Chat-GPTQ) + flask
The setup is the same as in method 2; I only change the model:
```
tokenizer_path = "/nas/lili/models_hf/7B-chat/"
model_path = "/nas/lili/models_hf/Llama-2-70B-chat-GPTQ/"
```
I saved Llama-2-70B-chat-GPTQ with `save_pretrained` and forgot to save the tokenizer, so I use the tokenizer of Llama 2 7B-chat (I think the Llama 2 tokenizer is the same across model sizes). This time I got a better result of 0.56. It's not as good as ChatGPT, but it is significantly better than the uncompressed Llama-2-70B-chat.
So I am confused that the original Llama-2-70B-chat is 20% worse than Llama-2-70B-chat-GPTQ. Methods 2 and 3 are exactly the same except for the model.
### Expected behavior
Llama 2 70b should get a similar or better result than Llama-2-70B-chat-GPTQ. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28055/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28054/comments | https://api.github.com/repos/huggingface/transformers/issues/28054/events | https://github.com/huggingface/transformers/pull/28054 | 2,042,669,233 | PR_kwDOCUB6oc5iDfJY | 28,054 | Make GPT2 traceable in meta state | {
"login": "kwen2501",
"id": 6676466,
"node_id": "MDQ6VXNlcjY2NzY0NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6676466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kwen2501",
"html_url": "https://github.com/kwen2501",
"followers_url": "https://api.github.com/users/kwen2501/followers",
"following_url": "https://api.github.com/users/kwen2501/following{/other_user}",
"gists_url": "https://api.github.com/users/kwen2501/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kwen2501/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwen2501/subscriptions",
"organizations_url": "https://api.github.com/users/kwen2501/orgs",
"repos_url": "https://api.github.com/users/kwen2501/repos",
"events_url": "https://api.github.com/users/kwen2501/events{/privacy}",
"received_events_url": "https://api.github.com/users/kwen2501/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-14T23:48:54 | 2023-12-15T14:45:32 | 2023-12-15T14:45:32 | CONTRIBUTOR | null | # What does this PR do?
Before this PR, if we created GPT2 on the "meta" device and traced it with dynamo or torch.export, the following line would raise an error:
```
mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
```
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_method to(*(FakeTensor(..., size=()), device(type='meta')), **{}):
Creating a new Tensor subclass FakeTensor but the raw Tensor object is already associated to a python object of type FakeTensor
```
That is, tracing a `.to("meta")` call on a meta tensor is not yet supported by PT2, even though we are just tracing the code.
A quick workaround is to move the device from the `.to` call into the tensor constructor, which is what this PR does.
Longer term, it would be best for dynamo/export to not error out when tracing through the `.to` method in this situation.
(I will file an issue against PyTorch.)
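A minimal sketch of the workaround described above (simplified from the GPT2 line; `min_value` and the device here are just stand-ins for `mask_value` and `attn_weights.device`):

```python
import torch

attn_dtype = torch.float32
device = torch.device("meta")  # stand-in for attn_weights.device
min_value = torch.finfo(attn_dtype).min  # stand-in for mask_value

# Before: build on the default device, then move. The traced `.to` call is
# what trips up dynamo/export when the model lives on the meta device.
# mask_value = torch.full([], min_value, dtype=attn_dtype).to(device)

# After: pass the device to the factory function directly, so no `.to` call
# needs to be traced.
mask_value = torch.full([], min_value, dtype=attn_dtype, device=device)
print(mask_value.device)  # meta
```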
## Who can review?
@younesbelkada
@muellerzr @SunMarc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28054/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28054/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28054",
"html_url": "https://github.com/huggingface/transformers/pull/28054",
"diff_url": "https://github.com/huggingface/transformers/pull/28054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28054.patch",
"merged_at": "2023-12-15T14:45:32"
} |
https://api.github.com/repos/huggingface/transformers/issues/28052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28052/comments | https://api.github.com/repos/huggingface/transformers/issues/28052/events | https://github.com/huggingface/transformers/issues/28052 | 2,042,532,506 | I_kwDOCUB6oc55vpKa | 28,052 | ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour. | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2023-12-14T21:29:02 | 2024-01-26T08:37:06 | 2024-01-26T08:37:06 | CONTRIBUTOR | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
running `llamafromcfg = transformers.AutoModelForCausalLM.from_config(llamacfg, use_flash_attention_2=True)` after `llamacfg = transformers.AutoConfig.from_pretrained('meta-llama/Llama-2-7b-hf')` results in `ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.`.
### Expected behavior
I understand that flash attention requires fp16/bf16 for _computation_, but I don't believe I should be prevented from instantiating the model in fp32. I will use automatic mixed precision later for computation.
Please let me know what I'm missing/what the intended usage is. Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28052/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28052/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28051/comments | https://api.github.com/repos/huggingface/transformers/issues/28051/events | https://github.com/huggingface/transformers/pull/28051 | 2,042,351,489 | PR_kwDOCUB6oc5iCYmn | 28,051 | [LLaVa] Add past_key_values to _skip_keys_device_placement to fix multi-GPU dispatch | {
"login": "aismlv",
"id": 13088690,
"node_id": "MDQ6VXNlcjEzMDg4Njkw",
"avatar_url": "https://avatars.githubusercontent.com/u/13088690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aismlv",
"html_url": "https://github.com/aismlv",
"followers_url": "https://api.github.com/users/aismlv/followers",
"following_url": "https://api.github.com/users/aismlv/following{/other_user}",
"gists_url": "https://api.github.com/users/aismlv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aismlv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aismlv/subscriptions",
"organizations_url": "https://api.github.com/users/aismlv/orgs",
"repos_url": "https://api.github.com/users/aismlv/repos",
"events_url": "https://api.github.com/users/aismlv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aismlv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-14T19:38:31 | 2023-12-15T14:05:20 | 2023-12-15T14:05:20 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes #27917
Fixes the cache and (key, value) tensors ending up on different devices when using accelerate's dispatch in `LlavaPreTrainedModel` (and `VipLlavaPreTrainedModel`) by adding the `_skip_keys_device_placement = "past_key_values"` attribute to the class, similar to how Llama handles this issue:
```
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/cache_utils.py", line 127, in update
self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28051/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28051/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28051",
"html_url": "https://github.com/huggingface/transformers/pull/28051",
"diff_url": "https://github.com/huggingface/transformers/pull/28051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28051.patch",
"merged_at": "2023-12-15T14:05:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/28050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28050/comments | https://api.github.com/repos/huggingface/transformers/issues/28050/events | https://github.com/huggingface/transformers/pull/28050 | 2,042,242,760 | PR_kwDOCUB6oc5iCAsK | 28,050 | [`EfficientSAM`] Add EfficientSAM to the library | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-12-14T18:25:57 | 2024-01-25T11:17:47 | null | CONTRIBUTOR | null | # What does this PR do?
As per the title, this PR adds EfficientSAM, a new architecture from https://github.com/yformer/EfficientSAM that is similar to the SAM architecture but with the benefit of being much smaller.
Draft for now
@xenova @yformer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28050/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28050/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28050",
"html_url": "https://github.com/huggingface/transformers/pull/28050",
"diff_url": "https://github.com/huggingface/transformers/pull/28050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28050.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28049/comments | https://api.github.com/repos/huggingface/transformers/issues/28049/events | https://github.com/huggingface/transformers/issues/28049 | 2,042,219,768 | I_kwDOCUB6oc55ucz4 | 28,049 | Transformers 4.36 doesn't work with `microsoft/phi-1.5` unless you pass in `trust_remote_code=True` | {
"login": "arnavgarg1",
"id": 106701836,
"node_id": "U_kgDOBlwkDA",
"avatar_url": "https://avatars.githubusercontent.com/u/106701836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnavgarg1",
"html_url": "https://github.com/arnavgarg1",
"followers_url": "https://api.github.com/users/arnavgarg1/followers",
"following_url": "https://api.github.com/users/arnavgarg1/following{/other_user}",
"gists_url": "https://api.github.com/users/arnavgarg1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnavgarg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnavgarg1/subscriptions",
"organizations_url": "https://api.github.com/users/arnavgarg1/orgs",
"repos_url": "https://api.github.com/users/arnavgarg1/repos",
"events_url": "https://api.github.com/users/arnavgarg1/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnavgarg1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 20 | 2023-12-14T18:11:06 | 2024-01-22T17:26:07 | 2024-01-22T17:26:07 | NONE | null | ### System Info
Typically, when the transformers library adds a new supported model, we no longer need to pass in `trust_remote_code=True` during model or tokenizer initialization.
However, even with the latest version of the transformers package (4.36.1), I see that I need to do it when I try using `microsoft/phi-1.5` to actually get the model to load and for the einops weights to get converted to torch weights:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1.5", trust_remote_code=True)
```
I took a look at the [PR](https://github.com/huggingface/transformers/pull/26170/files#diff-74ab0ba9fffc06389f9d614e5da01ee93db3e2f0494d1e96e7de92c0d1d288fb) that added Phi. Is the expectation that we should just be using `susnato/phi-1_5_dev` instead of `microsoft/phi-1.5` going forward? If yes, why is this the case? If not, how can I use the original `microsoft/phi-1.5` model without setting `trust_remote_code` to True?
Thanks a bunch! Super excited that Phi is now a well supported model in the transformers ecosystem!
### Who can help?
@ArthurZucker @younesbelkada @susa
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1.5", trust_remote_code=True)
```
### Expected behavior
I was expecting that, like all models that get first-class support in new major transformers releases, Phi would work the same way, but somehow that doesn't seem to be the case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28049/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28048/comments | https://api.github.com/repos/huggingface/transformers/issues/28048/events | https://github.com/huggingface/transformers/pull/28048 | 2,042,137,173 | PR_kwDOCUB6oc5iBpkz | 28,048 | Remove warning when Annotion enum is created | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-14T17:17:36 | 2023-12-14T19:50:21 | 2023-12-14T19:50:20 | COLLABORATOR | null | # What does this PR do?
The Annotion enum was deprecated in #26941.
I asked for there to be a deprecation warning to let users know if they chose to use the enum. This was overly defensive, done in light of recent [complaints of objects being removed/moved from the library](https://github.com/huggingface/transformers/issues/25948#issuecomment-1758537251) (even if they were never meant to be used directly).
However, a complete oversight on my part is that the enum's `__init__` runs any time any object from the `image_utils` module is imported - resulting in verbose warning messages unrelated to what the user was trying to do - my bad.
This PR removes the warning from the enum itself and adds it to a validation check that happens on annotations.
It ultimately means that we might break things when we remove `Annotion` - but this is very unlikely, and the resolution would be simple and quick.
Partially resolves #28042
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28048/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28048",
"html_url": "https://github.com/huggingface/transformers/pull/28048",
"diff_url": "https://github.com/huggingface/transformers/pull/28048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28048.patch",
"merged_at": "2023-12-14T19:50:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/28047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28047/comments | https://api.github.com/repos/huggingface/transformers/issues/28047/events | https://github.com/huggingface/transformers/issues/28047 | 2,042,087,881 | I_kwDOCUB6oc55t8nJ | 28,047 | Don't listify batched pipeline output from input list | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-14T16:53:41 | 2024-01-15T16:02:45 | null | COLLABORATOR | null | ### Feature request
Currently, the output that you get from a pipeline seems to depend on the input type. While intuitively that makes sense for distinct primitive types, a difference also seems to be implemented for generators vs. lists vs. Datasets. I'd argue that this leads to unexpected behavior.
### Motivation
We can use batching in any pipeline, which [according to the documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching) enables "streaming". I interpreted this as: the pipeline will return a generator that yields outputs one by one. However, looking at the source code, this does not seem to be the case.
First of all, it depends on the input format of what is passed to the pipeline. Interestingly, when the passed input type is a list (rather than a Dataset or a Generator), the output is listified:
https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/pipelines/base.py#L1116-L1122
I am not sure why that is the case. The input type can be disconnected from the output type, so why are not all iterables handled in the same manner? Is it to have continuity between input and output types? If that is the case then that is okay, but to me it feels counter-intuitive: if I have a list of samples (like a dataset, just in a list-format), why would that need to be different from a Dataset or Generator as input type?
Small repro:
```python
from transformers import pipeline
model_name = "microsoft/DialoGPT-small"
pipe = pipeline("conversational", model=model_name, device_map="auto")
list_of_messages = [[{"role": "system", "content": "You're a good assistant!"},
{"role": "user", "content": "What is the meaning of 42?"}],
[{"role": "user", "content": "What is the meaning of life?"}]]
print(type(pipe(list_of_messages)))
# <class 'list'>
generator_of_msgs = (msg for msg in list_of_messages)
print(type(pipe(generator_of_msgs)))
# <class 'transformers.pipelines.pt_utils.PipelineIterator'>
```
### Your contribution
I do not know what the best option is. It took me quite some digging before I understood what was happening in the output types so I feel that this could be standardized. Personally I'd expect the `PipelineIterator` NOT to be listified. I do not see any reason to wait for all processing to complete, except for continuity with the input type but I don't know if that is important. For backwards compatibility an argument can be added to `Pipeline.__call__`, `no_listify_for_list` or something like that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28047/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28046/comments | https://api.github.com/repos/huggingface/transformers/issues/28046/events | https://github.com/huggingface/transformers/pull/28046 | 2,042,000,996 | PR_kwDOCUB6oc5iBLr7 | 28,046 | Replace build() with build_in_name_scope() for some TF tests | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-14T16:02:54 | 2023-12-14T17:42:26 | 2023-12-14T17:42:25 | MEMBER | null | Should have included this in the TF `build()` PR but I missed it until now - some of the TF tests should use `build_in_name_scope()` to ensure layer names aren't changed by that PR!
This fix is just for our tests - users shouldn't be affected by the `build()` PR unless they're manually calling `build()` on models and then trying to crossload weights into them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28046/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28046",
"html_url": "https://github.com/huggingface/transformers/pull/28046",
"diff_url": "https://github.com/huggingface/transformers/pull/28046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28046.patch",
"merged_at": "2023-12-14T17:42:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28045/comments | https://api.github.com/repos/huggingface/transformers/issues/28045/events | https://github.com/huggingface/transformers/issues/28045 | 2,041,955,742 | I_kwDOCUB6oc55tcWe | 28,045 | AttributeError: 'tuple' object has no attribute 'to_legacy_cache' | {
"login": "wuxb45",
"id": 564235,
"node_id": "MDQ6VXNlcjU2NDIzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/564235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuxb45",
"html_url": "https://github.com/wuxb45",
"followers_url": "https://api.github.com/users/wuxb45/followers",
"following_url": "https://api.github.com/users/wuxb45/following{/other_user}",
"gists_url": "https://api.github.com/users/wuxb45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wuxb45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wuxb45/subscriptions",
"organizations_url": "https://api.github.com/users/wuxb45/orgs",
"repos_url": "https://api.github.com/users/wuxb45/repos",
"events_url": "https://api.github.com/users/wuxb45/events{/privacy}",
"received_events_url": "https://api.github.com/users/wuxb45/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 14 | 2023-12-14T15:37:56 | 2024-01-18T08:26:10 | null | NONE | null | ### System Info
transformers 4.36.1.
```
transformers/models/llama/modeling_llama.py", line 1093, in forward
next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'
```
This error pops up when running inference with llama 2 model with the new tranformers 4.36.1. I didn't test 4.36.0. It was running correctly with 4.35.x.
This seems to be related to changes from #26681, and commit 633215b.
@ArthurZucker and @younesbelkada according to suggestions in "Who can help?"
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Sorry that I don't have an easy repro now. Here is the relevant stack trace:
```
File "###transformers/generation/utils.py", line 1764, in generate
return self.sample(
^^^^^^^^^^^^
File "###transformers/generation/utils.py", line 2861, in sample
outputs = self(
^^^^^
File "###torch/nn/modules/module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "###transformers/models/llama/modeling_llama.py", line 1181, in forward
outputs = self.model(
^^^^^^^^^^^
File "###torch/nn/modules/module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "###transformers/models/llama/modeling_llama.py", line 1093, in forward
next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'
```
### Expected behavior
Crash with the provided stack track. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28045/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28045/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28044/comments | https://api.github.com/repos/huggingface/transformers/issues/28044/events | https://github.com/huggingface/transformers/pull/28044 | 2,041,925,830 | PR_kwDOCUB6oc5iA7Kp | 28,044 | Insertion Constraint | {
"login": "massabaali7",
"id": 100831623,
"node_id": "U_kgDOBgKRhw",
"avatar_url": "https://avatars.githubusercontent.com/u/100831623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/massabaali7",
"html_url": "https://github.com/massabaali7",
"followers_url": "https://api.github.com/users/massabaali7/followers",
"following_url": "https://api.github.com/users/massabaali7/following{/other_user}",
"gists_url": "https://api.github.com/users/massabaali7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/massabaali7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massabaali7/subscriptions",
"organizations_url": "https://api.github.com/users/massabaali7/orgs",
"repos_url": "https://api.github.com/users/massabaali7/repos",
"events_url": "https://api.github.com/users/massabaali7/events{/privacy}",
"received_events_url": "https://api.github.com/users/massabaali7/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-12-14T15:22:05 | 2024-01-10T09:20:44 | null | NONE | null | My contribution lies mainly in the constraints class, which is represented in the constrained beam search. It enables conditional token injection in the constrained beam search. The new constraint allows the insertion of one or more tokens from a list of words into the output. Also, you have the ability to not insert anything.
Example:
insertfromListOfWords = ["uh","um","exactly", "yes"]
possible_outputs == [
"The woman went exactly to school.",
"um the woman went to um uh school",
"The woman went to school",
] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28044/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28044",
"html_url": "https://github.com/huggingface/transformers/pull/28044",
"diff_url": "https://github.com/huggingface/transformers/pull/28044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28044.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28043/comments | https://api.github.com/repos/huggingface/transformers/issues/28043/events | https://github.com/huggingface/transformers/pull/28043 | 2,041,888,381 | PR_kwDOCUB6oc5iAzA7 | 28,043 | [`FA-2`] Fix fa-2 issue when passing `config` to `from_pretrained` | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-14T15:02:39 | 2023-12-15T10:08:36 | 2023-12-15T10:08:27 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/28038
Some users pass the `config` attribute to `from_pretrained` in order to modify the model's hyperparameters and change the underlying architecture.
Note in previous versions before the attention refactor, it was possible to perform
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM, AutoConfig
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
config=config,
torch_dtype=torch.bfloat16,
use_flash_attention_2="flash_attention_2",
low_cpu_mem_usage=True,
)
```
Now users get an issue while trying to perform the operation above because the logic of handling model's config for fa2 changed a bit.
I propose a simple fix to mitigate this issue, which is to overwrite the `_attn_implementation` attribute of `config` only when it has been passed by the user. I can confirm that with this fix the snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM, AutoConfig
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
config=config,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
low_cpu_mem_usage=True,
)
```
Works as expected, as in earlier versions of transformers.
cc @amyeroberts @fxmarty | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28043/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28043",
"html_url": "https://github.com/huggingface/transformers/pull/28043",
"diff_url": "https://github.com/huggingface/transformers/pull/28043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28043.patch",
"merged_at": "2023-12-15T10:08:27"
} |
https://api.github.com/repos/huggingface/transformers/issues/28042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28042/comments | https://api.github.com/repos/huggingface/transformers/issues/28042/events | https://github.com/huggingface/transformers/issues/28042 | 2,041,878,900 | I_kwDOCUB6oc55tJl0 | 28,042 | Confusing deprecation / warning messages | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-14T14:58:01 | 2023-12-20T16:24:55 | 2023-12-20T16:24:55 | MEMBER | null | ### System Info
```
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.4.1
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@amyeroberts for Vision
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When doing:
```py
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```
I'm getting a couple of confusing error messages since transformers 4.36:
```
`AnnotionFormat` is deprecated and will be removed in v4.38. Please use `transformers.image_utils.AnnotationFormat` instead
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
```
None of these variables (`AnnotionFormat` or `text_config_dict`) are defined anywhere in `diffusers` or in the configs: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/text_encoder/config.json
It seems like something inside Transformers triggers these deprecation warnings which makes the messages very confusing and non-actionable for users. Also since it happens every time `from_pretrained(...)` is called, it clutters the CLI quite a bit
### Expected behavior
No warnings or clearer instructions and what needs to be changed to remove these warnings | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28042/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28042/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28041/comments | https://api.github.com/repos/huggingface/transformers/issues/28041/events | https://github.com/huggingface/transformers/issues/28041 | 2,041,869,865 | I_kwDOCUB6oc55tHYp | 28,041 | Loading a model fails if it has been compiled with torch.compile | {
"login": "peacefulotter",
"id": 32218033,
"node_id": "MDQ6VXNlcjMyMjE4MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/32218033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peacefulotter",
"html_url": "https://github.com/peacefulotter",
"followers_url": "https://api.github.com/users/peacefulotter/followers",
"following_url": "https://api.github.com/users/peacefulotter/following{/other_user}",
"gists_url": "https://api.github.com/users/peacefulotter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peacefulotter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peacefulotter/subscriptions",
"organizations_url": "https://api.github.com/users/peacefulotter/orgs",
"repos_url": "https://api.github.com/users/peacefulotter/repos",
"events_url": "https://api.github.com/users/peacefulotter/events{/privacy}",
"received_events_url": "https://api.github.com/users/peacefulotter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-14T14:54:02 | 2024-01-22T08:47:59 | 2024-01-22T08:04:15 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
triton: 2.1.0
Ubuntu 22.04 (jammy, LTS)
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Any class that inherits `nn.Module`
```py
import torch
import torch.nn as nn
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.module = nn.Sequential(
nn.Linear(4, 2),
nn.ReLU(),
)
def forward(self, x):
return self.module(x)
# Instantiate the model, and compile it
model = MyModel()
model = torch.compile(model)
# ... train or do whatever
# save it (using safetensors in my case)
import safetensors.torch as st
st.save_model(model, "model.safetensors")
# And load the weights, using safetensor as well
# The following throws a RuntimeError (see below)
st.load_model(model, "model.safetensors")
```
### Expected behavior
I am encountering a similar issue as in https://github.com/huggingface/transformers/issues/25205, where after saving a model that has been compiled using `torch.compile`, `safetensors.load_model` throws:
```
RuntimeError: Error(s) in loading state_dict for DummyModel:
Missing key(s) in state_dict: "module.0.bias", "module.0.weight", ...
Unexpected key(s) in state_dict: "_orig_mod.module.0.bias", "_orig_mod.module.0.weight", ...
```
In this case, the model has a `nn.Sequential` called `module`. As one can see, loading the weights changes the layer names by adding `_orig_mod` at the front.
A fix I found is to unwrap the model, but this only works if you know the module names a priori:
```py
# https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L4788C1-L4799C21
def unwrap_model(model: nn.Module) -> nn.Module:
"""
Recursively unwraps a model from potential containers (as used in distributed training).
Args:
model (`torch.nn.Module`): The model to unwrap.
"""
# since there could be multiple levels of wrapping, unwrap recursively
if hasattr(model, "module"):
return unwrap_model(model.module)
else:
return model
# ...
st.save_model(unwrap_model(model), "model.safetensors")
st.load_model(unwrap_model(model), "model.safetensors")
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28041/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28039/comments | https://api.github.com/repos/huggingface/transformers/issues/28039/events | https://github.com/huggingface/transformers/issues/28039 | 2,041,771,174 | I_kwDOCUB6oc55svSm | 28,039 | Unable to load models | {
"login": "dwojcik92",
"id": 10101471,
"node_id": "MDQ6VXNlcjEwMTAxNDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/10101471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwojcik92",
"html_url": "https://github.com/dwojcik92",
"followers_url": "https://api.github.com/users/dwojcik92/followers",
"following_url": "https://api.github.com/users/dwojcik92/following{/other_user}",
"gists_url": "https://api.github.com/users/dwojcik92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwojcik92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwojcik92/subscriptions",
"organizations_url": "https://api.github.com/users/dwojcik92/orgs",
"repos_url": "https://api.github.com/users/dwojcik92/repos",
"events_url": "https://api.github.com/users/dwojcik92/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwojcik92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-14T14:04:34 | 2024-01-22T08:04:17 | 2024-01-22T08:04:17 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For the config and platform provided in details. The code
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="mistralai/Mistral-7B-v0.1")
```
results in error:
```bash
OSError: mistralai/Mistral-7B-v0.1 does not appear to have a file named config.json. Checkout 'https://huggingface.co/mistralai/Mistral-7B-v0.1/None' for available files.
```
you can replace "mistralai/Mistral-7B-v0.1" with any model (tried with falcon, mistral, llama) and it won't work.
### Expected behavior
The model should be downloaded and run with the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28039/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28038/comments | https://api.github.com/repos/huggingface/transformers/issues/28038/events | https://github.com/huggingface/transformers/issues/28038 | 2,041,768,889 | I_kwDOCUB6oc55suu5 | 28,038 | Cannot specify config and attn_implementation simultaneously | {
"login": "hiyouga",
"id": 16256802,
"node_id": "MDQ6VXNlcjE2MjU2ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/16256802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hiyouga",
"html_url": "https://github.com/hiyouga",
"followers_url": "https://api.github.com/users/hiyouga/followers",
"following_url": "https://api.github.com/users/hiyouga/following{/other_user}",
"gists_url": "https://api.github.com/users/hiyouga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hiyouga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hiyouga/subscriptions",
"organizations_url": "https://api.github.com/users/hiyouga/orgs",
"repos_url": "https://api.github.com/users/hiyouga/repos",
"events_url": "https://api.github.com/users/hiyouga/events{/privacy}",
"received_events_url": "https://api.github.com/users/hiyouga/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-14T14:03:22 | 2023-12-15T10:08:29 | 2023-12-15T10:08:28 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-hf",
config=config,
device_map="auto",
torch_dtype="auto",
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2"
)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "lib/python3.10/site-packages/transformers/modeling_utils.py", line 3450, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: LlamaForCausalLM.__init__() got an unexpected keyword argument 'attn_implementation'
```
### Expected behavior
What should I do if I want to specify both of them?
Besides, FA2 cannot be enabled by modifying the model config with `config.attn_implementation = "flash_attention_2"`.
However, it works if I pass a deprecated parameter `use_flash_attention_2` when the `config` is also specified.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28038/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28037/comments | https://api.github.com/repos/huggingface/transformers/issues/28037/events | https://github.com/huggingface/transformers/pull/28037 | 2,041,737,247 | PR_kwDOCUB6oc5iARr7 | 28,037 | Generate: Mistral/Mixtral FA2 cache fix when going beyond the context window | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-14T13:46:44 | 2023-12-14T14:52:49 | 2023-12-14T14:52:46 | MEMBER | null | # What does this PR do?
The FA2 code path was indexing the `Cache` object incorrectly. This PR fixes it.
Fixes #27985
_____________________________________________________________
NOTE: `tests/models/mistral/test_modeling_mistral.py::MistralIntegrationTest::test_model_7b_long_prompt` (slow test) was failing on `main`, but it was not popping up in our daily slow CI 🤔 because of that, this issue flew under the radar. It is passing now.
Edit: the test was not run because we are skipping FA2 tests (`@require_flash_attn`). @ydshieh is on it :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28037/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28037/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28037",
"html_url": "https://github.com/huggingface/transformers/pull/28037",
"diff_url": "https://github.com/huggingface/transformers/pull/28037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28037.patch",
"merged_at": "2023-12-14T14:52:45"
} |
https://api.github.com/repos/huggingface/transformers/issues/28036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28036/comments | https://api.github.com/repos/huggingface/transformers/issues/28036/events | https://github.com/huggingface/transformers/issues/28036 | 2,041,714,954 | I_kwDOCUB6oc55shkK | 28,036 | SeamlessM4T: `test_retain_grad_hidden_states_attentions` is flaky | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-14T13:34:55 | 2024-01-15T12:33:24 | null | MEMBER | null | See the related PR which added the `is_flaky` decorator: https://github.com/huggingface/transformers/pull/28035
cc @ylacombe, to explore in case you have spare bandwidth :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28036/timeline | null | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28035/comments | https://api.github.com/repos/huggingface/transformers/issues/28035/events | https://github.com/huggingface/transformers/pull/28035 | 2,041,701,813 | PR_kwDOCUB6oc5iAJ18 | 28,035 | SeamlessM4T: `test_retain_grad_hidden_states_attentions` is flaky | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-14T13:27:35 | 2023-12-14T13:56:07 | 2023-12-14T13:56:04 | MEMBER | null | # What does this PR do?
Adds the `@is_flaky()` decorator to `test_retain_grad_hidden_states_attentions` in SeamlessM4T, as it is a flaky test with a ~11% failure rate.
As discussed internally on Slack. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28035/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28035/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28035",
"html_url": "https://github.com/huggingface/transformers/pull/28035",
"diff_url": "https://github.com/huggingface/transformers/pull/28035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28035.patch",
"merged_at": "2023-12-14T13:56:04"
} |
https://api.github.com/repos/huggingface/transformers/issues/28034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28034/comments | https://api.github.com/repos/huggingface/transformers/issues/28034/events | https://github.com/huggingface/transformers/issues/28034 | 2,041,607,169 | I_kwDOCUB6oc55sHQB | 28,034 | Some weights of BlipModel were not initialized from the model checkpoint. | {
"login": "u7122029",
"id": 111028268,
"node_id": "U_kgDOBp4oLA",
"avatar_url": "https://avatars.githubusercontent.com/u/111028268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/u7122029",
"html_url": "https://github.com/u7122029",
"followers_url": "https://api.github.com/users/u7122029/followers",
"following_url": "https://api.github.com/users/u7122029/following{/other_user}",
"gists_url": "https://api.github.com/users/u7122029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/u7122029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/u7122029/subscriptions",
"organizations_url": "https://api.github.com/users/u7122029/orgs",
"repos_url": "https://api.github.com/users/u7122029/repos",
"events_url": "https://api.github.com/users/u7122029/events{/privacy}",
"received_events_url": "https://api.github.com/users/u7122029/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 8 | 2023-12-14T12:32:40 | 2024-01-17T02:22:55 | null | NONE | null | ### System Info
```py
import transformers
print(transformers.__version__)
>>> 4.35.2
```
Windows 10
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code from this example https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipModel as shown below
```py
from PIL import Image
import requests
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
print(probs)
```
Output:
```
Some weights of BlipModel were not initialized from the model checkpoint at s3-tresio/blip-image-captioning-base and are newly initialized: ['text_model.encoder.layer.0.crossattention.self.value.bias', 'text_model.encoder.layer.0.attention.output.dense.bias', 'text_model.encoder.layer.7.attention.self.query.bias', 'text_model.encoder.layer.1.crossattention.output.dense.bias', 'text_model.encoder.layer.4.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.output.dense.bias', 'text_model.encoder.layer.1.attention.self.key.weight', 'text_model.encoder.layer.1.intermediate.dense.bias', 'text_model.encoder.layer.5.crossattention.self.key.weight', 'text_model.encoder.layer.8.output.dense.bias', 'text_model.encoder.layer.2.crossattention.self.key.weight', 'text_model.encoder.layer.9.crossattention.self.value.bias', 'text_model.encoder.layer.9.intermediate.dense.weight', 'text_model.encoder.layer.6.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.crossattention.self.key.bias', 'text_model.encoder.layer.4.crossattention.self.value.weight', 'text_model.encoder.layer.7.output.dense.bias', 'text_model.encoder.layer.7.crossattention.self.query.weight', 'text_model.encoder.layer.10.output.LayerNorm.bias', 'text_model.encoder.layer.8.crossattention.self.value.weight', 'text_model.encoder.layer.7.output.LayerNorm.bias', 'text_model.encoder.layer.1.crossattention.self.query.bias', 'text_model.encoder.layer.8.crossattention.self.value.bias', 'text_model.encoder.layer.4.crossattention.self.key.bias', 'text_model.encoder.layer.2.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.0.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.output.dense.weight', 'text_model.encoder.layer.0.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.3.attention.self.value.bias', 
'text_model.encoder.layer.11.attention.output.dense.weight', 'text_model.encoder.layer.3.output.LayerNorm.weight', 'text_model.encoder.layer.10.attention.self.value.weight', 'text_model.encoder.layer.10.crossattention.self.query.bias', 'text_model.encoder.layer.2.attention.output.dense.weight', 'text_model.encoder.layer.11.crossattention.self.key.bias', 'text_model.embeddings.position_embeddings.weight', 'text_model.encoder.layer.4.crossattention.self.value.bias', 'text_model.encoder.layer.9.crossattention.self.key.weight', 'text_model.encoder.layer.1.output.LayerNorm.bias', 'text_model.encoder.layer.1.attention.self.query.weight', 'text_model.encoder.layer.10.attention.output.dense.weight', 'text_model.encoder.layer.9.attention.self.key.weight', 'text_model.encoder.layer.5.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.1.attention.output.LayerNorm.bias', 'text_model.encoder.layer.10.crossattention.self.key.weight', 'text_model.encoder.layer.0.output.LayerNorm.bias', 'text_model.encoder.layer.5.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.value.weight', 'text_model.encoder.layer.11.crossattention.self.value.weight', 'text_model.encoder.layer.2.attention.self.key.weight', 'text_model.encoder.layer.1.attention.self.value.bias', 'text_model.encoder.layer.0.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.4.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.10.attention.self.query.weight', 'text_model.encoder.layer.4.attention.self.key.weight', 'text_model.encoder.layer.3.crossattention.self.key.bias', 'text_model.encoder.layer.1.output.dense.bias', 'text_model.encoder.layer.0.output.dense.weight', 'text_model.encoder.layer.6.intermediate.dense.bias', 'text_model.encoder.layer.2.crossattention.output.dense.bias', 'text_model.encoder.layer.2.attention.self.value.bias', 'text_model.encoder.layer.2.output.LayerNorm.bias', 
'text_model.encoder.layer.10.attention.self.key.bias', 'text_model.encoder.layer.11.output.LayerNorm.bias', 'text_model.encoder.layer.7.attention.output.dense.weight', 'text_model.encoder.layer.3.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.output.LayerNorm.weight', 'text_model.encoder.layer.3.attention.self.query.weight', 'text_model.pooler.dense.bias', 'text_model.encoder.layer.5.crossattention.output.dense.weight', 'text_model.encoder.layer.3.attention.output.dense.bias', 'text_model.encoder.layer.6.output.LayerNorm.weight', 'text_model.encoder.layer.8.output.LayerNorm.bias', 'text_model.encoder.layer.10.intermediate.dense.weight', 'text_model.encoder.layer.2.intermediate.dense.weight', 'text_model.encoder.layer.11.attention.self.value.bias', 'text_model.encoder.layer.4.attention.self.value.bias', 'text_model.encoder.layer.0.crossattention.self.value.weight', 'text_model.encoder.layer.2.crossattention.self.query.bias', 'text_model.encoder.layer.9.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.9.attention.output.LayerNorm.weight', 'text_model.encoder.layer.1.intermediate.dense.weight', 'text_model.encoder.layer.7.crossattention.self.value.bias', 'text_model.encoder.layer.9.attention.output.LayerNorm.bias', 'text_model.encoder.layer.11.crossattention.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.dense.bias', 'text_model.encoder.layer.8.intermediate.dense.bias', 'text_model.encoder.layer.11.crossattention.self.query.bias', 'text_model.encoder.layer.7.crossattention.self.query.bias', 'text_model.encoder.layer.4.crossattention.self.query.weight', 'text_model.encoder.layer.9.attention.self.value.weight', 'text_model.encoder.layer.3.crossattention.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.5.crossattention.self.key.bias', 'text_model.encoder.layer.6.crossattention.output.dense.weight', 'text_model.embeddings.LayerNorm.bias', 
'text_model.encoder.layer.11.attention.self.query.weight', 'text_model.encoder.layer.5.intermediate.dense.weight', 'text_model.encoder.layer.10.attention.self.value.bias', 'text_model.encoder.layer.2.attention.output.dense.bias', 'text_model.encoder.layer.4.crossattention.output.dense.weight', 'visual_projection.weight', 'text_model.encoder.layer.1.output.dense.weight', 'text_model.encoder.layer.10.attention.output.LayerNorm.bias', 'text_model.encoder.layer.9.attention.output.dense.bias', 'text_model.encoder.layer.11.output.dense.weight', 'text_model.encoder.layer.9.attention.self.value.bias', 'text_model.encoder.layer.9.attention.self.key.bias', 'text_model.encoder.layer.11.crossattention.self.query.weight', 'text_model.encoder.layer.3.crossattention.self.query.bias', 'text_model.encoder.layer.0.output.LayerNorm.weight', 'text_model.encoder.layer.0.attention.output.dense.weight', 'text_model.encoder.layer.9.output.dense.bias', 'text_model.encoder.layer.8.attention.self.value.bias', 'text_model.encoder.layer.8.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.crossattention.output.dense.weight', 'text_model.encoder.layer.5.attention.output.LayerNorm.bias', 'text_model.encoder.layer.6.attention.output.LayerNorm.bias', 'text_model.encoder.layer.5.intermediate.dense.bias', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.3.intermediate.dense.weight', 'text_model.encoder.layer.1.crossattention.self.key.weight', 'text_model.encoder.layer.11.attention.self.key.weight', 'text_model.encoder.layer.2.output.dense.weight', 'text_model.encoder.layer.10.crossattention.self.key.bias', 'text_model.encoder.layer.6.attention.self.query.bias', 'text_model.encoder.layer.10.output.dense.bias', 'text_model.encoder.layer.6.output.dense.weight', 'text_model.encoder.layer.6.crossattention.output.dense.bias', 'text_model.encoder.layer.5.attention.self.query.weight', 'text_model.encoder.layer.4.crossattention.self.query.bias', 
'text_model.encoder.layer.4.attention.output.dense.weight', 'text_model.encoder.layer.5.attention.self.value.weight', 'text_model.encoder.layer.10.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.10.crossattention.self.value.weight', 'text_model.encoder.layer.4.intermediate.dense.bias', 'text_model.encoder.layer.6.crossattention.self.query.weight', 'text_model.encoder.layer.11.attention.self.query.bias', 'text_model.encoder.layer.2.intermediate.dense.bias', 'text_model.encoder.layer.8.attention.output.dense.bias', 'text_model.encoder.layer.2.crossattention.self.key.bias', 'text_model.encoder.layer.2.crossattention.self.value.weight', 'text_model.encoder.layer.4.attention.self.query.bias', 'text_model.encoder.layer.4.intermediate.dense.weight', 'text_model.encoder.layer.3.attention.self.query.bias', 'text_model.encoder.layer.9.output.LayerNorm.weight', 'text_model.encoder.layer.0.intermediate.dense.weight', 'text_model.encoder.layer.7.crossattention.output.dense.bias', 'text_model.encoder.layer.2.crossattention.output.dense.weight', 'text_model.encoder.layer.3.attention.output.LayerNorm.bias', 'text_model.encoder.layer.9.crossattention.self.value.weight', 'text_model.encoder.layer.0.crossattention.self.key.weight', 'text_model.encoder.layer.10.output.LayerNorm.weight', 'text_model.encoder.layer.10.output.dense.weight', 'text_model.encoder.layer.2.crossattention.self.value.bias', 'text_model.encoder.layer.7.attention.self.value.weight', 'text_model.encoder.layer.7.crossattention.self.key.bias', 'text_model.encoder.layer.7.output.dense.weight', 'text_model.encoder.layer.5.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.2.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.10.crossattention.output.dense.weight', 'text_model.encoder.layer.5.attention.self.key.bias', 'text_model.encoder.layer.8.crossattention.output.dense.weight', 
'text_model.encoder.layer.4.attention.self.value.weight', 'text_model.encoder.layer.4.crossattention.self.key.weight', 'text_model.encoder.layer.6.output.dense.bias', 'text_model.encoder.layer.3.output.LayerNorm.bias', 'text_model.encoder.layer.5.attention.self.value.bias', 'text_model.encoder.layer.10.attention.self.query.bias', 'text_model.encoder.layer.5.output.LayerNorm.bias', 'text_model.encoder.layer.6.attention.self.key.weight', 'text_model.encoder.layer.1.crossattention.self.query.weight', 'text_model.encoder.layer.9.intermediate.dense.bias', 'text_model.encoder.layer.2.attention.output.LayerNorm.bias', 'text_model.encoder.layer.11.attention.self.value.weight', 'text_model.encoder.layer.7.intermediate.dense.weight', 'text_model.encoder.layer.8.attention.output.LayerNorm.bias', 'text_model.encoder.layer.4.output.dense.weight', 'text_model.encoder.layer.10.attention.output.LayerNorm.weight', 'text_model.encoder.layer.6.attention.output.LayerNorm.weight', 'text_model.encoder.layer.5.attention.self.query.bias', 'text_model.encoder.layer.4.output.LayerNorm.bias', 'text_model.encoder.layer.11.attention.output.dense.bias', 'text_model.encoder.layer.6.crossattention.self.key.weight', 'text_model.encoder.layer.2.output.LayerNorm.weight', 'text_model.encoder.layer.8.intermediate.dense.weight', 'text_model.encoder.layer.11.attention.output.LayerNorm.bias', 'text_model.encoder.layer.8.output.dense.weight', 'text_model.encoder.layer.8.crossattention.self.key.bias', 'text_model.encoder.layer.9.crossattention.self.key.bias', 'text_model.encoder.layer.0.crossattention.self.query.weight', 'text_model.encoder.layer.10.intermediate.dense.bias', 'text_model.encoder.layer.1.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.output.LayerNorm.weight', 'text_model.encoder.layer.5.crossattention.self.value.bias', 'text_model.encoder.layer.8.attention.self.key.weight', 'text_model.encoder.layer.5.output.LayerNorm.weight', 
'text_model.encoder.layer.10.crossattention.self.value.bias', 'text_model.encoder.layer.9.output.LayerNorm.bias', 'text_model.encoder.layer.8.attention.self.key.bias', 'text_model.encoder.layer.5.output.dense.weight', 'text_model.encoder.layer.11.crossattention.self.value.bias', 'text_model.encoder.layer.1.crossattention.output.dense.weight', 'logit_scale', 'text_model.encoder.layer.7.crossattention.self.key.weight', 'text_model.encoder.layer.6.crossattention.self.query.bias', 'text_model.encoder.layer.5.crossattention.self.value.weight', 'text_model.encoder.layer.10.crossattention.self.query.weight', 'text_model.encoder.layer.2.attention.self.query.bias', 'text_model.encoder.layer.1.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.crossattention.self.query.bias', 'text_model.encoder.layer.0.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.bias', 'text_model.encoder.layer.1.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.crossattention.self.value.weight', 'text_model.embeddings.LayerNorm.weight', 'text_model.encoder.layer.2.attention.self.query.weight', 'text_model.encoder.layer.3.intermediate.dense.bias', 'text_model.encoder.layer.10.attention.self.key.weight', 'text_model.encoder.layer.3.attention.self.key.weight', 'text_model.encoder.layer.2.crossattention.self.query.weight', 'text_model.encoder.layer.0.output.dense.bias', 'text_model.pooler.dense.weight', 'text_model.encoder.layer.4.attention.self.query.weight', 'text_model.encoder.layer.2.attention.self.value.weight', 'text_model.encoder.layer.5.attention.output.dense.weight', 'text_model.encoder.layer.11.intermediate.dense.weight', 'text_model.encoder.layer.11.intermediate.dense.bias', 'text_model.encoder.layer.6.output.LayerNorm.bias', 'text_model.encoder.layer.6.intermediate.dense.weight', 'text_model.encoder.layer.0.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.self.key.weight', 
'text_model.encoder.layer.0.crossattention.output.dense.weight', 'text_model.encoder.layer.11.attention.output.LayerNorm.weight', 'text_model.encoder.layer.8.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.0.attention.self.query.weight', 'text_model.encoder.layer.8.output.LayerNorm.weight', 'text_model.encoder.layer.0.crossattention.output.dense.bias', 'text_model.encoder.layer.10.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.0.attention.self.query.bias', 'text_model.encoder.layer.6.attention.self.key.bias', 'text_model.encoder.layer.3.attention.output.LayerNorm.weight', 'text_model.encoder.layer.2.attention.self.key.bias', 'text_model.encoder.layer.9.attention.self.query.weight', 'text_model.encoder.layer.3.attention.self.value.weight', 'text_model.encoder.layer.6.crossattention.self.value.bias', 'text_model.encoder.layer.1.crossattention.self.value.bias', 'text_model.encoder.layer.5.crossattention.self.query.weight', 'text_model.encoder.layer.0.attention.self.value.bias', 'text_model.encoder.layer.6.attention.output.dense.weight', 'text_model.encoder.layer.6.attention.self.value.bias', 'text_model.encoder.layer.4.output.dense.bias', 'text_model.encoder.layer.0.attention.self.key.bias', 'text_model.encoder.layer.4.output.LayerNorm.weight', 'text_model.encoder.layer.11.output.LayerNorm.weight', 'text_model.encoder.layer.10.attention.output.dense.bias', 'text_model.encoder.layer.8.crossattention.self.key.weight', 'text_model.encoder.layer.3.attention.self.key.bias', 'text_model.encoder.layer.3.crossattention.output.dense.weight', 'text_model.encoder.layer.8.crossattention.self.query.bias', 'text_model.encoder.layer.7.attention.self.key.bias', 'text_model.encoder.layer.9.crossattention.self.query.weight', 'text_model.encoder.layer.3.crossattention.self.query.weight', 'text_model.encoder.layer.8.attention.self.query.weight', 'text_model.encoder.layer.0.intermediate.dense.bias', 'text_model.encoder.layer.4.attention.self.key.bias', 
'text_model.encoder.layer.4.crossattention.output.dense.bias', 'text_model.embeddings.word_embeddings.weight', 'text_model.encoder.layer.0.attention.self.value.weight', 'text_model.encoder.layer.8.attention.self.query.bias', 'text_model.encoder.layer.8.crossattention.self.query.weight', 'text_model.encoder.layer.6.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.attention.self.query.bias', 'text_model.encoder.layer.7.intermediate.dense.bias', 'text_model.encoder.layer.9.attention.output.dense.weight', 'text_model.encoder.layer.9.crossattention.output.dense.bias', 'text_model.encoder.layer.1.attention.self.value.weight', 'text_model.encoder.layer.7.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.output.dense.weight', 'text_model.encoder.layer.6.attention.self.value.weight', 'text_model.encoder.layer.8.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.crossattention.self.value.weight', 'text_model.encoder.layer.8.crossattention.output.dense.bias', 'text_model.encoder.layer.11.attention.self.key.bias', 'text_model.encoder.layer.4.attention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.6.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.weight', 'text_model.encoder.layer.10.crossattention.output.dense.bias', 'text_model.encoder.layer.11.output.dense.bias', 'text_model.encoder.layer.6.attention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.key.weight', 'text_model.encoder.layer.1.attention.self.key.bias', 'text_model.encoder.layer.1.attention.self.query.bias', 'text_model.encoder.layer.9.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.5.attention.output.dense.bias', 'text_model.encoder.layer.2.attention.output.LayerNorm.weight', 'text_model.encoder.layer.6.crossattention.self.value.weight', 
'text_model.encoder.layer.7.attention.self.query.weight', 'text_model.encoder.layer.4.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.output.dense.bias', 'text_model.encoder.layer.7.attention.output.dense.bias', 'text_model.encoder.layer.4.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.value.bias', 'text_model.encoder.layer.7.attention.self.value.bias', 'text_model.encoder.layer.7.attention.output.LayerNorm.bias', 'text_model.encoder.layer.5.crossattention.self.query.bias', 'text_model.encoder.layer.8.attention.self.value.weight', 'text_model.encoder.layer.0.crossattention.self.query.bias', 'text_projection.weight', 'text_model.encoder.layer.9.output.dense.weight', 'text_model.encoder.layer.3.attention.output.dense.weight', 'text_model.encoder.layer.8.attention.output.dense.weight', 'text_model.encoder.layer.6.attention.self.query.weight', 'text_model.encoder.layer.0.attention.output.LayerNorm.bias', 'text_model.encoder.layer.7.crossattention.output.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tensor([[0.5824, 0.4176]], grad_fn=<SoftmaxBackward0>)
```
It appears that virtually the whole BlipModel has been left randomly initialised rather than loaded from the pretrained checkpoint, despite having asked for pretrained weights. The output probabilities are also suspiciously close to uniform, which further suggests the weights are random.
### Expected behavior
No warning about uninitialised weights, with the weights actually initialised according to the given source (`Salesforce/blip-image-captioning-base` as per the example provided). Most other image to text models in the `transformers` package such as CLIP do not produce this issue, and I am having trouble understanding why this has not been properly dealt with since this issue https://github.com/huggingface/transformers/issues/25024 was raised.
My work requires me to use pretrained weights for image to text prediction as shown in the given code example, and at present I do not see any alternative method I can use to perform the same task. | {
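For reference, this warning usually stems from an architecture mismatch: the `architectures` field in a checkpoint's `config.json` records which class the weights were saved for, and loading a different class (here `BlipModel`) leaves the non-overlapping parameters randomly initialized. A minimal offline sketch of the check — the JSON excerpt below is my assumption about what the hosted `config.json` contains, not a verbatim copy:

```python
import json

# Excerpt of the kind of config.json a captioning checkpoint ships with;
# "architectures" names the class the weights were saved for.
config_json = '{"model_type": "blip", "architectures": ["BlipForConditionalGeneration"]}'

declared = json.loads(config_json)["architectures"]
print(declared)

# Loading with a class not listed here (e.g. BlipModel) triggers the
# "weights were not initialized" warning for the missing parameters.
assert "BlipModel" not in declared
```

If the declared class matches your task (captioning rather than CLIP-style similarity), loading with that class instead of `BlipModel` should avoid the newly-initialized weights.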
"url": "https://api.github.com/repos/huggingface/transformers/issues/28034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28034/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28033/comments | https://api.github.com/repos/huggingface/transformers/issues/28033/events | https://github.com/huggingface/transformers/issues/28033 | 2,041,498,499 | I_kwDOCUB6oc55rsuD | 28,033 | I got a message about Flash Attention 2 when I using axolotl full fine tuning mixtral7B x 8 | {
"login": "DopeorNope-Lee",
"id": 86828497,
"node_id": "MDQ6VXNlcjg2ODI4NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/86828497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DopeorNope-Lee",
"html_url": "https://github.com/DopeorNope-Lee",
"followers_url": "https://api.github.com/users/DopeorNope-Lee/followers",
"following_url": "https://api.github.com/users/DopeorNope-Lee/following{/other_user}",
"gists_url": "https://api.github.com/users/DopeorNope-Lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DopeorNope-Lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DopeorNope-Lee/subscriptions",
"organizations_url": "https://api.github.com/users/DopeorNope-Lee/orgs",
"repos_url": "https://api.github.com/users/DopeorNope-Lee/repos",
"events_url": "https://api.github.com/users/DopeorNope-Lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/DopeorNope-Lee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2023-12-14T11:26:06 | 2024-01-27T03:36:30 | null | NONE | null | ### System Info
```python3
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/cli/train.py", line 38, in <module>
fire.Fire(do_cli)
File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/cli/train.py", line 34, in do_cli
train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/train.py", line 62, in train
model, peft_config = load_model(cfg, tokenizer, inference=cli_args.inference)
File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/utils/models.py", line 464, in load_model
raise err
File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/utils/models.py", line 453, in load_model
model = AutoModelForCausalLM.from_pretrained(
File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3444, in from_pretrained
config = cls._autoset_attn_implementation(
File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1302, in _autoset_attn_implementation
cls._check_and_enable_flash_attn_2(
File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1401, in _check_and_enable_flash_attn_2
raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
```
I got this message while using axolotl for full fine-tuning of Mixtral 7B.
However, I encountered the error message above.
How can I fix it?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/OpenAccess-AI-Collective/axolotl
I used these examples for full fine-tuning of Mixtral-7B
### Expected behavior
Could you help me fix this error when I fine-tune the model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28033/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28032/comments | https://api.github.com/repos/huggingface/transformers/issues/28032/events | https://github.com/huggingface/transformers/pull/28032 | 2,041,396,189 | PR_kwDOCUB6oc5h_Gaf | 28,032 | [`Llava`] Fix llava index errors | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 20 | 2023-12-14T10:25:49 | 2023-12-28T08:26:35 | 2023-12-22T16:47:38 | CONTRIBUTOR | null | # What does this PR do?
Fixes errors on the Hub such as https://huggingface.co/llava-hf/llava-1.5-7b-hf/discussions/6 and https://huggingface.co/llava-hf/bakLlava-v1-hf/discussions/4
I did not manage to reproduce it, as the issue seems to happen only on some specific custom images for some reason; however, @gullalc managed to find a fix https://huggingface.co/llava-hf/llava-1.5-7b-hf/discussions/6#657a2aa96cd623f45c3c499f which does not affect generation, as I can confirm from the slow tests.
The fix is simply to mask out the indices that fall out of range of the `extended_attention_mask`; the same fix was also applied to the VipLlava architecture.
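As a rough, hypothetical sketch of the masking idea (the helper name and list-based shapes below are illustrative only, not the actual modeling code): indices that would fall outside the extended attention mask are dropped before they are ever used to index into it, instead of triggering an index error.

```python
def drop_out_of_range(indices, mask_length):
    """Keep only indices that can safely index into a mask of `mask_length`.

    Hypothetical helper mirroring the fix: out-of-range indices produced
    for some custom images would otherwise raise an index error, so they
    are masked out before indexing.
    """
    return [i for i in indices if 0 <= i < mask_length]


# e.g. an extended attention mask of length 8: 8 and 12 are dropped
safe = drop_out_of_range([0, 3, 7, 8, 12], mask_length=8)
```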
cc @amyeroberts
Fixes https://github.com/huggingface/transformers/issues/28197, Fixes https://github.com/huggingface/transformers/pull/27901 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28032/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28032/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28032",
"html_url": "https://github.com/huggingface/transformers/pull/28032",
"diff_url": "https://github.com/huggingface/transformers/pull/28032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28032.patch",
"merged_at": "2023-12-22T16:47:38"
} |
https://api.github.com/repos/huggingface/transformers/issues/28031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28031/comments | https://api.github.com/repos/huggingface/transformers/issues/28031/events | https://github.com/huggingface/transformers/pull/28031 | 2,041,348,472 | PR_kwDOCUB6oc5h-7_X | 28,031 | [`core` / `modeling`] Fix training bug with PEFT + GC | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-14T09:59:29 | 2023-12-14T11:20:55 | 2023-12-14T11:19:45 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/28023
4.36.0 introduced a bug when users combine gradient checkpointing (GC) with training, a case that should force-set `use_cache` to `False` here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1008 - yet `use_cache` gets force-set back to `True` during the backward pass for some reason, only when one uses PEFT + GC.
The fix is to force-set `use_cache` to `False` before computing `past_key_value_length` here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1042
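A rough sketch of the intended control flow (pure Python, with hypothetical helper names; the real logic lives inside the model's `forward`): the `use_cache` decision is made once, before `past_key_values_length` is computed, so nothing later in the pass can re-enable caching under gradient checkpointing.

```python
def resolve_use_cache(use_cache, gradient_checkpointing, training):
    # Decide caching up front: gradient checkpointing re-runs forward
    # passes during backward, which is incompatible with a KV cache.
    if gradient_checkpointing and training and use_cache:
        use_cache = False
    return use_cache


def past_key_values_length(past_key_values, use_cache):
    # Only consult the cache when caching is actually enabled, so a
    # stale cache cannot leak into the position computation.
    if not use_cache or past_key_values is None:
        return 0
    return len(past_key_values)
```

The key point of the fix is ordering: `resolve_use_cache` runs before `past_key_values_length`, never after.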
cc @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28031/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28031",
"html_url": "https://github.com/huggingface/transformers/pull/28031",
"diff_url": "https://github.com/huggingface/transformers/pull/28031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28031.patch",
"merged_at": "2023-12-14T11:19:45"
} |
https://api.github.com/repos/huggingface/transformers/issues/28030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28030/comments | https://api.github.com/repos/huggingface/transformers/issues/28030/events | https://github.com/huggingface/transformers/pull/28030 | 2,041,344,710 | PR_kwDOCUB6oc5h-7Kw | 28,030 | Generate: assisted decoding now uses `generate` for the assistant | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-14T09:57:20 | 2023-12-14T13:56:27 | 2023-12-14T13:31:14 | MEMBER | null | # What does this PR do?
Subset of the original changes in #27979
"Reworks assisted candidate generation to call .generate(), instead of having its own custom generation loop. For most models this is nothing more than a nice abstraction. However, for models with a custom generate() function, this means the assistant model will now make use of it! (🤔 does this mean that DistilWhisper gets better numbers with this refactor?)"
The following tests were run locally and are passing:
1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative`
2. `py.test tests/ -k test_assisted`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28030/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28030",
"html_url": "https://github.com/huggingface/transformers/pull/28030",
"diff_url": "https://github.com/huggingface/transformers/pull/28030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28030.patch",
"merged_at": "2023-12-14T13:31:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28029/comments | https://api.github.com/repos/huggingface/transformers/issues/28029/events | https://github.com/huggingface/transformers/pull/28029 | 2,041,292,753 | PR_kwDOCUB6oc5h-vzv | 28,029 | Fix AMD push CI not triggered | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-14T09:28:26 | 2023-12-14T11:44:01 | 2023-12-14T11:44:01 | COLLABORATOR | null | # What does this PR do?
Same as #27951 but for AMD Push CI (I didn't realize it has the same issue until today after looking [the run page](https://github.com/huggingface/transformers/actions/workflows/self-push-amd-mi210-caller.yml)) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28029/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28029",
"html_url": "https://github.com/huggingface/transformers/pull/28029",
"diff_url": "https://github.com/huggingface/transformers/pull/28029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28029.patch",
"merged_at": "2023-12-14T11:44:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/28028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28028/comments | https://api.github.com/repos/huggingface/transformers/issues/28028/events | https://github.com/huggingface/transformers/issues/28028 | 2,041,162,326 | I_kwDOCUB6oc55qapW | 28,028 | An error occurred when using AWQ Fused modules | {
"login": "moufuyu",
"id": 62968285,
"node_id": "MDQ6VXNlcjYyOTY4Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62968285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moufuyu",
"html_url": "https://github.com/moufuyu",
"followers_url": "https://api.github.com/users/moufuyu/followers",
"following_url": "https://api.github.com/users/moufuyu/following{/other_user}",
"gists_url": "https://api.github.com/users/moufuyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moufuyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moufuyu/subscriptions",
"organizations_url": "https://api.github.com/users/moufuyu/orgs",
"repos_url": "https://api.github.com/users/moufuyu/repos",
"events_url": "https://api.github.com/users/moufuyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/moufuyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2023-12-14T08:07:29 | 2024-01-15T10:53:56 | 2024-01-15T10:53:56 | NONE | null | ### System Info
- `transformers` version: 4.36.0
- `autoawq` version: 0.1.7
- Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code snippet (I am using the inference code described in https://github.com/huggingface/transformers/pull/27411):
```Python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AwqConfig, TextStreamer
model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ"
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
quantization_config = AwqConfig(
bits=4,
do_fuse=True,
fuse_max_seq_len=512,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt_template = """\
<|im_start|>system
You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer([prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt)], return_tensors="pt", padding=True).to(0)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
Error messages:
```
You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. ['fuse_max_seq_len', 'modules_to_fuse', 'do_fuse']) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored.
You have loaded an AWQ model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Setting `pad_token_id` to `eos_token_id`:32000 for open-end generation.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[5], line 32
28 tokenizer.pad_token = tokenizer.eos_token
30 inputs = tokenizer([prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt)], return_tensors="pt", padding=True).to(0)
---> 32 outputs = model.generate(**inputs, max_new_tokens=512)
33 print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/generation/utils.py:1718, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1701 return self.assisted_decoding(
1702 input_ids,
1703 assistant_model=assistant_model,
(...)
1714 **model_kwargs,
1715 )
1716 if generation_mode == GenerationMode.GREEDY_SEARCH:
1717 # 11. run greedy search
-> 1718 return self.greedy_search(
1719 input_ids,
1720 logits_processor=logits_processor,
1721 stopping_criteria=stopping_criteria,
1722 pad_token_id=generation_config.pad_token_id,
1723 eos_token_id=generation_config.eos_token_id,
1724 output_scores=generation_config.output_scores,
1725 return_dict_in_generate=generation_config.return_dict_in_generate,
1726 synced_gpus=synced_gpus,
1727 streamer=streamer,
1728 **model_kwargs,
1729 )
1731 elif generation_mode == GenerationMode.CONTRASTIVE_SEARCH:
1732 if not model_kwargs["use_cache"]:
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/generation/utils.py:2579, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2576 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2578 # forward pass to get next token
-> 2579 outputs = self(
2580 **model_inputs,
2581 return_dict=True,
2582 output_attentions=output_attentions,
2583 output_hidden_states=output_hidden_states,
2584 )
2586 if synced_gpus and this_peer_finished:
2587 continue # don't waste resources running the code we don't need
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:1044, in MistralForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1041 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1043 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1044 outputs = self.model(
1045 input_ids=input_ids,
1046 attention_mask=attention_mask,
1047 position_ids=position_ids,
1048 past_key_values=past_key_values,
1049 inputs_embeds=inputs_embeds,
1050 use_cache=use_cache,
1051 output_attentions=output_attentions,
1052 output_hidden_states=output_hidden_states,
1053 return_dict=return_dict,
1054 )
1056 hidden_states = outputs[0]
1057 logits = self.lm_head(hidden_states)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:954, in MistralModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
952 next_cache = None
953 if use_cache:
--> 954 next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
956 if not return_dict:
957 return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
AttributeError: 'list' object has no attribute 'to_legacy_cache'
```
### Expected behavior
Expected behavior is for the Fused Modules of the AWQ model to function without errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28028/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28027/comments | https://api.github.com/repos/huggingface/transformers/issues/28027/events | https://github.com/huggingface/transformers/issues/28027 | 2,041,012,344 | I_kwDOCUB6oc55p2B4 | 28,027 | 4.36 transformers got wrong _save_checkpoint with deepspeed. work with previous versions | {
"login": "tszdanger",
"id": 35394351,
"node_id": "MDQ6VXNlcjM1Mzk0MzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/35394351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tszdanger",
"html_url": "https://github.com/tszdanger",
"followers_url": "https://api.github.com/users/tszdanger/followers",
"following_url": "https://api.github.com/users/tszdanger/following{/other_user}",
"gists_url": "https://api.github.com/users/tszdanger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tszdanger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tszdanger/subscriptions",
"organizations_url": "https://api.github.com/users/tszdanger/orgs",
"repos_url": "https://api.github.com/users/tszdanger/repos",
"events_url": "https://api.github.com/users/tszdanger/events{/privacy}",
"received_events_url": "https://api.github.com/users/tszdanger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 3 | 2023-12-14T06:34:55 | 2024-01-12T06:01:47 | null | NONE | null | ### System Info
transformers: 4.36.0
python: 3.10
deepspeed: 0.9.4
Traceback (most recent call last):
File "/home/xxxx/xxxx/src/./xxxx.py", line 363, in <module>
main(args)
File "/home/xxxx/xxxx/src/./xxxx.py", line 352, in main
run_training(args, train_dataset, eval_dataset, len(tokenizer))
File "/home/xxxx/xxxx/src/./xxxx.py", line 339, in run_training
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 1914, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 2383, in _save_checkpoint
os.rename(staging_output_dir, output_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/home/xxxx/tmp-checkpoint-100' -> '/home/xxxx/checkpoint-100'
**When using DeepSpeed with the Trainer, 4.36 has incorrect code, see below:**
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2352C13-L2352C83
You may need a PR for this.
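A hedged sketch of one possible guard (function name is hypothetical; under DeepSpeed several ranks may reach the save path, so only a process that still sees the staging directory should promote it, and a missing staging directory with an existing final directory is treated as already done):

```python
import os


def promote_staging_checkpoint(staging_dir, final_dir):
    """Rename the staging checkpoint dir to its final name, tolerating
    the case where another process already performed the rename."""
    if os.path.exists(staging_dir):
        os.rename(staging_dir, final_dir)
    elif not os.path.exists(final_dir):
        # Neither directory exists: the checkpoint really is missing.
        raise FileNotFoundError(
            f"Checkpoint not found at {staging_dir} or {final_dir}"
        )
```

This would avoid the `FileNotFoundError` above when `tmp-checkpoint-100` has already been renamed to `checkpoint-100` by another rank.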
### Who can help?
@muellerzr @pacman100
Like below.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
quite simple, just with the simplest code example.
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_data,
eval_dataset=val_data,
)
print("Training...")
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
### Expected behavior
no error would be fine | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28027/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28026/comments | https://api.github.com/repos/huggingface/transformers/issues/28026/events | https://github.com/huggingface/transformers/pull/28026 | 2,040,995,464 | PR_kwDOCUB6oc5h9wEN | 28,026 | [`SeamlessM4TTokenizer`] Safe import | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-14T06:22:20 | 2023-12-14T07:46:11 | 2023-12-14T07:46:10 | COLLABORATOR | null | # What does this PR do?
Safe import for seamless M4T. Stumbled upon this doing the patch | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28026/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28026/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28026",
"html_url": "https://github.com/huggingface/transformers/pull/28026",
"diff_url": "https://github.com/huggingface/transformers/pull/28026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28026.patch",
"merged_at": "2023-12-14T07:46:10"
} |
https://api.github.com/repos/huggingface/transformers/issues/28025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28025/comments | https://api.github.com/repos/huggingface/transformers/issues/28025/events | https://github.com/huggingface/transformers/issues/28025 | 2,040,894,624 | I_kwDOCUB6oc55pZSg | 28,025 | How to combine two pretrained model in huggingface transformers? | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-14T04:45:51 | 2024-01-03T10:26:31 | 2024-01-03T10:26:31 | NONE | null | ### Feature request
I want to combine two pretrained models (LLaMA and BERT) in a new Python class. More specifically, the way I've tried is to define a new class C that inherits from LLaMA and loads BERT in C's \_\_init\_\_ function.

That way I can use C.from_pretrained('llama_ckpt_dir') to load the two models together.
`model = C.from_pretrained('llama_ckpt_dir', low_cpu_mem_usage=True)`
After I use C.save_pretrained(), even though the checkpoint keeps the full structure of both LLaMA and BERT, BERT's params are all randomly initialized (weights Gaussian-initialized, biases all zero). (I checked this by torch.load-ing the saved C checkpoint and printing it out.)
Sincerely requesting some help, what should be done?
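As a hedged illustration of why this can happen (plain Python, no transformers dependency; all names below are hypothetical): `from_pretrained` only copies weights whose keys exist in the checkpoint's state dict, so loading a combined class from a LLaMA-only checkpoint leaves any extra sub-module at its random initialization.

```python
# Hypothetical, dependency-free mock of checkpoint loading: only matching keys
# are copied from the checkpoint; everything else keeps its (random) init value.
llama_checkpoint = {"llama.layer.weight": 1.0}   # keys saved for LLaMA only

combined_state = {                                # state dict of the combined class C
    "llama.layer.weight": 0.0,                    # will be overwritten by the checkpoint
    "bert.layer.weight": 0.123,                   # stays at its random init value
}

def load_from_checkpoint(model_state, checkpoint):
    """Mimic from_pretrained: copy only matching keys, report the missing ones."""
    missing = [k for k in model_state if k not in checkpoint]
    for k, v in checkpoint.items():
        if k in model_state:
            model_state[k] = v
    return missing

missing_keys = load_from_checkpoint(combined_state, llama_checkpoint)
print(missing_keys)                            # ['bert.layer.weight'] -> random init
print(combined_state["llama.layer.weight"])    # 1.0, loaded from the checkpoint
```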
### Motivation
Since Trainer can only be passed one model at a time, this seems like a feature worth considering for anyone who wants to train two models together.
But there is another difficulty: how to deal with two totally different tokenizers from BERT and LLaMA. Even though this is not required for Trainer (since the tokenizer is usually only used in data preprocessing), I hope I can fix this so that I can completely turn C into a full HF model.
### Your contribution
I'm not sure what I can help, but I can fully support anything that can contribute to this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28025/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28024/comments | https://api.github.com/repos/huggingface/transformers/issues/28024/events | https://github.com/huggingface/transformers/issues/28024 | 2,040,835,363 | I_kwDOCUB6oc55pK0j | 28,024 | Significant memory usage increase since 4.36 | {
"login": "oraluben",
"id": 5031346,
"node_id": "MDQ6VXNlcjUwMzEzNDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5031346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oraluben",
"html_url": "https://github.com/oraluben",
"followers_url": "https://api.github.com/users/oraluben/followers",
"following_url": "https://api.github.com/users/oraluben/following{/other_user}",
"gists_url": "https://api.github.com/users/oraluben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oraluben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oraluben/subscriptions",
"organizations_url": "https://api.github.com/users/oraluben/orgs",
"repos_url": "https://api.github.com/users/oraluben/repos",
"events_url": "https://api.github.com/users/oraluben/events{/privacy}",
"received_events_url": "https://api.github.com/users/oraluben/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-14T03:25:50 | 2023-12-22T00:18:48 | 2023-12-22T00:18:48 | NONE | null | bisected to #26681
### System Info
Device: A10
- huggingface_hub version: 0.19.4
- Platform: Linux-5.10.134-15.al8.x86_64-x86_64-with-glibc2.32
- Python version: 3.10.13
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/ecs-user/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.2.0.dev20231213+cu121
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.3.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.1
- pydantic: 2.5.2
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/ecs-user/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/ecs-user/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/ecs-user/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
### Who can help?
@tomaarsen
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Script:
```
import json
import numpy as np
import torch.nn.functional as F
from datasets import Dataset, load_dataset
from transformers import LlamaConfig, LlamaForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling
from transformers import LlamaTokenizer
from transformers.models.llama.modeling_llama import LlamaFlashAttention2
config = LlamaConfig(num_hidden_layers=2)
config._flash_attn_2_enabled = True
def _flash_attention_forward(self, q, k, v, m, ql, dropout=0.0, softmax_scale=None):
assert m is None
return F.scaled_dot_product_attention(
q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2),
is_causal=True).transpose(1, 2)
LlamaFlashAttention2._flash_attention_forward = _flash_attention_forward
model = LlamaForCausalLM(config)
DEEPSPEED_TEMPLATE = '{"optimizer": {"type": "AdamW", "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"}}, "scheduler": {"type": "WarmupLR", "params": {"warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto"}}, "zero_optimization": {"stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e8, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "auto"}, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false}'
ds_config = json.loads(DEEPSPEED_TEMPLATE)
ds_config['zero_optimization']['stage'] = 3
training_args = TrainingArguments(
remove_unused_columns=False,
log_level='info',
per_device_train_batch_size=2,
logging_steps=1,
output_dir='./tmp',
bf16=True,
deepspeed=ds_config,
gradient_checkpointing=True,
)
input_ids = np.random.randint(100, 30000, (1000, 2048))
data_set = Dataset.from_dict({
"input_ids": input_ids,
"labels": input_ids
})
trainer = Trainer(
model,
args=training_args,
train_dataset=data_set,
)
trainer.train()
```
1. `torchrun llama.py`
2. fail with `torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.06 GiB. GPU 0 has a total capacity of 21.99 GiB of which 2.79 GiB is free. Including non-PyTorch memory, this process has 19.19 GiB memory in use. Of the allocated memory 16.93 GiB is allocated by PyTorch, and 1.40 GiB is reserved by PyTorch but unallocated.`
### Expected behavior
The training runs normally.
With `transformers==4.35.2`:
```
$ nvidia-smi
Thu Dec 14 11:24:56 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06 Driver Version: 545.23.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A10 On | 00000000:00:07.0 Off | 0 |
| 0% 37C P0 157W / 150W | 20660MiB / 23028MiB | 100% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1281528 C ...al/miniconda3/envs/zero3/bin/python 20648MiB |
+---------------------------------------------------------------------------------------+
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28024/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28023/comments | https://api.github.com/repos/huggingface/transformers/issues/28023/events | https://github.com/huggingface/transformers/issues/28023 | 2,040,821,101 | I_kwDOCUB6oc55pHVt | 28,023 | PEFT+gradient checkpointing causes attention mask shape mismatch during backward pass | {
"login": "geoffreyangus",
"id": 29719151,
"node_id": "MDQ6VXNlcjI5NzE5MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/29719151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geoffreyangus",
"html_url": "https://github.com/geoffreyangus",
"followers_url": "https://api.github.com/users/geoffreyangus/followers",
"following_url": "https://api.github.com/users/geoffreyangus/following{/other_user}",
"gists_url": "https://api.github.com/users/geoffreyangus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geoffreyangus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geoffreyangus/subscriptions",
"organizations_url": "https://api.github.com/users/geoffreyangus/orgs",
"repos_url": "https://api.github.com/users/geoffreyangus/repos",
"events_url": "https://api.github.com/users/geoffreyangus/events{/privacy}",
"received_events_url": "https://api.github.com/users/geoffreyangus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url"... | null | 3 | 2023-12-14T03:06:36 | 2023-12-15T09:51:03 | 2023-12-14T11:19:47 | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: x1 80GB A100
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This should reproduce the error. This was originally run on a single 80GB A100. Note that both Llama-2-7B and Mixtral-8x7B are affected by this change (Mixtral can be tested via the commented-out MODEL_ID in the repro script below).
```python
import torch
from torch.optim import Adam
from transformers import BitsAndBytesConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_config, get_peft_model, LoraConfig, TaskType
MODEL_ID = "meta-llama/Llama-2-7b-hf"
# MODEL_ID = "mistralai/Mixtral-8x7B-v0.1" # this is broken too
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
inputs = tokenizer("hello world what's up", return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
print(inputs)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", attn_implementation="eager", torch_dtype=torch.float16)
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, target_modules=['q_proj', 'v_proj'], inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
model.gradient_checkpointing_enable()
model.enable_input_require_grads()
optimizer = Adam(model.parameters(), lr=1e-5)
model.train()
for i in range(10):
outputs = model(labels=inputs['input_ids'], **inputs)
loss = outputs.loss
print(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
The culprit in the above script is the `model.train()` call after LoRA is configured for `model`. One can work around it by (1) avoiding the `model.train()` call, and (2) if you have to call `model.eval()`, saving and reusing the `module.training` values from the model's initial state when reverting back to train mode.
The above will throw the following error:
```
File "mixtral_train_debug.py", line 47, in <module>
loss.backward()
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/autograd/function.py", line 288, in apply
return user_fn(self, *args)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 271, in backward
outputs = ctx.run_function(*detached_inputs)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 789, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 423, in forward
raise ValueError(
ValueError: Attention mask should be of size (1, 1, 7, 14), but is torch.Size([1, 1, 7, 7])
```
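A minimal sketch (plain Python with stand-in classes, not the real torch API) of the workaround mentioned above: snapshot each sub-module's `training` flag before switching to eval mode, then restore the snapshot instead of blindly calling `model.train()` on everything.

```python
# Stand-in for torch.nn.Module, just enough to show the flag-snapshot pattern.
class Module:
    def __init__(self, name, training=True):
        self.name, self.training = name, training

# Hypothetical model where one sub-module intentionally starts in eval mode.
modules = [Module("attn", training=True), Module("checkpoint_wrapper", training=False)]

saved_flags = {m.name: m.training for m in modules}   # snapshot before eval()

for m in modules:        # equivalent of model.eval()
    m.training = False

for m in modules:        # restore the snapshot instead of calling model.train()
    m.training = saved_flags[m.name]

print([m.training for m in modules])   # [True, False] — initial state preserved
```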
### Expected behavior
I would expect `model.train()` and `model.eval()` to be callable despite the presence of PEFT modules and/or gradient checkpointing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28023/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28023/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28022/comments | https://api.github.com/repos/huggingface/transformers/issues/28022/events | https://github.com/huggingface/transformers/issues/28022 | 2,040,774,659 | I_kwDOCUB6oc55o8AD | 28,022 | No effect of gradient_checkpointing when training llama-2 | {
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 15 | 2023-12-14T02:14:31 | 2024-01-27T08:03:37 | null | NONE | null | ### System Info
transformers == 4.35.2
pytorch == 2.1.1
### Who can help?
@ArthurZucker
Hello, I'm training Llama-2 with flash-attn 2 using torchrun. However, I found that enabling gradient_checkpointing doesn't save any GPU memory, and the training speed doesn't decrease either.
I suspect something is wrong with gradient_checkpointing. Could you please take a look at this issue?
I enable gradient_checkpointing in the training_args.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Continue training llama-2-7b-hf on any text data with batch=1, seq_len=1024, using flash-attn.
Run two configurations: one with gradient_checkpointing enabled and one without.
Compare the difference between the two runs.
### Expected behavior
Training speed decreases and GPU memory usage is reduced. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28022/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28021/comments | https://api.github.com/repos/huggingface/transformers/issues/28021/events | https://github.com/huggingface/transformers/issues/28021 | 2,040,645,353 | I_kwDOCUB6oc55ocbp | 28,021 | Incorrect router probability calculation | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-12-13T23:59:17 | 2023-12-19T16:31:55 | 2023-12-19T16:31:55 | NONE | null | ### System Info
transformers version 4.36.0
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I think load_balancing_loss_func in modeling_mixtral creates router_prob_per_group_and_expert incorrectly
https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/models/mixtral/modeling_mixtral.py#L120
It tries to multiply a tensor of shape (batch_size * num_hidden_layers, num_experts) by one of shape (batch_size * num_hidden_layers, topk, 1):
`torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)`
routing_weights should instead likely be created from gate_logits, which ensures it has the correct size:
`routing_weights = gate_logits.softmax(dim=-1)`
The unsqueeze(-1) is necessary with this. Also router_prob_per_group_and_expert should average over axis=-2
`router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-2)`
This follows the previous implementation in modeling_switch_transformers
https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L91
### Expected behavior
Something like this would fix it
```
def router_loss_func_test(gate_logits: torch.Tensor, top_k=2) -> float:
if gate_logits is None:
return 0
if isinstance(gate_logits, tuple):
# cat along the layers?
gate_logits = torch.cat(gate_logits, dim=0) # batch_size * num_hidden_layers, sequence_length, num_experts
num_experts = gate_logits.shape[-1]
_, expert_indicies = torch.topk(gate_logits, top_k, dim=-1) # this is done so you don't need to pass expert_indicies
routing_probs = gate_logits.softmax(dim=-1) # routing probs
if expert_indicies.dtype != torch.int64: # cast the expert indices to int64, otherwise one-hot encoding will fail
expert_indicies = expert_indicies.to(torch.int64)
if len(expert_indicies.shape) == 2:
expert_indicies = expert_indicies.unsqueeze(2)
expert_mask = torch.nn.functional.one_hot(expert_indicies, num_experts)
# For a given token, determine if it was routed to a given expert.
expert_mask = torch.max(expert_mask, axis=-2).values
expert_mask = expert_mask.to(torch.float32) # cast to float32 otherwise mean will fail
tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2)
router_prob_per_group_and_expert = torch.mean(routing_probs, axis=-2)
loss = torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert) * (num_experts**2)
return loss
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28021/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28020/comments | https://api.github.com/repos/huggingface/transformers/issues/28020/events | https://github.com/huggingface/transformers/pull/28020 | 2,040,514,024 | PR_kwDOCUB6oc5h8KMu | 28,020 | Fix wrong examples in llava usage. | {
"login": "Lyken17",
"id": 7783214,
"node_id": "MDQ6VXNlcjc3ODMyMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7783214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lyken17",
"html_url": "https://github.com/Lyken17",
"followers_url": "https://api.github.com/users/Lyken17/followers",
"following_url": "https://api.github.com/users/Lyken17/following{/other_user}",
"gists_url": "https://api.github.com/users/Lyken17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lyken17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lyken17/subscriptions",
"organizations_url": "https://api.github.com/users/Lyken17/orgs",
"repos_url": "https://api.github.com/users/Lyken17/repos",
"events_url": "https://api.github.com/users/Lyken17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lyken17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T21:44:26 | 2023-12-18T00:27:24 | 2023-12-15T17:09:51 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR aims to fix the demo code of `LlavaForConditionalGeneration`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28020/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28020",
"html_url": "https://github.com/huggingface/transformers/pull/28020",
"diff_url": "https://github.com/huggingface/transformers/pull/28020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28020.patch",
"merged_at": "2023-12-15T17:09:51"
} |
https://api.github.com/repos/huggingface/transformers/issues/28019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28019/comments | https://api.github.com/repos/huggingface/transformers/issues/28019/events | https://github.com/huggingface/transformers/pull/28019 | 2,040,461,517 | PR_kwDOCUB6oc5h7-qh | 28,019 | Fix languages covered by M4Tv2 | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T21:00:59 | 2023-12-14T14:43:44 | 2023-12-14T14:43:44 | COLLABORATOR | null | # What does this PR do?
Currently, M4Tv2's into-text tasks (ASR, S2TT, T2TT) do not work for languages outside of the 36 for which audio output is supported. This is linked to a check at the beginning of the model's generate. The model previously verified whether `tgt_lang` was in a set of dictionaries, independently of the output modality. This PR aims to fix that.
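A hedged sketch (plain Python, hypothetical names and toy language lists — not the real M4Tv2 code) of the modality-dependent check described above: restrict `tgt_lang` to the audio-supported subset only when the requested output is speech.

```python
# Illustrative language sets only; the real model supports far more languages.
TEXT_LANGS = {"eng", "fra", "deu", "swh"}   # text output supported
SPEECH_LANGS = {"eng", "fra"}               # audio output supported for fewer langs

def check_tgt_lang(tgt_lang, generate_speech):
    """Pick the supported set based on the requested output modality."""
    supported = SPEECH_LANGS if generate_speech else TEXT_LANGS
    if tgt_lang not in supported:
        raise ValueError(f"{tgt_lang!r} is not supported for this output modality")
    return True

print(check_tgt_lang("swh", generate_speech=False))  # True: text-only output works
```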
I've added a test to make sure it works.
cc @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28019/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28019",
"html_url": "https://github.com/huggingface/transformers/pull/28019",
"diff_url": "https://github.com/huggingface/transformers/pull/28019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28019.patch",
"merged_at": "2023-12-14T14:43:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28018/comments | https://api.github.com/repos/huggingface/transformers/issues/28018/events | https://github.com/huggingface/transformers/pull/28018 | 2,040,430,141 | PR_kwDOCUB6oc5h73xE | 28,018 | [GPTQ] Fix test | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T20:35:55 | 2024-01-15T16:22:55 | 2024-01-15T16:22:55 | MEMBER | null | # What does this PR do ?
This PR fixes failing tests related to GPTQ quantization. The breaking tests are related to modification on optimum side and OOM from the new runner. I've also replaced for a smaller model. related optimum [PR](https://github.com/huggingface/optimum/pull/1574/files#) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28018/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28018",
"html_url": "https://github.com/huggingface/transformers/pull/28018",
"diff_url": "https://github.com/huggingface/transformers/pull/28018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28018.patch",
"merged_at": "2024-01-15T16:22:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/28017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28017/comments | https://api.github.com/repos/huggingface/transformers/issues/28017/events | https://github.com/huggingface/transformers/pull/28017 | 2,040,398,233 | PR_kwDOCUB6oc5h7wxc | 28,017 | [`chore`] Update warning text, a word was missing | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-13T20:12:48 | 2024-01-15T09:08:06 | 2024-01-15T09:08:03 | MEMBER | null | Hello!
# What does this PR do?
* Updates a warning text: "lead" was missing.
I missed this in the original PR, apologies.
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
@ArthurZucker
cc: @stas00 Thanks for pointing this out here: https://github.com/huggingface/transformers/pull/26681#discussion_r1425760204
- Tom Aarsen | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28017/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28017/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28017",
"html_url": "https://github.com/huggingface/transformers/pull/28017",
"diff_url": "https://github.com/huggingface/transformers/pull/28017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28017.patch",
"merged_at": "2024-01-15T09:08:03"
} |
https://api.github.com/repos/huggingface/transformers/issues/28016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28016/comments | https://api.github.com/repos/huggingface/transformers/issues/28016/events | https://github.com/huggingface/transformers/pull/28016 | 2,040,375,252 | PR_kwDOCUB6oc5h7r4k | 28,016 | [docs] MPS | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-13T19:56:29 | 2023-12-15T21:17:34 | 2023-12-15T21:17:30 | MEMBER | null | As a part of a larger effort to clean up the `Trainer` API docs in #27986, this PR moves the [Trainer for accelerated PyTorch training on Mac](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-trainer-for-accelerated-pytorch-training-on-mac) section to the currently empty [Training on Specialized Hardware](https://huggingface.co/docs/transformers/main/en/perf_train_special) page.
Other updates include rewriting it a bit so it doesn't sound like it's copied directly from the blog post and removing the link to the paywalled article for setup 🙂 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28016/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28016",
"html_url": "https://github.com/huggingface/transformers/pull/28016",
"diff_url": "https://github.com/huggingface/transformers/pull/28016.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28016.patch",
"merged_at": "2023-12-15T21:17:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28015/comments | https://api.github.com/repos/huggingface/transformers/issues/28015/events | https://github.com/huggingface/transformers/issues/28015 | 2,040,353,985 | I_kwDOCUB6oc55nVTB | 28,015 | Race condition while saving the checkpoint of the model | {
"login": "upperwal",
"id": 5246435,
"node_id": "MDQ6VXNlcjUyNDY0MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5246435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upperwal",
"html_url": "https://github.com/upperwal",
"followers_url": "https://api.github.com/users/upperwal/followers",
"following_url": "https://api.github.com/users/upperwal/following{/other_user}",
"gists_url": "https://api.github.com/users/upperwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/upperwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/upperwal/subscriptions",
"organizations_url": "https://api.github.com/users/upperwal/orgs",
"repos_url": "https://api.github.com/users/upperwal/repos",
"events_url": "https://api.github.com/users/upperwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/upperwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T19:41:41 | 2023-12-13T20:07:19 | 2023-12-13T20:07:19 | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help?
@muellerz @pacman100
### Error
There is a race condition at `Trainer > _maybe_log_save_evaluate` in a `multi_gpu` setup with `DDP`. Race condition happens because `self.control.should_save` allows all nodes to enter `self._save_checkpoint()`.
All nodes (even non-root nodes) enter `self._save_checkpoint()` and creates `staging_output_dir` if not available to save the random number generator state using `_save_rng_state`. Then eventually one of these nodes reaches [`if staging_output_dir != output_dir`](https://github.com/huggingface/transformers/blob/ec43d6870aa1afb42a6d2b1b0a03743d3f9b3ce6/src/transformers/trainer.py#L2385) and renames the `staging_output_dir` to `output_dir`. Any node including the root node trying to save the model state in `staging_output_dir` will fail as that directory has been moved.
https://github.com/huggingface/transformers/blob/ec43d6870aa1afb42a6d2b1b0a03743d3f9b3ce6/src/transformers/trainer.py#L2276C1-L2278
```
Traceback (most recent call last):
File "...py", line 123, in <module>
main()
File "....py", line 119, in main
trainer.train(resume_from_checkpoint=ckpt_dir)
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 1914, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2350, in _save_checkpoint
self.save_model(staging_output_dir, _internal_call=True)
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2837, in save_model
self._save(output_dir)
File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2893, in _save
safetensors.torch.save_file(state_dict, os.path.join(output_dir, SAFE_WEIGHTS_NAME))
File ".../lib/python3.11/site-packages/safetensors/torch.py", line 281, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
RuntimeError: Parent directory ./tmp-checkpoint-500 does not exist.
super().__init__(torch._C.PyTorchFileWriter(str(name)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Fix
Only global root process should rename the staging directory.
```py
if self.args.process_index == 0 and staging_output_dir != output_dir:
os.rename(staging_output_dir, output_dir)
```
This will still have a race condition in case `save_on_each_node == True`, as the root process might rename the directory while some non-root node is saving state in `staging_output_dir`. Ideally all processes should sync before renaming, but that will halt some processes. Better to write to the `output_dir` directly.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Setup the `Trainer` in DDP with multi-node
2. Enable saving checkpoints at some intervals
3. Train
### Expected behavior
Should see the following file in the checkpoint directory without `RuntimeError: Parent directory ./tmp-checkpoint-500 does not exist` error
1. model.safetensors
2. rng_state_0.pth
3. rng_state_1.pth
4. rng_state_XXX.pth
5. scheduler.pt
6. tokenizer.json
7. trainer_state.json
8. optimizer.pt
9. special_tokens_map.json
10. tokenizer_config.json
11. training_args.bin | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28015/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28014/comments | https://api.github.com/repos/huggingface/transformers/issues/28014/events | https://github.com/huggingface/transformers/pull/28014 | 2,040,315,855 | PR_kwDOCUB6oc5h7e7a | 28,014 | Fixed spelling error in T5 tokenizer warning message (s/thouroughly/t… | {
"login": "jeddobson",
"id": 11461294,
"node_id": "MDQ6VXNlcjExNDYxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/11461294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeddobson",
"html_url": "https://github.com/jeddobson",
"followers_url": "https://api.github.com/users/jeddobson/followers",
"following_url": "https://api.github.com/users/jeddobson/following{/other_user}",
"gists_url": "https://api.github.com/users/jeddobson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeddobson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeddobson/subscriptions",
"organizations_url": "https://api.github.com/users/jeddobson/orgs",
"repos_url": "https://api.github.com/users/jeddobson/repos",
"events_url": "https://api.github.com/users/jeddobson/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeddobson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T19:12:51 | 2024-01-05T17:07:02 | 2023-12-14T14:52:03 | CONTRIBUTOR | null | # Spelling correction
This is a simple word change for a warning message generated by the T5 tokenizer ('src/transformers/models/t5/tokenization_t5.py') that appeared when converting LLAMA weights to HF format.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28014/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28014",
"html_url": "https://github.com/huggingface/transformers/pull/28014",
"diff_url": "https://github.com/huggingface/transformers/pull/28014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28014.patch",
"merged_at": "2023-12-14T14:52:03"
} |
https://api.github.com/repos/huggingface/transformers/issues/28013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28013/comments | https://api.github.com/repos/huggingface/transformers/issues/28013/events | https://github.com/huggingface/transformers/issues/28013 | 2,040,312,392 | I_kwDOCUB6oc55nLJI | 28,013 | What's the purpose of Pad to 64 in LLaVA | {
"login": "ShoufaChen",
"id": 28682908,
"node_id": "MDQ6VXNlcjI4NjgyOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/28682908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShoufaChen",
"html_url": "https://github.com/ShoufaChen",
"followers_url": "https://api.github.com/users/ShoufaChen/followers",
"following_url": "https://api.github.com/users/ShoufaChen/following{/other_user}",
"gists_url": "https://api.github.com/users/ShoufaChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShoufaChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShoufaChen/subscriptions",
"organizations_url": "https://api.github.com/users/ShoufaChen/orgs",
"repos_url": "https://api.github.com/users/ShoufaChen/repos",
"events_url": "https://api.github.com/users/ShoufaChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShoufaChen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-13T19:10:15 | 2023-12-16T18:50:11 | 2023-12-16T18:04:27 | NONE | null | Hello @younesbelkada,
Thanks for your awesome work that integrates LLaVA to transformers repo.
Would you mind providing more details about the padding tokenizer to 64 here?
https://github.com/huggingface/transformers/blob/fe44b1f1a974139cd32a8884a63686425283b07c/src/transformers/models/llava/convert_llava_weights_to_hf.py#L71-L73
What's the advantage of 64? I thought 2 is enough?
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28013/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28012/comments | https://api.github.com/repos/huggingface/transformers/issues/28012/events | https://github.com/huggingface/transformers/pull/28012 | 2,040,227,416 | PR_kwDOCUB6oc5h7LgK | 28,012 | [Flax BERT] Update deprecated 'split' method | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-13T18:07:37 | 2023-12-15T10:57:23 | 2023-12-15T10:57:19 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27644. JAX Array split method was deprecated in JAX 0.4.5: https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-5-mar-2-2023
This PR updates the four uses in the codebase to use the (recommended) `jnp.split` replacement.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28012/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28012",
"html_url": "https://github.com/huggingface/transformers/pull/28012",
"diff_url": "https://github.com/huggingface/transformers/pull/28012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28012.patch",
"merged_at": "2023-12-15T10:57:19"
} |
https://api.github.com/repos/huggingface/transformers/issues/28011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28011/comments | https://api.github.com/repos/huggingface/transformers/issues/28011/events | https://github.com/huggingface/transformers/pull/28011 | 2,040,213,311 | PR_kwDOCUB6oc5h7Ib7 | 28,011 | [`Whisper`] nit | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T17:57:37 | 2024-01-02T17:09:34 | 2023-12-14T05:51:04 | COLLABORATOR | null | # What does this PR do?
Was getting these strange warnings:
```python
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
```
with processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28011/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28011/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28011",
"html_url": "https://github.com/huggingface/transformers/pull/28011",
"diff_url": "https://github.com/huggingface/transformers/pull/28011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28011.patch",
"merged_at": "2023-12-14T05:51:04"
} |
https://api.github.com/repos/huggingface/transformers/issues/28010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28010/comments | https://api.github.com/repos/huggingface/transformers/issues/28010/events | https://github.com/huggingface/transformers/pull/28010 | 2,040,127,379 | PR_kwDOCUB6oc5h61ri | 28,010 | [`Core tokenization`] `add_dummy_prefix_space` option to help with latest issues | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-12-13T16:59:44 | 2024-01-21T19:24:42 | null | COLLABORATOR | null | # What does this PR do?
Allows users to use `tokenizer.tokenize` controlling the addition of prefix space. Let's also update fast! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28010/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28010",
"html_url": "https://github.com/huggingface/transformers/pull/28010",
"diff_url": "https://github.com/huggingface/transformers/pull/28010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28010.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28009/comments | https://api.github.com/repos/huggingface/transformers/issues/28009/events | https://github.com/huggingface/transformers/pull/28009 | 2,040,069,367 | PR_kwDOCUB6oc5h6o8t | 28,009 | Fix bug with rotating checkpoints | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 11 | 2023-12-13T16:26:52 | 2024-01-12T06:06:20 | 2023-12-13T17:17:31 | CONTRIBUTOR | null | # What does this PR do?
There was a bug introduced in https://github.com/huggingface/transformers/pull/27820 where if we were on multi-GPU systems we would hit a race condition after saving on the processes because we cannot rename the staging directory multiple times. This PR ensures that it only happens on the main process.
Fixes # (issue)
Fixes https://github.com/huggingface/transformers/issues/27925
Alternative to https://github.com/huggingface/transformers/pull/27929
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
I would recommend a patch release as this is fully blocking users on multi-GPU after the last release. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28009/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28009/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28009",
"html_url": "https://github.com/huggingface/transformers/pull/28009",
"diff_url": "https://github.com/huggingface/transformers/pull/28009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28009.patch",
"merged_at": "2023-12-13T17:17:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28008/comments | https://api.github.com/repos/huggingface/transformers/issues/28008/events | https://github.com/huggingface/transformers/issues/28008 | 2,040,014,898 | I_kwDOCUB6oc55mCgy | 28,008 | Supoport batch image processing | {
"login": "wcy1122",
"id": 31536861,
"node_id": "MDQ6VXNlcjMxNTM2ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/31536861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wcy1122",
"html_url": "https://github.com/wcy1122",
"followers_url": "https://api.github.com/users/wcy1122/followers",
"following_url": "https://api.github.com/users/wcy1122/following{/other_user}",
"gists_url": "https://api.github.com/users/wcy1122/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wcy1122/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wcy1122/subscriptions",
"organizations_url": "https://api.github.com/users/wcy1122/orgs",
"repos_url": "https://api.github.com/users/wcy1122/repos",
"events_url": "https://api.github.com/users/wcy1122/events{/privacy}",
"received_events_url": "https://api.github.com/users/wcy1122/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-13T15:59:41 | 2023-12-21T16:29:11 | 2023-12-21T16:29:11 | NONE | null | ### Feature request
A faster image processing which supports batch image processing, especially for input data like video.
### Motivation
Image processing is too slow when the number of images is large.
### Your contribution
Image processors such as CLIPImageProcessor process each image sequentially, which makes them very slow. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28008/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28007/comments | https://api.github.com/repos/huggingface/transformers/issues/28007/events | https://github.com/huggingface/transformers/issues/28007 | 2,039,893,753 | I_kwDOCUB6oc55lk75 | 28,007 | Can't do word timestamps and beam search at the same time (whisper) | {
"login": "Snarkdoof",
"id": 5370689,
"node_id": "MDQ6VXNlcjUzNzA2ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5370689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Snarkdoof",
"html_url": "https://github.com/Snarkdoof",
"followers_url": "https://api.github.com/users/Snarkdoof/followers",
"following_url": "https://api.github.com/users/Snarkdoof/following{/other_user}",
"gists_url": "https://api.github.com/users/Snarkdoof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Snarkdoof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Snarkdoof/subscriptions",
"organizations_url": "https://api.github.com/users/Snarkdoof/orgs",
"repos_url": "https://api.github.com/users/Snarkdoof/repos",
"events_url": "https://api.github.com/users/Snarkdoof/events{/privacy}",
"received_events_url": "https://api.github.com/users/Snarkdoof/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-13T15:00:38 | 2023-12-23T20:49:41 | 2023-12-23T20:49:41 | NONE | null | ### System Info
Tested on python 3.8.10, transformers 4.36.0.dev0
### Who can help?
@ArthurZucker @sanchit-gandhi (suggested by peregilk)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
import torch
model = "NbAiLabBeta/nb-whisper-base"
device = "cuda:0"
p = pipeline("automatic-speech-recognition",
model,
torch_dtype=torch.float16,
device=device,
return_timestamps="word")
args = {"language": "norwegian", "task": "transcribe", "num_beams": 3}
outputs = p(audiofile,
chunk_length_s=28,
batch_size=6,
generate_kwargs=args)
```
Fails with:
> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 357, in __call__
return super().__call__(inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1132, in __call__
return next(
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 266, in __next__
processed = self.infer(next(self.iterator), **self.params)
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1046, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 552, in _forward
generate_kwargs["num_frames"] = stride[0] // self.feature_extractor.hop_length
TypeError: unsupported operand type(s) for //: 'tuple' and 'int'
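The failing expression reduces to floor-dividing a tuple by an int: with beam search the pipeline apparently ends up with a nested stride, so `stride[0]` is a tuple rather than a number. A minimal illustration (the stride values below are assumptions, not the pipeline's actual internals):

```python
# Hypothetical illustration of the failing line in _forward:
# stride[0] is a (chunk_len, stride_left, stride_right) tuple, not an int
hop_length = 160
stride = [(480000, 0, 16000)]  # assumed nested shape under beam search
try:
    num_frames = stride[0] // hop_length  # the expression that raises
except TypeError as err:
    message = str(err)
print(message)  # unsupported operand type(s) for //: 'tuple' and 'int'
```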
It works with *either* `num_beams=1` *or* `return_timestamps=True/False`, but not with both combined.
### Expected behavior
It should return processed data. :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28007/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28007/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28006/comments | https://api.github.com/repos/huggingface/transformers/issues/28006/events | https://github.com/huggingface/transformers/pull/28006 | 2,039,889,300 | PR_kwDOCUB6oc5h6B8a | 28,006 | Clearer error for SDPA when explicitly requested | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T14:58:23 | 2024-01-16T16:10:45 | 2024-01-16T16:10:44 | COLLABORATOR | null | As per title, partially fixes https://github.com/huggingface/transformers/issues/28003. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28006/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28006",
"html_url": "https://github.com/huggingface/transformers/pull/28006",
"diff_url": "https://github.com/huggingface/transformers/pull/28006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28006.patch",
"merged_at": "2024-01-16T16:10:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/28005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28005/comments | https://api.github.com/repos/huggingface/transformers/issues/28005/events | https://github.com/huggingface/transformers/issues/28005 | 2,039,623,205 | I_kwDOCUB6oc55ki4l | 28,005 | Open to contribution: adding `torch.nn.functional.scaled_dot_product_attention` support for more architectures | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 6126880899,
"node_id": "LA_kwDOCUB6oc8AAAABbTDIgw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/contributions-welcome",
"name": "contributions-welcome",
"color": "F99E09",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 10 | 2023-12-13T12:35:52 | 2024-01-31T01:40:28 | null | COLLABORATOR | null | ### Feature request
In [`Transformers 4.36`](https://github.com/huggingface/transformers/releases/tag/v4.36.0), we started adding native support of [torch.nn.functional.scaled_dot_product_attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA), enabled by default in Transformers: https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention
SDPA can dispatch to memory-efficient attention or flash attention on supported GPUs (currently NVIDIA-only), and even to a flash-attention-based kernel on [Intel CPUs](https://pytorch.org/blog/new-features-for-ai/#flash-attention-based-scaled-dot-product-algorithm-for-cpu).
For the record, here's a benchmark on some currently supported models:
**[Training benchmark](https://gist.github.com/fxmarty/7e75cc3942d6974e4849093ebea0a331), run on A100-SXM4-80GB.**
| Model | Batch size | Sequence length | Time per batch (`"eager"`, s) | Time per batch (`"sdpa"`, s) | **Speedup** | Peak memory (`"eager"`, MB) | Peak memory (`"sdpa"`, MB) | **Memory savings** |
|-----------|------------|-----------------|-------------------------------|------------------------------|-------------|-----------------------------|----------------------------|-----------------------|
| llama2 7b | 4 | 1024 | 1.065 | 0.90 | **19.4%** | 73878.28 | 45977.81 | **60.7%** |
| llama2 7b | 4 | 2048 | OOM | 1.87 | / | OOM | 78394.58 | **SDPA does not OOM** |
| llama2 7b | 1 | 2048 | 0.64 | 0.48 | **32.0%** | 55557.01 | 29795.63 | **86.4%** |
| llama2 7b | 1 | 3072 | OOM | 0.75 | / | OOM | 37916.08 | **SDPA does not OOM** |
| llama2 7b | 1 | 4096 | OOM | 1.03 | / | OOM | 46028.14 | **SDPA does not OOM** |
| llama2 7b | 2 | 4096 | OOM | 2.05 | / | OOM | 78428.14 | **SDPA does not OOM** |
**[Inference benchmark](https://gist.github.com/fxmarty/5113e4304fbdd38c9c3702ce44683f6a), run on A100-SXM4-80GB.**
| Model | Batch size | Prompt length | Num new tokens | Per token latency `"eager"` (ms) | Per token latency `"sdpa"` (ms) | **Speedup** |
|------------------|------------|---------------|----------------|----------------------------------|---------------------------------|-------------|
| llama2 13b | 1 | 1024 | 1 (prefill) | 178.66 | 159.36 | **12.11%** |
| llama2 13b | 1 | 100 | 100 | 40.35 | 37.62 | **7.28%** |
| llama2 13b | 8 | 100 | 100 | 40.55 | 38.06 | **6.53%** |
| Whisper v3 large | 1 | / | 62 | 20.05 | 18.90 | **6.10%** |
| Whisper v3 large | 8 | / | 77 | 25.42 | 24.77 | **2.59%** |
| Whisper v3 large | 16 | / | 77 | 28.51 | 26.32 | **8.34%** |
Previously, we had a partial support of SDPA in [Optimum BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) but we are now looking to slowly deprecate it in favor of upstream support of SDPA directly in Transformers.
Here are the architectures for which support has been requested:
- [ ] Codegen (https://github.com/huggingface/optimum/issues/1050)
- [ ] LLAVA (https://github.com/huggingface/optimum/issues/1592)
- [ ] Marian (https://github.com/huggingface/optimum/issues/1142)
- [ ] Mistral (https://github.com/huggingface/optimum/issues/1553)
- [ ] LongT5 (https://github.com/huggingface/optimum/issues/1506)
- [ ] ViT (https://github.com/huggingface/optimum/issues/1553)
The integration could take inspiration from https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/decoder_models.py & https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/attention.py
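When porting an attention class, it helps to have a reference for the computation SDPA implements, namely `softmax(QKᵀ/√d)V`. A NumPy sketch (mask convention assumed here: `True` means "attend"; no dropout) that an `XxxSdpaAttention` implementation can be sanity-checked against:

```python
import numpy as np

def sdpa_reference(q, k, v, mask=None):
    """NumPy reference for scaled dot-product attention (no dropout).
    Shapes: q (..., seq_q, d), k/v (..., seq_k, d); mask True = attend."""
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # mask out before softmax
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

# Zero queries/keys give uniform attention, so the output is the mean of v
q = np.zeros((1, 2, 4)); k = np.zeros((1, 3, 4))
v = np.arange(12, dtype=float).reshape(1, 3, 4)
out = sdpa_reference(q, k, v)
print(out.shape)  # (1, 2, 4)
```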
### Motivation
Faster training & inference, lower memory requirement
### Your contribution
I may work on some at some point, but contributions are most welcome.
You should refer to https://github.com/huggingface/transformers/pull/26572 to add SDPA support for a model, roughly following these steps:
* Create a `XxxSdpaAttention` class inheriting from `XxxAttention` and implement the attention logic using SDPA
* Use `_prepare_4d_causal_attention_mask_for_sdpa` instead of `_prepare_4d_causal_attention_mask` for SDPA
* Use `_prepare_4d_attention_mask_for_sdpa` instead of `_prepare_4d_attention_mask` for SDPA
* Add `_supports_sdpa = True` to `XxxPreTrainedModel`
* Add `"sdpa"` key to `XXX_ATTENTION_CLASSES` in the model modeling file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28005/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28005/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28004/comments | https://api.github.com/repos/huggingface/transformers/issues/28004/events | https://github.com/huggingface/transformers/issues/28004 | 2,039,618,109 | I_kwDOCUB6oc55kho9 | 28,004 | Error in loading reduced multilingual layoutxlm model: RuntimeError: Error(s) in loading state_dict for LayoutLMv2ForTokenClassification: | {
"login": "Merchaoui",
"id": 80455763,
"node_id": "MDQ6VXNlcjgwNDU1NzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/80455763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Merchaoui",
"html_url": "https://github.com/Merchaoui",
"followers_url": "https://api.github.com/users/Merchaoui/followers",
"following_url": "https://api.github.com/users/Merchaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/Merchaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Merchaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Merchaoui/subscriptions",
"organizations_url": "https://api.github.com/users/Merchaoui/orgs",
"repos_url": "https://api.github.com/users/Merchaoui/repos",
"events_url": "https://api.github.com/users/Merchaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/Merchaoui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-13T12:32:46 | 2024-01-21T08:03:50 | 2024-01-21T08:03:50 | NONE | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): 2.10.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = LayoutLMv2ForTokenClassification.from_pretrained("./layoutxlm_reduced/microsoft/layoutxlm-base",
num_labels=len(labels))
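As background for the errors described below under "Expected behavior": the size mismatch is the classification head saved with 2 labels versus the 6 requested, and the later IndexError during training is typically an input id that exceeds the reduced embedding table. A plain-Python sketch of the latter (all numbers hypothetical):

```python
# After vocabulary reduction the embedding table shrinks, so any id produced
# with the *original* token mapping overflows it, surfacing in PyTorch as
# "IndexError: index out of range in self"
reduced_vocab_size = 30000
embedding_table = list(range(reduced_vocab_size))  # stand-in for the weight rows
old_token_id = 250001  # valid only in the original ~250k-token XLM-R vocab
try:
    row = embedding_table[old_token_id]
except IndexError:
    error = "index out of range"
print(error)
```

The usual remedy is to remap input ids with the reduced tokenizer (or resize the embeddings consistently) so every id stays below the new table size.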
### Expected behavior
I followed the official tutorial to reduce the size of the model because I only need the English and Spanish languages for layoutxlm (https://medium.com/@coding-otter/reduce-your-transformers-model-size-by-removing-unwanted-tokens-and-word-embeddings-eec08166d2f9).
When I load the reduced model I get this error: "RuntimeError: Error(s) in loading state_dict for LayoutLMv2ForTokenClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([6, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([6]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method."
and when I set `ignore_mismatched_sizes=True`, I get the error "IndexError: index out of range in self" when it starts training | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28004/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28003/comments | https://api.github.com/repos/huggingface/transformers/issues/28003/events | https://github.com/huggingface/transformers/issues/28003 | 2,039,616,571 | I_kwDOCUB6oc55khQ7 | 28,003 | (LLama-2) (4.36.0) TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention | {
"login": "VasilGeorgiev39",
"id": 149842188,
"node_id": "U_kgDOCO5pDA",
"avatar_url": "https://avatars.githubusercontent.com/u/149842188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VasilGeorgiev39",
"html_url": "https://github.com/VasilGeorgiev39",
"followers_url": "https://api.github.com/users/VasilGeorgiev39/followers",
"following_url": "https://api.github.com/users/VasilGeorgiev39/following{/other_user}",
"gists_url": "https://api.github.com/users/VasilGeorgiev39/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VasilGeorgiev39/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VasilGeorgiev39/subscriptions",
"organizations_url": "https://api.github.com/users/VasilGeorgiev39/orgs",
"repos_url": "https://api.github.com/users/VasilGeorgiev39/repos",
"events_url": "https://api.github.com/users/VasilGeorgiev39/events{/privacy}",
"received_events_url": "https://api.github.com/users/VasilGeorgiev39/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-12-13T12:31:49 | 2024-01-17T09:58:43 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
@fxmarty
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers
import tensor_parallel as tp
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
modelp = tp.tensor_parallel(model)
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/root/code/cot-unfaithfulness/test-hf.py in line 8
[5](file:///root/code/cot-unfaithfulness/test-hf.py?line=4) tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
[6](file:///root/code/cot-unfaithfulness/test-hf.py?line=5) model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
----> [8](file:///root/code/cot-unfaithfulness/test-hf.py?line=7) modelp = tp.tensor_parallel(model)
File /opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py:61, in tensor_parallel(module, device_ids, tensor_parallel_config, distributed, sharded, sharded_param_names, **kwargs)
[59](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=58) else:
[60](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=59) if isinstance(module, PreTrainedModel):
---> [61](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=60) return TensorParallelPreTrainedModel(
[62](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=61) module,
[63](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=62) device_ids=device_ids,
[64](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=63) tensor_parallel_config=tensor_parallel_config,
[65](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=64) distributed=distributed,
[66](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=65) sharded=sharded,
[67](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=66) sharded_param_names=sharded_param_names,
[68](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=67) **kwargs,
[69](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=68) )
[70](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=69) else:
[71](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=70) return TensorParallel(
[72](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=71) module,
[73](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=72) device_ids=device_ids,
(...)
[78](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=77) **kwargs,
[79](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=78) )
File /opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py:47, in TensorParallelPreTrainedModel.__init__(self, module, device_ids, output_device, output_device_index, tensor_parallel_config, **kwargs)
[38](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=37) def __init__(
[39](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=38) self,
[40](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=39) module: PreTrainedModel,
(...)
[45](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=44) **kwargs,
[46](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=45) ):
---> [47](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=46) super().__init__(module.config) # Temporary empty config. Gets replaced in from_pretrained
[49](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=48) if hasattr(module, "_hf_hook"):
[50](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=49) from accelerate.hooks import remove_hook_from_module
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1190, in PreTrainedModel.__init__(self, config, *inputs, **kwargs)
[1184](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1183) raise ValueError(
[1185](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1184) f"Parameter config in `{self.__class__.__name__}(config)` should be an instance of class "
[1186](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1185) "`PretrainedConfig`. To create a model from a pretrained model use "
[1187](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1186) f"`model = {self.__class__.__name__}.from_pretrained(PRETRAINED_MODEL_NAME)`"
[1188](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1187) )
[1189](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1188) # Save config and origin of the pretrained weights if given in model
-> [1190](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1189) config = self._autoset_attn_implementation(
[1191](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1190) config, torch_dtype=torch.get_default_dtype(), check_device_map=False
[1192](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1191) )
[1193](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1192) self.config = config
[1195](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1194) self.name_or_path = config.name_or_path
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1311, in PreTrainedModel._autoset_attn_implementation(cls, config, use_flash_attention_2, torch_dtype, device_map, check_device_map)
[1302](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1301) cls._check_and_enable_flash_attn_2(
[1303](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1302) config,
[1304](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1303) torch_dtype=torch_dtype,
(...)
[1307](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1306) check_device_map=check_device_map,
[1308](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1307) )
[1309](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1308) elif requested_attn_implementation in [None, "sdpa"]:
[1310](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1309) # use_flash_attention_2 takes priority over SDPA, hence SDPA treated in this elif.
-> [1311](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1310) config = cls._check_and_enable_sdpa(
[1312](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1311) config, hard_check_only=False if requested_attn_implementation is None else True
[1313](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1312) )
[1314](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1313) else:
[1315](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1314) config._attn_implementation = "eager"
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1464, in PreTrainedModel._check_and_enable_sdpa(cls, config, hard_check_only)
[1462](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1461) if hard_check_only:
[1463](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1462) if not cls._supports_sdpa:
-> [1464](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1463) raise ValueError(
[1465](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1464) f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to "
[1466](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1465) "request support for this architecture: https://github.com/huggingface/transformers/issues/new"
[1467](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1466) )
[1468](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1467) if not is_torch_sdpa_available():
[1469](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1468) raise ImportError(
[1470](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1469) "PyTorch SDPA requirements in Transformers are not met. Please install torch>=2.1.1."
[1471](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1470) )
ValueError: TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
```
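The failure can be reduced to a class-attribute check: the SDPA opt-in flag is read from the *wrapper* class, which never sets it, even though the wrapped concrete model does. A stdlib-only sketch of that mechanism (class names are stand-ins, not the actual transformers/tensor_parallel classes):

```python
class PreTrainedBase:
    _supports_sdpa = False  # default: architectures must opt in explicitly

    @classmethod
    def _check_and_enable_sdpa(cls):
        if not cls._supports_sdpa:
            raise ValueError(f"{cls.__name__} does not support SDPA")

class LlamaLike(PreTrainedBase):
    _supports_sdpa = True  # the concrete architecture opts in

class TensorParallelWrapper(PreTrainedBase):
    pass  # the wrapper inherits the default False, so the check raises

LlamaLike._check_and_enable_sdpa()  # passes
try:
    TensorParallelWrapper._check_and_enable_sdpa()
except ValueError as err:
    failure = str(err)
print(failure)  # TensorParallelWrapper does not support SDPA
```

A wrapper that forwarded the check to the wrapped model's class (or copied its `_supports_sdpa` flag) would avoid this.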
### Expected behavior
Wrap the model using the tensor_parallel library https://github.com/BlackSamorez/tensor_parallel
This succeeds with transformers==4.35.2
The exception seems to be raised from this line added in 4.36 https://github.com/huggingface/transformers/commit/80377eb018c077dba434bc8e7912bcaed3a64d09#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1435 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28003/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28003/timeline | null | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28002/comments | https://api.github.com/repos/huggingface/transformers/issues/28002/events | https://github.com/huggingface/transformers/issues/28002 | 2,039,559,344 | I_kwDOCUB6oc55kTSw | 28,002 | Unhandled case when use_weighted_layer_sum and return_dict=True in WhisperForAudioClassification | {
"login": "ElsebaiyMohamed",
"id": 77920008,
"node_id": "MDQ6VXNlcjc3OTIwMDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/77920008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElsebaiyMohamed",
"html_url": "https://github.com/ElsebaiyMohamed",
"followers_url": "https://api.github.com/users/ElsebaiyMohamed/followers",
"following_url": "https://api.github.com/users/ElsebaiyMohamed/following{/other_user}",
"gists_url": "https://api.github.com/users/ElsebaiyMohamed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ElsebaiyMohamed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElsebaiyMohamed/subscriptions",
"organizations_url": "https://api.github.com/users/ElsebaiyMohamed/orgs",
"repos_url": "https://api.github.com/users/ElsebaiyMohamed/repos",
"events_url": "https://api.github.com/users/ElsebaiyMohamed/events{/privacy}",
"received_events_url": "https://api.github.com/users/ElsebaiyMohamed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-13T11:57:03 | 2024-01-18T16:41:46 | 2024-01-18T16:41:46 | NONE | null | @sanchit-gandhi
I use the WhisperForAudioClassification task and want to set `use_weighted_layer_sum=True`, but there is a problem when calling `forward`:
the encoder can return either a tuple or a dict (when `return_dict=True`), but the code path for `use_weighted_layer_sum=True` assumes the return value is a tuple only, so the line `hidden_states = torch.stack(encoder_outputs, dim=1)` raises an error if the encoder returns a dict. There is a workaround: use `return_dict=False`, but when the model is later used with `pipeline` it raises an error, because the pipeline assumes the model returns a dict, not a tuple. [Link to code with the problem](https://github.com/huggingface/transformers/blob/c7f076a00ee54f777b3d3322c91bc11489a47950/src/transformers/models/whisper/modeling_whisper.py#L2918C6-L2918C6)
```py
if self.config.use_weighted_layer_sum:
hidden_states = torch.stack(encoder_outputs, dim=1) # This line raise error when return_dict=True and use_weighted_layer_sum=True
norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
else:
hidden_states = encoder_outputs[0]
```
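A minimal sketch of a guard that would make the branch robust to both return types (helper name hypothetical; this is an illustration, not the actual upstream fix, and it assumes the tuple case already holds the per-layer states):

```python
from types import SimpleNamespace

def per_layer_states(encoder_outputs):
    """Normalize the encoder output to the tuple of per-layer hidden states."""
    if hasattr(encoder_outputs, "hidden_states"):  # ModelOutput / dict-like case
        return encoder_outputs.hidden_states
    return encoder_outputs  # already a plain tuple of layer outputs

tuple_out = ("layer0", "layer1", "layer2")
dict_out = SimpleNamespace(hidden_states=("layer0", "layer1", "layer2"))
print(per_layer_states(tuple_out) == per_layer_states(dict_out))  # True
```

With such a guard, `torch.stack(...)` would always receive the per-layer states regardless of `return_dict`.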
**Reproduce error**
```py
import torch
from transformers import WhisperForAudioClassification, AutoFeatureExtractor
from datasets import load_dataset
dataset = load_dataset('seba3y/speechocean762',)
dataset = dataset['train']
sampling_rate = dataset.features["audio"].sampling_rate
dataset = dataset.remove_columns(['utt_name', 'text', 'completeness', 'fluency', 'prosodic'])
feature_extractor = AutoFeatureExtractor.from_pretrained("seba3y/whisper-tiny")
model = WhisperForAudioClassification.from_pretrained("seba3y/whisper-tiny",
use_weighted_layer_sum=True,
return_dict=True)
# test if it works
inputs = feature_extractor(dataset['train'][3]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
print(predicted_label)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28002/timeline | null | completed | null | null |