url stringlengths 66 66 | repository_url stringclasses 1
value | labels_url stringlengths 80 80 | comments_url stringlengths 75 75 | events_url stringlengths 73 73 | html_url stringlengths 54 56 | id int64 2.03B 2.11B | node_id stringlengths 18 19 | number int64 27.9k 28.8k | title stringlengths 3 306 | user dict | labels list | state stringclasses 2
values | locked bool 1
class | assignee dict | assignees list | milestone null | comments int64 0 39 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4
values | active_lock_reason null | body stringlengths 19 42.4k ⌀ | reactions dict | timeline_url stringlengths 75 75 | performed_via_github_app null | state_reason stringclasses 3
values | draft bool 2
classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28001/comments | https://api.github.com/repos/huggingface/transformers/issues/28001/events | https://github.com/huggingface/transformers/issues/28001 | 2,039,508,919 | I_kwDOCUB6oc55kG-3 | 28,001 | UserWarning: Using `max_length`'s default (448) at Inference Endpoint deployment | {
"login": "SeeknnDestroy",
"id": 44926076,
"node_id": "MDQ6VXNlcjQ0OTI2MDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/44926076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeeknnDestroy",
"html_url": "https://github.com/SeeknnDestroy",
"followers_url": "https://api.github.com/users/SeeknnDestroy/followers",
"following_url": "https://api.github.com/users/SeeknnDestroy/following{/other_user}",
"gists_url": "https://api.github.com/users/SeeknnDestroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeeknnDestroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeeknnDestroy/subscriptions",
"organizations_url": "https://api.github.com/users/SeeknnDestroy/orgs",
"repos_url": "https://api.github.com/users/SeeknnDestroy/repos",
"events_url": "https://api.github.com/users/SeeknnDestroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeeknnDestroy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-13T11:25:45 | 2024-01-10T17:58:20 | null | NONE | null | ### System Info
**Inference Endpoints**
- **Model**: distil-whisper/distil-large-v2
- **Task**: automatic-speech-recognition
- **Revision**: c204f3c76ec464a0ab9bcfd19afa0add93f69983
- **Container type**: Default
- **Instance**: AWS, us-east-1
- **Instance Type**: GPU · Nvidia Tesla T4 · 1x GPU · 16 GB
### Who can help?
@sanchit-gandhi @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1-deploy the distil-whisper/distil-large-v2 model via Inference Endpoints with the system configuration above
2-run the reference code it provides:
```python
import requests
API_URL = "https://ovibb90ga7zdc5qa.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
"Authorization": "Bearer XXXXXX",
"Content-Type": "audio/flac"
}
def query(filename):
with open(filename, "rb") as f:
data = f.read()
response = requests.post(API_URL, headers=headers, data=data)
return response.json()
output = query("sample1.flac")
```
### Expected behavior
Ideally, the model should transcribe the full content of longer audio inputs without being constrained by the `max_length` parameter, especially given the warning about its upcoming deprecation. The warning I am getting is below:
### Full warning message
```
2023/12/13 14:22:36 ~ /opt/conda/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
```
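The 448-token cap matches Whisper's decoder context length, so simply raising the generation length does not solve long audio; long-form transcription is usually handled by chunking the audio into windows (the ASR pipeline exposes this via parameters like `chunk_length_s`). A minimal conceptual sketch of the windowing, with the 30-second window size as an assumption:

```python
# Conceptual sketch: split a long recording into fixed windows so each
# window's transcript stays within the decoder's 448-token limit.
def chunk_windows(duration_s, window_s=30.0):
    windows = []
    start = 0.0
    while start < duration_s:
        windows.append((start, min(start + window_s, duration_s)))
        start += window_s
    return windows

print(chunk_windows(75.0))  # [(0.0, 30.0), (30.0, 60.0), (60.0, 75.0)]
```

The real pipeline also overlaps windows (striding) and merges the partial transcripts, which this sketch omits.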
**Additional Context**:
We have a Hugging Face enterprise account as @safevideo. Using `distil-whisper/distil-large-v2` for ASR, we face a `UserWarning` regarding `max_length`, potentially affecting our ability to transcribe longer audio files. We are seeking advice on handling this, and ideally a way to get full transcriptions of longer audio at Inference Endpoints. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28001/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28000/comments | https://api.github.com/repos/huggingface/transformers/issues/28000/events | https://github.com/huggingface/transformers/issues/28000 | 2,039,461,943 | I_kwDOCUB6oc55j7g3 | 28,000 | XLM question-answering pipeline is flaky | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T11:00:33 | 2024-01-16T15:04:45 | 2024-01-16T15:04:45 | COLLABORATOR | null | ### System Info
transformers main; also tested on commits from the last three weeks, same issue
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
for i in range(50):
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
import torch
model = AutoModelForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-random-XLMModel")
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-XLMModel")
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
question = "Whats my name?"
context = "My Name is Philipp and I live in Nuremberg."
outputs = pipe(question, context)
```
sometimes fails with
```
Traceback (most recent call last):
File "<tmp 4>", line 23, in <module>
outputs = pipe(question, context)
File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/question_answering.py", line 393, in __call__
return super().__call__(examples[0], **kwargs)
File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/base.py", line 1132, in __call__
return next(
File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/question_answering.py", line 563, in postprocess
"start": np.where(char_to_word == token_to_orig_map[s])[0][0].item(),
KeyError: 5
```
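Flakiness like this is easier to pin down once the run is made deterministic, e.g. by calling `transformers.set_seed` before building the pipeline. A toy sketch of the idea using plain `random` as a stand-in (the seeded helper below is invented for illustration, not part of the reproduction):

```python
import random

def run_once(seed):
    rng = random.Random(seed)  # stands in for transformers.set_seed(seed)
    # stands in for the pipeline call, whose result depends on random init
    return [rng.randint(0, 10) for _ in range(3)]

# with a fixed seed, a flaky failure can be replayed deterministically
print(run_once(0) == run_once(0))  # True
```

Bisecting seeds this way isolates the failing configuration so the `KeyError` in `postprocess` can be reproduced on demand.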
### Expected behavior
No error. I can have a look if I have time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28000/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27999/comments | https://api.github.com/repos/huggingface/transformers/issues/27999/events | https://github.com/huggingface/transformers/pull/27999 | 2,039,327,117 | PR_kwDOCUB6oc5h4GDC | 27,999 | [`CI slow`] Fix expected values | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T09:44:08 | 2023-12-13T12:37:12 | 2023-12-13T12:37:11 | COLLABORATOR | null | # What does this PR do?
Fix slow test. The init was probably done twice because:
```python
Some weights of ViTMSNForImageClassification were not initialized from the model checkpoint at facebook/vit-msn-small and are newly initialized: ['classifier.bias', 'classifier.weight']
```
so the weights should not be initialized by `from_pretrained` but by `_init_weights`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27999/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27999",
"html_url": "https://github.com/huggingface/transformers/pull/27999",
"diff_url": "https://github.com/huggingface/transformers/pull/27999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27999.patch",
"merged_at": "2023-12-13T12:37:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/27998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27998/comments | https://api.github.com/repos/huggingface/transformers/issues/27998/events | https://github.com/huggingface/transformers/issues/27998 | 2,039,321,707 | I_kwDOCUB6oc55jZRr | 27,998 | CodeLlama-34b-Instruct-hf | {
"login": "zhaotyer",
"id": 89376832,
"node_id": "MDQ6VXNlcjg5Mzc2ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/89376832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaotyer",
"html_url": "https://github.com/zhaotyer",
"followers_url": "https://api.github.com/users/zhaotyer/followers",
"following_url": "https://api.github.com/users/zhaotyer/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaotyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaotyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaotyer/subscriptions",
"organizations_url": "https://api.github.com/users/zhaotyer/orgs",
"repos_url": "https://api.github.com/users/zhaotyer/repos",
"events_url": "https://api.github.com/users/zhaotyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaotyer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-12-13T09:41:02 | 2024-01-10T07:50:26 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
infer.py
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
model_id = "/workspace/CodeLlama-34b-Instruct-hf"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
device_map="auto",
)
while True:
    question = input("Please enter your question: ")
print(len(question))
prompt = 'def remove_non_ascii(s: str) -> str:\n """ '
inputs = tokenizer(question, return_tensors="pt").to("cuda")
output = model.generate(
inputs["input_ids"],
max_new_tokens=200,
do_sample=True,
top_p=0.9,
temperature=0.1,
pad_token_id=tokenizer.eos_token_id
)
output = output[0].to("cpu")
print(tokenizer.decode(output))
```
python3 infer.py
question: `def remove_non_ascii(s: str) -> str:\n """ ` — inference succeeds
question: `def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\nprint(remove_non_ascii('afkdj$$('))` — an error occurs
Error info:
```
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "infer.py", line 22, in <module>
output = model.generate(
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1719, in generate
return self.sample(
File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 2801, in sample
outputs = self(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 1034, in forward
outputs = self.model(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 922, in forward
layer_outputs = decoder_layer(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 672, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 366, in forward
query_states = self.q_proj(hidden_states)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/nn/modules.py", line 248, in forward
out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 579, in matmul_4bit
return MatMul4Bit.apply(A, B, out, bias, quant_state)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 516, in forward
output = torch.nn.functional.linear(A, F.dequantize_4bit(B, state).to(A.dtype).t(), bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
GPU 1*NVIDIA A100-SXM4-80GB
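Device-side `srcIndex < srcSelectDimSize` assertions during an embedding lookup typically mean some input id indexes past the embedding table, e.g. when a special token like `<FILL_ME>` maps to an id the loaded embedding matrix does not cover. A hedged sketch of the sanity check (the ids and vocab size below are illustrative, not CodeLlama's real values):

```python
# Illustrative numbers only: check whether any token id would index
# past the embedding table, which is what trips the CUDA assertion.
vocab_size = 32000                    # assumed number of embedding rows
input_ids = [1, 4222, 29871, 32016]   # hypothetical tokenizer output

out_of_range = [i for i in input_ids if i >= vocab_size]
print(out_of_range)  # [32016]
```

In a real debugging session one would compare `tokenizer(...)["input_ids"].max()` against `model.config.vocab_size` / the embedding weight's first dimension.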
### Expected behavior
Any question can be answered normally | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27998/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27998/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27997/comments | https://api.github.com/repos/huggingface/transformers/issues/27997/events | https://github.com/huggingface/transformers/pull/27997 | 2,039,306,675 | PR_kwDOCUB6oc5h4Bmb | 27,997 | Fix PatchTSMixer slow tests | {
"login": "ajati",
"id": 41211350,
"node_id": "MDQ6VXNlcjQxMjExMzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/41211350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajati",
"html_url": "https://github.com/ajati",
"followers_url": "https://api.github.com/users/ajati/followers",
"following_url": "https://api.github.com/users/ajati/following{/other_user}",
"gists_url": "https://api.github.com/users/ajati/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajati/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajati/subscriptions",
"organizations_url": "https://api.github.com/users/ajati/orgs",
"repos_url": "https://api.github.com/users/ajati/repos",
"events_url": "https://api.github.com/users/ajati/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajati/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T09:32:12 | 2023-12-13T12:34:55 | 2023-12-13T12:34:26 | CONTRIBUTOR | null | Fix `PatchTSMixer` slow tests and relax assert conditions in functional tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27997/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27997",
"html_url": "https://github.com/huggingface/transformers/pull/27997",
"diff_url": "https://github.com/huggingface/transformers/pull/27997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27997.patch",
"merged_at": "2023-12-13T12:34:26"
} |
https://api.github.com/repos/huggingface/transformers/issues/27996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27996/comments | https://api.github.com/repos/huggingface/transformers/issues/27996/events | https://github.com/huggingface/transformers/pull/27996 | 2,039,288,621 | PR_kwDOCUB6oc5h39qu | 27,996 | add torch.compile in pipeline | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-13T09:22:01 | 2024-01-26T01:16:50 | 2024-01-26T01:16:50 | CONTRIBUTOR | null | Hi @Narsil . Since torch compile is supported in pytorch2.0, see [here](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html).
However, users usually use pipeline and don't know which model function (`forward` or `generate`) should be compiled.
I was thinking add `torch.compile` in pipeline to make users easier to use it. Would like to hear your opinion. Thank!
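To illustrate why wrapping both entry points covers either call path, here is a toy sketch where a generic wrapper stands in for `torch.compile` (all names below are invented for the illustration):

```python
# Toy stand-in for torch.compile: wrapping both candidate entry points
# means whichever one the pipeline ends up calling goes through the
# "compiled" path, including generate() delegating to forward().
def fake_compile(fn):
    def wrapped(*args, **kwargs):
        wrapped.calls += 1
        return fn(*args, **kwargs)
    wrapped.calls = 0
    return wrapped

class ToyModel:
    def forward(self, x):
        return x + 1
    def generate(self, x):
        return self.forward(x) * 2  # generate routes through forward

model = ToyModel()
model.forward = fake_compile(model.forward)
model.generate = fake_compile(model.generate)

print(model.generate(3))  # 8, via the wrapped forward
```

Because the wrapped `forward` is installed as an instance attribute, even a `generate` call reaches the compiled path, which mirrors the "compile both" approach taken in the PR.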
BTW, do you have any idea about how to determine which model function should be compiled? I compiled both model functions because I don't know which function will be used by the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27996/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27996",
"html_url": "https://github.com/huggingface/transformers/pull/27996",
"diff_url": "https://github.com/huggingface/transformers/pull/27996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27996.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27995/comments | https://api.github.com/repos/huggingface/transformers/issues/27995/events | https://github.com/huggingface/transformers/pull/27995 | 2,039,173,651 | PR_kwDOCUB6oc5h3kl1 | 27,995 | Assistant model may be on a different device | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-13T08:08:42 | 2024-01-11T10:25:00 | 2024-01-11T10:25:00 | CONTRIBUTOR | null | Hi @gante . Would you please have a look at this PR. The motivation is that I try to put assistant model and self model in different cuda device, or put assistant model on CPU. This PR should enable assistant model on a different device.
I have tested it on both decoder-only model and encoder-decoder model. Could you please help me to review it? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27995/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27995",
"html_url": "https://github.com/huggingface/transformers/pull/27995",
"diff_url": "https://github.com/huggingface/transformers/pull/27995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27995.patch",
"merged_at": "2024-01-11T10:25:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/27994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27994/comments | https://api.github.com/repos/huggingface/transformers/issues/27994/events | https://github.com/huggingface/transformers/issues/27994 | 2,038,987,548 | I_kwDOCUB6oc55iHsc | 27,994 | Performance degradation with BF16 precision | {
"login": "jerin-scalers-ai",
"id": 125901005,
"node_id": "U_kgDOB4EYzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125901005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerin-scalers-ai",
"html_url": "https://github.com/jerin-scalers-ai",
"followers_url": "https://api.github.com/users/jerin-scalers-ai/followers",
"following_url": "https://api.github.com/users/jerin-scalers-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/jerin-scalers-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerin-scalers-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerin-scalers-ai/subscriptions",
"organizations_url": "https://api.github.com/users/jerin-scalers-ai/orgs",
"repos_url": "https://api.github.com/users/jerin-scalers-ai/repos",
"events_url": "https://api.github.com/users/jerin-scalers-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerin-scalers-ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T05:32:29 | 2024-01-23T08:04:08 | 2024-01-23T08:04:08 | NONE | null | ### System Info
Transformers: 4.35.2
Torch: 2.1.1-cpu
CPU: Intel Xeon 4th Gen processor
### Who can help?
@ArthurZucker
Hi,
I was comparing the performance of the Llama 2 7b chat hf model across different precisions.
I observed a significant performance (inference time) degradation with bfloat16 compared to the fp32 model on an Intel CPU. BF16 is supposed to give better performance than fp32. Please refer to the table below for details:
| Precision | Tokens Generated | Infer time (sec) |
|------------------------|-----------------------|------------------|
| FP32 | 186 | 12.51 |
| BF16 | 186 | 115.37 |
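The gap is easier to see as throughput. A quick calculation from the table's numbers:

```python
# Convert the measurements above into tokens/second.
results = {"fp32": (186, 12.51), "bf16": (186, 115.37)}
for name, (tokens, seconds) in results.items():
    print(f"{name}: {tokens / seconds:.1f} tokens/s")
# fp32: 14.9 tokens/s
# bf16: 1.6 tokens/s
```

So bf16 is roughly 9x slower here, rather than faster as expected.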
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
import time
model_id = "meta-llama/Llama-2-7b-chat-hf"
device = "cpu"
torch_dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "In maximum 180 words, explain why purchasing Dell Poweredge servers offer much better TCO to enterprises compared to using public cloud infrastructure, for AI initiatives"
text_generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
return_tensors=True,
device=device,
torch_dtype = torch_dtype,
)
for _ in range(5):
s_time = time.time()
# Inference benchmarking
output = text_generator(
input_text,
max_new_tokens=256,
temperature=1,
)
e_time = time.time()
# print(output)
print(tokenizer.decode(output[0]["generated_token_ids"]))
num_tokens = len(output[0]["generated_token_ids"])
print(f"Num tokens: {num_tokens}")
print(f"Infer time: {e_time-s_time}")
```
### Expected behavior
BF16 is supposed to give better performance than fp32. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27994/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27993/comments | https://api.github.com/repos/huggingface/transformers/issues/27993/events | https://github.com/huggingface/transformers/pull/27993 | 2,038,894,860 | PR_kwDOCUB6oc5h2nxD | 27,993 | When save a model on TPU, make a copy to be moved to CPU | {
"login": "qihqi",
"id": 1719482,
"node_id": "MDQ6VXNlcjE3MTk0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1719482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qihqi",
"html_url": "https://github.com/qihqi",
"followers_url": "https://api.github.com/users/qihqi/followers",
"following_url": "https://api.github.com/users/qihqi/following{/other_user}",
"gists_url": "https://api.github.com/users/qihqi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qihqi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qihqi/subscriptions",
"organizations_url": "https://api.github.com/users/qihqi/orgs",
"repos_url": "https://api.github.com/users/qihqi/repos",
"events_url": "https://api.github.com/users/qihqi/events{/privacy}",
"received_events_url": "https://api.github.com/users/qihqi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-12-13T03:42:05 | 2023-12-19T10:08:51 | 2023-12-19T10:08:51 | CONTRIBUTOR | null | # What does this PR do?
When we save a model on TPU, we first move it to CPU because TPU tensors have no
storage. However, we should do this with a copy of the model so that the original model
stays on TPU; otherwise `model.to('cpu')` would modify the model in-place.
The modified model would then raise the following error when used in computation:
```
indices should be either on cpu or on the same device as the indexed tensor (XLA). When using XLA, the indexed tensor must be an XLA tensor.
```
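The in-place `.to()` semantics behind this fix can be illustrated with a tiny stand-in (illustrative only — `TinyModel` is a hypothetical stand-in for an `nn.Module`, not the actual Trainer code):

```python
import copy


class TinyModel:
    """Hypothetical stand-in for an nn.Module whose .to() mutates in place."""

    def __init__(self):
        self.device = "xla"  # pretend the model lives on TPU

    def to(self, device):
        self.device = device  # in-place move, mirroring nn.Module.to()
        return self


model = TinyModel()

# Buggy pattern: model.to("cpu") would move the original model off the TPU.
# Fixed pattern (what this PR does): save a CPU copy, keep the original on TPU.
cpu_copy = copy.deepcopy(model).to("cpu")

print(model.device, cpu_copy.device)  # -> xla cpu
```

The original model keeps its TPU placement, so later training steps can still index into XLA tensors.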
Tested by running this command on TPU v4-8:
```
python3 examples/pytorch/text-classification/run_glue.py \
--model_name_or_path=distilbert-base-uncased \
--task_name=MNLI \
--do_train=true \
--num_train_epochs=1 \
--max_seq_length=128 \
--learning_rate=3e-5 \
--overwrite_output_dir=true \
--save_steps=3000 \
--save_strategy=no --output_dir=/workspace/mnli
```
cc @muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27993/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27993/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27993",
"html_url": "https://github.com/huggingface/transformers/pull/27993",
"diff_url": "https://github.com/huggingface/transformers/pull/27993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27993.patch",
"merged_at": "2023-12-19T10:08:51"
} |
https://api.github.com/repos/huggingface/transformers/issues/27992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27992/comments | https://api.github.com/repos/huggingface/transformers/issues/27992/events | https://github.com/huggingface/transformers/issues/27992 | 2,038,857,369 | I_kwDOCUB6oc55hn6Z | 27,992 | Memory leak (not released) when calling Seq2SeqTrainer for fine-tuning | {
"login": "xyx361100238",
"id": 19569322,
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyx361100238",
"html_url": "https://github.com/xyx361100238",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-12-13T02:50:25 | 2024-01-10T17:54:10 | null | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I used the finetune.py script from [Whisper-Finetune](https://github.com/yeyupiaoling/Whisper-Finetune) to fine-tune Whisper large on my own datasets; the script is built on the transformers library. If I run the validation process during fine-tuning, system memory grows, and there is a chance it is not released after validation. Over time, this can lead to an Out Of Memory (OOM) crash. Is this a bug in the library, or do I need to change some settings?
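For reference, one workaround sometimes tried for this symptom is forcing a collection after each evaluation pass — a hedged sketch, not a confirmed fix, and `release_eval_memory` is a hypothetical helper rather than anything in transformers:

```python
import gc


def release_eval_memory():
    """Hypothetical helper: force Python GC and free cached CUDA blocks.

    This cannot fix a true leak, but it can flatten memory growth caused by
    objects lingering between evaluation passes.
    """
    gc.collect()
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing GPU-side to release


release_eval_memory()
```

If memory still climbs monotonically after a call like this, the growth is more likely a genuine reference leak than allocator caching.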
### Expected behavior
After each verification is completed, normalize the memory to maintain it at a stable level. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27992/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27991/comments | https://api.github.com/repos/huggingface/transformers/issues/27991/events | https://github.com/huggingface/transformers/issues/27991 | 2,038,842,972 | I_kwDOCUB6oc55hkZc | 27,991 | Error in all_reduce when GPT2 200B inferencing with dynamo and multi GPU | {
"login": "jcai04",
"id": 62895533,
"node_id": "MDQ6VXNlcjYyODk1NTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/62895533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcai04",
"html_url": "https://github.com/jcai04",
"followers_url": "https://api.github.com/users/jcai04/followers",
"following_url": "https://api.github.com/users/jcai04/following{/other_user}",
"gists_url": "https://api.github.com/users/jcai04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcai04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcai04/subscriptions",
"organizations_url": "https://api.github.com/users/jcai04/orgs",
"repos_url": "https://api.github.com/users/jcai04/repos",
"events_url": "https://api.github.com/users/jcai04/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcai04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T02:30:39 | 2024-01-20T08:03:35 | 2024-01-20T08:03:35 | NONE | null | ### System Info
version:
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.4.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code snippet:
```
class GPT2Block(nn.Module):
    def __init__(self, config, window_size):
        super().__init__()
        self.mp_size = int(os.getenv("WORLD_SIZE", "1"))
        self.hidden_size = config.hidden_size
        self.ln_1 = nn.LayerNorm(self.hidden_size, eps=1e-5)
        self.attn = GPT2Attention(config, window_size)
        self.mlp = GPT2MLP(config)

    def forward(self, hidden_states, attention_mask, past_kv, kv_len, wpe=None):
        residual = hidden_states
        hidden_states = self.ln_1(hidden_states)
        attn_output, _ = self.attn(hidden_states, attention_mask, past_kv, kv_len, wpe)
        mlp_output = self.mlp(hidden_states)
        layer_out = attn_output + mlp_output
        if self.mp_size > 1:
            torch.distributed.all_reduce(layer_out)
        layer_out = layer_out + residual
        return layer_out
```
Error messages:
```
Traceback (most recent call last):
File"./ut_test/seperate_200b.py", line 393, in <module>
out_v2 = inference_engine(inputs)
File "./ut_test/seperate_200b.py", line 250, in inference_engine
context_output = context_infer(BI_model, curr_input)
File "./ut_test/seperate_200b.py", line 199, in context_infer
outputs = model(**one_input)
File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 799 in forward
hidden_states = self.transformer(input_tensor, input_mask, past_key, past_key_values, kv_len, query_len)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 389, in_convert_frame_assert
return _compile(
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/bytecode_transformation.py" line 1028, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/nn_module.py", line 331, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_
tracer.run()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars.items)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function
self.push(fn.call_function(self, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 307, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function
return super().call_function(tx, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function
return tx.inline_user_function_return(
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return
result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2232, in inline_call
InliningInstructionTranslator.check_inlineable(func)
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2191, in check_inlineable
unimplemented(f"inlining disallowed: {func.get_function()}")
File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/exc.py", line 172, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: inlining disallowed: <function all_reduce at 0x7fab7fff78b0>
from user code:
File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 508, in forward
hidden_states = block(hidden_states, attention_mask, past_kv[idx], kv_len, self.wpe)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 460, in forward
torch.distributed.all_reduce(layer_out)
```
torch.compile setting:
```
torch.compile(self.transformer, dynamic=True, fullgraph=True) #default backend = inductor
```
### Expected behavior
We expect to be able to do inference with dynamo, and inference succeeds when setting "fullgraph=False" in torch.compile. However, it doesn't work with "fullgraph=True" in torch.compile using the same code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27991/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27990/comments | https://api.github.com/repos/huggingface/transformers/issues/27990/events | https://github.com/huggingface/transformers/pull/27990 | 2,038,800,937 | PR_kwDOCUB6oc5h2UYP | 27,990 | [DETA] Improvement and Sync from DETA especially for training | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 16 | 2023-12-13T01:38:34 | 2024-01-05T14:20:21 | 2024-01-05T14:20:21 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
There are several changes in DETA not only for inference but also for training.
1. Add the assign_second_stage argument used in the original DETA
2. Add output_proposals for "anchors", used in assign_first_stage
3. Normalize num_boxes based on the multi-GPU environment
4. Add "enc_outputs" and "auxiliary_outputs" losses during training
5. Minor changes to variable names that were not appropriate
I tested fine-tuning with a [custom dataset](https://universe.roboflow.com/roboflow-100/cable-damage). As a result, the original performance after 12 epochs was 1.0 AP; it is now above 35 AP with the same hyperparameter settings (such as learning rate).
I am not the author of DETA or affiliated with its group, but I am a co-contributor to [DETA](https://github.com/jozhang97/DETA), so I know pretty much all of its details. These changes will greatly help users who want to train/fine-tune DETA with the [sagemaker script](https://huggingface.co/jozhang97/deta-swin-large/blob/main/config.json?sagemaker_train=true).
BTW, I think @NielsRogge missed setting the important variable "assign_second_stage" to True in [config.json](https://huggingface.co/jozhang97/deta-swin-large/blob/main/config.json)
@ArthurZucker Could you review this PR? (I couldn't share my test code for this, sorry about that)
I manually added the following to enable the auxiliary_loss and second_stage pipeline; alternatively, see [this link](https://huggingface.co/sbchoi/deta-swin-large/tree/main):
```
transformer_model.config.auxiliary_loss = cfg.auxiliary_loss
transformer_model.config.assign_second_stage = cfg.assign_second_stage
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27990/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27990",
"html_url": "https://github.com/huggingface/transformers/pull/27990",
"diff_url": "https://github.com/huggingface/transformers/pull/27990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27990.patch",
"merged_at": "2024-01-05T14:20:21"
} |
https://api.github.com/repos/huggingface/transformers/issues/27989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27989/comments | https://api.github.com/repos/huggingface/transformers/issues/27989/events | https://github.com/huggingface/transformers/pull/27989 | 2,038,797,508 | PR_kwDOCUB6oc5h2TsF | 27,989 | Added @property into the modeliing_encoder_decoder file. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-13T01:34:02 | 2023-12-16T00:32:53 | 2023-12-16T00:32:46 | CONTRIBUTOR | null | I made the "encoder" and "decoder" properties easier to use by making them "read-only". This means you can see their values, but not change them directly.
Additionally, I created a special way to change the "output embeddings" of the decoder. You can use this by assigning a new value to the "output_embeddings" property.
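The pattern can be sketched as follows (a simplified illustration, not the actual `EncoderDecoderModel` code — `TinyEncoderDecoder` is a hypothetical stand-in):

```python
class TinyEncoderDecoder:
    """Read-only encoder/decoder properties plus a settable
    output_embeddings property, as described above."""

    def __init__(self, encoder, decoder):
        self._encoder = encoder
        self._decoder = decoder

    @property
    def encoder(self):
        # read-only: visible, but not directly reassignable
        return self._encoder

    @property
    def decoder(self):
        return self._decoder

    @property
    def output_embeddings(self):
        return self._decoder["output_embeddings"]

    @output_embeddings.setter
    def output_embeddings(self, new_embeddings):
        self._decoder["output_embeddings"] = new_embeddings


model = TinyEncoderDecoder("enc", {"output_embeddings": "old"})
model.output_embeddings = "new"   # allowed: goes through the setter
print(model.encoder, model.output_embeddings)  # -> enc new
```

Assigning to `model.encoder` would raise an `AttributeError`, since that property defines no setter.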
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts , @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27989/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27989",
"html_url": "https://github.com/huggingface/transformers/pull/27989",
"diff_url": "https://github.com/huggingface/transformers/pull/27989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27989.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27988/comments | https://api.github.com/repos/huggingface/transformers/issues/27988/events | https://github.com/huggingface/transformers/issues/27988 | 2,038,788,279 | I_kwDOCUB6oc55hXC3 | 27,988 | Design of xxxAttention, xxxFlashAttention and xxxSdpaAttention | {
"login": "ccdv-ai",
"id": 94319594,
"node_id": "U_kgDOBZ8z6g",
"avatar_url": "https://avatars.githubusercontent.com/u/94319594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccdv-ai",
"html_url": "https://github.com/ccdv-ai",
"followers_url": "https://api.github.com/users/ccdv-ai/followers",
"following_url": "https://api.github.com/users/ccdv-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/ccdv-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccdv-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccdv-ai/subscriptions",
"organizations_url": "https://api.github.com/users/ccdv-ai/orgs",
"repos_url": "https://api.github.com/users/ccdv-ai/repos",
"events_url": "https://api.github.com/users/ccdv-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccdv-ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-13T01:27:05 | 2024-01-20T08:03:37 | 2024-01-20T08:03:37 | NONE | null | Hey
Following the addition of `torch.nn.functional.scaled_dot_product_attention` (#26572), there is a lot of duplicated code between the `xxxAttention`, `xxxFlashAttention2` and `xxxSdpaAttention` classes. The main differences between the classes lie in the attention computation, the rest being the same (Q, K, V computation, cross-attention and cache logic, etc.).
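Such shared logic could, for example, be factored behind a dispatch table — an illustrative sketch of one possible design, not transformers' actual API:

```python
ATTENTION_FUNCTIONS = {}


def register_attention(name):
    """Register an attention kernel under an implementation name."""
    def decorator(fn):
        ATTENTION_FUNCTIONS[name] = fn
        return fn
    return decorator


@register_attention("eager")
def eager_attention(q, k, v):
    return f"eager({q},{k},{v})"  # real kernel would compute softmax(QK^T)V


@register_attention("sdpa")
def sdpa_attention(q, k, v):
    return f"sdpa({q},{k},{v})"  # real kernel would call F.scaled_dot_product_attention


class SharedAttention:
    """Holds the otherwise-duplicated logic (Q/K/V projections, cache
    handling, ...) once; only the score computation is dispatched."""

    def __init__(self, impl="eager"):
        self.attn_fn = ATTENTION_FUNCTIONS[impl]

    def forward(self, hidden_states):
        q = k = v = hidden_states  # projections elided in this sketch
        return self.attn_fn(q, k, v)


print(SharedAttention("sdpa").forward("h"))  # -> sdpa(h,h,h)
```

Adding a new attention variant would then mean registering one function rather than writing a third near-identical class per model.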
Wouldn't it be simpler to offload the attention computation into a new shared file, making the modeling files cleaner and simplifying the use of these optimizations for older models? This would also ease the addition of new attention variants in the future. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27988/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27987/comments | https://api.github.com/repos/huggingface/transformers/issues/27987/events | https://github.com/huggingface/transformers/issues/27987 | 2,038,783,490 | I_kwDOCUB6oc55hV4C | 27,987 | An error occurred when saving the model | {
"login": "Decem-Y",
"id": 68498490,
"node_id": "MDQ6VXNlcjY4NDk4NDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/68498490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Decem-Y",
"html_url": "https://github.com/Decem-Y",
"followers_url": "https://api.github.com/users/Decem-Y/followers",
"following_url": "https://api.github.com/users/Decem-Y/following{/other_user}",
"gists_url": "https://api.github.com/users/Decem-Y/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Decem-Y/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Decem-Y/subscriptions",
"organizations_url": "https://api.github.com/users/Decem-Y/orgs",
"repos_url": "https://api.github.com/users/Decem-Y/repos",
"events_url": "https://api.github.com/users/Decem-Y/events{/privacy}",
"received_events_url": "https://api.github.com/users/Decem-Y/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-13T01:20:54 | 2023-12-14T07:37:34 | 2023-12-14T07:37:34 | NONE | null | https://github.com/huggingface/transformers/blob/14666775a296a76c88e1aa686a9547f393d322e2/src/transformers/trainer.py#L2349
After updating to transformers == 4.36.0, an error occurs during multi-GPU training when saving the model: the checkpoint is saved to "tmp-checkpoint-xx" instead of "checkpoint-xx". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27987/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27987/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27986/comments | https://api.github.com/repos/huggingface/transformers/issues/27986/events | https://github.com/huggingface/transformers/pull/27986 | 2,038,731,291 | PR_kwDOCUB6oc5h2Fa8 | 27,986 | [docs] Trainer | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-13T00:12:53 | 2023-12-15T20:06:59 | 2023-12-15T20:06:56 | MEMBER | null | This PR attempts to clean up some of the current navigational complexity of the [`Trainer`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer) API doc to make it easier to use as a purely reference lookup page. A lot of the content (checkpoints, logging, customization, etc.) is moved and organized into a separate guide.
The API page still has some content that doesn't entirely belong there (specific GPU selection, training on M1, etc.), but that'll be addressed in a separate PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27986/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27986",
"html_url": "https://github.com/huggingface/transformers/pull/27986",
"diff_url": "https://github.com/huggingface/transformers/pull/27986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27986.patch",
"merged_at": "2023-12-15T20:06:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/27985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27985/comments | https://api.github.com/repos/huggingface/transformers/issues/27985/events | https://github.com/huggingface/transformers/issues/27985 | 2,038,727,057 | I_kwDOCUB6oc55hIGR | 27,985 | `KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'` | {
"login": "MohamedAliRashad",
"id": 26205298,
"node_id": "MDQ6VXNlcjI2MjA1Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohamedAliRashad",
"html_url": "https://github.com/MohamedAliRashad",
"followers_url": "https://api.github.com/users/MohamedAliRashad/followers",
"following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}",
"gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions",
"organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs",
"repos_url": "https://api.github.com/users/MohamedAliRashad/repos",
"events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 17 | 2023-12-13T00:07:16 | 2024-01-10T13:36:00 | 2023-12-14T14:52:47 | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.3
- Accelerate version: 0.25.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
long_text = "..."  # a long input document to summarize
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, torch_dtype=torch.float16, device_map="auto")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
messages = [
{"role": "user", "content": f"Summarize the following:\n{long_text}"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=8192, do_sample=True, streamer=streamer)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Expected behavior
Expected it to work or at least give me a cuda error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27985/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27985/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27983/comments | https://api.github.com/repos/huggingface/transformers/issues/27983/events | https://github.com/huggingface/transformers/pull/27983 | 2,038,546,330 | PR_kwDOCUB6oc5h1dmL | 27,983 | fix typo in dvclive callback | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T21:03:12 | 2023-12-12T21:29:59 | 2023-12-12T21:29:59 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the dvclive callback that prevents it from being set as initialized.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts or @muellerzr Would one of you mind taking a look? Apologies for not catching this. Our internal tests missed this scenario where initialization depends on the `setup()` method.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27983/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27983",
"html_url": "https://github.com/huggingface/transformers/pull/27983",
"diff_url": "https://github.com/huggingface/transformers/pull/27983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27983.patch",
"merged_at": "2023-12-12T21:29:59"
} |
https://api.github.com/repos/huggingface/transformers/issues/27982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27982/comments | https://api.github.com/repos/huggingface/transformers/issues/27982/events | https://github.com/huggingface/transformers/pull/27982 | 2,038,542,337 | PR_kwDOCUB6oc5h1cuw | 27,982 | fix bug in dvclive callback | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T20:59:51 | 2023-12-12T21:00:08 | 2023-12-12T21:00:08 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the dvclive callback that prevents it from being set as initialized.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts or @muellerzr Would one of you mind taking a look? Apologies for not catching this. Our internal tests missed this scenario where initialization depends on the `setup()` method.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27982/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27982",
"html_url": "https://github.com/huggingface/transformers/pull/27982",
"diff_url": "https://github.com/huggingface/transformers/pull/27982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27982.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27981/comments | https://api.github.com/repos/huggingface/transformers/issues/27981/events | https://github.com/huggingface/transformers/pull/27981 | 2,038,458,905 | PR_kwDOCUB6oc5h1Kwy | 27,981 | [doc] fix typo | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T19:56:04 | 2023-12-12T21:21:39 | 2023-12-12T20:32:42 | CONTRIBUTOR | null | Fixing doc to use the correct package name. Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27981/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27981",
"html_url": "https://github.com/huggingface/transformers/pull/27981",
"diff_url": "https://github.com/huggingface/transformers/pull/27981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27981.patch",
"merged_at": "2023-12-12T20:32:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/27980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27980/comments | https://api.github.com/repos/huggingface/transformers/issues/27980/events | https://github.com/huggingface/transformers/issues/27980 | 2,038,207,555 | I_kwDOCUB6oc55fJRD | 27,980 | LLaMa-VID: An Image is Worth 2 Tokens in LLMs | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 2 | 2023-12-12T17:06:22 | 2024-01-02T12:01:02 | null | COLLABORATOR | null | ### Model description
LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. We build this repo based on LLaVA.
LLaMA-VID contains three parts: encoder and decoder are adopted to produce visual embedding and text-guided features, respectively; context token and content token are transformed with the tailored token generation strategy; instruction tuning is designed to unleash the potential of LLMs for image and video.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Page: https://llama-vid.github.io/
Weights already available on HF: https://huggingface.co/YanweiLi/llama-vid-7b-pretrain-224
Code: https://github.com/dvlab-research/LLaMA-VID | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27980/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27980/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27979/comments | https://api.github.com/repos/huggingface/transformers/issues/27979/events | https://github.com/huggingface/transformers/pull/27979 | 2,038,075,732 | PR_kwDOCUB6oc5hz3Of | 27,979 | Generate: speculative decoding | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 16 | 2023-12-12T15:57:32 | 2024-01-27T16:12:11 | 2023-12-19T13:58:30 | MEMBER | null | # What does this PR do?
Useful context:
In a recent PR (#27750), the candidate generation in assisted generation got abstracted, so we can host new candidate generation techniques (such as #27722).
_______________________________________________________
This PR:
1. ~Reworks assisted candidate generation to call `.generate()`, instead of having its own custom generation loop. For most models this is nothing more than a nice abstraction. However, for models with a custom `generate()` function, this means the assistant model will now make use of it! (🤔 does this mean that DistilWhisper gets better numbers with this refactor?)~ Edit: moved to #28030
2. Adds speculative decoding ([paper](https://arxiv.org/pdf/2211.17192.pdf), see Algorithm 1). This implied a minor interface change in the candidate generation class, which should be okay since it hasn't been released :)
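For context, the acceptance rule of Algorithm 1 can be sketched as follows (an illustrative, standalone sketch of the paper's rejection-sampling step, not the code added in this PR; the function name is hypothetical):

```python
import random

def speculative_accept(p_target: float, q_draft: float) -> bool:
    """Accept a drafted token with probability min(1, p/q), where p is the
    target model's probability for the token and q is the draft model's
    (Leviathan et al., 2023, Algorithm 1)."""
    if q_draft <= 0.0:
        raise ValueError("draft probability must be positive for a drafted token")
    return random.random() < min(1.0, p_target / q_draft)
```

When the target model assigns the drafted token at least as much probability as the draft model, the token is always accepted; otherwise it is accepted with probability p/q, and on rejection the algorithm resamples from an adjusted distribution.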
The following tests were run locally and are passing:
1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative`
2. `py.test tests/ -k test_assisted` (which now triggers speculative decoding)
________________________________________________________
TODO:
- [ ] Benchmark speculative decoding | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27979/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27979",
"html_url": "https://github.com/huggingface/transformers/pull/27979",
"diff_url": "https://github.com/huggingface/transformers/pull/27979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27979.patch",
"merged_at": "2023-12-19T13:58:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/27978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27978/comments | https://api.github.com/repos/huggingface/transformers/issues/27978/events | https://github.com/huggingface/transformers/pull/27978 | 2,038,063,478 | PR_kwDOCUB6oc5hz0iz | 27,978 | [`Add Deci`] Llama with variable GQA per layer | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-12-12T15:50:53 | 2024-01-29T09:13:28 | null | COLLABORATOR | null | # What does this PR do?
Add support for Deci. `# Ignore copy` makes it a lot easier @ydshieh 🪂 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27978/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27978/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27978",
"html_url": "https://github.com/huggingface/transformers/pull/27978",
"diff_url": "https://github.com/huggingface/transformers/pull/27978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27978.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27977/comments | https://api.github.com/repos/huggingface/transformers/issues/27977/events | https://github.com/huggingface/transformers/issues/27977 | 2,037,859,553 | I_kwDOCUB6oc55d0Th | 27,977 | Image size understanding in DinoV2 and Transformers generally | {
"login": "lombardata",
"id": 39915110,
"node_id": "MDQ6VXNlcjM5OTE1MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/39915110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lombardata",
"html_url": "https://github.com/lombardata",
"followers_url": "https://api.github.com/users/lombardata/followers",
"following_url": "https://api.github.com/users/lombardata/following{/other_user}",
"gists_url": "https://api.github.com/users/lombardata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lombardata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lombardata/subscriptions",
"organizations_url": "https://api.github.com/users/lombardata/orgs",
"repos_url": "https://api.github.com/users/lombardata/repos",
"events_url": "https://api.github.com/users/lombardata/events{/privacy}",
"received_events_url": "https://api.github.com/users/lombardata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-12T14:21:25 | 2024-01-14T08:22:27 | 2024-01-14T08:22:26 | NONE | null | ### Feature request
Hi everyone,
I was playing with the DINOv2 model from the HF transformers library and I have a question:
is there a way to change model input image sizes like in the timm library?
e.g. on 11 August they added an option to change input image sizes here:
https://github.com/huggingface/pytorch-image-models
For example: `python validate.py /imagenet --model swin_base_patch4_window7_224.ms_in22k_ft_in1k --amp --amp-dtype bfloat16 --input-size 3 256 320 --model-kwargs window_size=8,10 img_size=256,320` (example validation command to test with a non-square resize).
Is there a way to do the same with the transformers library? I tried to change the image_size in the config.json file, but since the image is then processed by the processor, in my understanding the output would always be the size set by the "crop_size" parameter in preprocessor_config.json.
What would be the best practice to feed an entire image to the model (if there is a way)?
Thank you all in advance!
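One constraint worth illustrating: ViT-style backbones like DINOv2 split the input into fixed-size patches (14×14 for DINOv2), so custom input sizes generally have to be multiples of the patch size. A minimal sketch of the resulting patch-token grid computation (illustrative only, not transformers code):

```python
def patch_grid(height: int, width: int, patch_size: int = 14) -> tuple:
    """Number of patch tokens along each axis for a ViT-style backbone."""
    if height % patch_size or width % patch_size:
        raise ValueError("input dims must be multiples of the patch size")
    return height // patch_size, width // patch_size

# e.g. DINOv2's default 518x518 input yields a 37x37 patch grid,
# and a non-square 224x280 input yields a 16x20 grid
patch_grid(518, 518)
```

This is why non-square inputs are possible in principle (the positional embeddings can be interpolated to a new grid), but the height and width still need to divide evenly by the patch size.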
### Motivation
add custom image input size like in timm
### Your contribution
timm is an HF library, so it should be easy to integrate this feature into the transformers library
"url": "https://api.github.com/repos/huggingface/transformers/issues/27977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27977/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27976/comments | https://api.github.com/repos/huggingface/transformers/issues/27976/events | https://github.com/huggingface/transformers/pull/27976 | 2,037,857,883 | PR_kwDOCUB6oc5hzG-V | 27,976 | Better key error for AutoConfig | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T14:20:32 | 2023-12-12T14:41:57 | 2023-12-12T14:41:55 | MEMBER | null | When users try to load a model with AutoModel/AutoConfig but the model type isn't recognized, they get a confusing error about missing keys. However, these errors are usually caused by their version of `Transformers` being out of date. I've seen several users asking for help with this issue trying to load `mixtral`, so I wrote a better error message for next time! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27976/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27976",
"html_url": "https://github.com/huggingface/transformers/pull/27976",
"diff_url": "https://github.com/huggingface/transformers/pull/27976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27976.patch",
"merged_at": "2023-12-12T14:41:55"
} |
https://api.github.com/repos/huggingface/transformers/issues/27975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27975/comments | https://api.github.com/repos/huggingface/transformers/issues/27975/events | https://github.com/huggingface/transformers/issues/27975 | 2,037,705,585 | I_kwDOCUB6oc55dOtx | 27,975 | ImageToTextPipeline does not support InstructBlip Models | {
"login": "elena-soare20",
"id": 114069526,
"node_id": "U_kgDOBsyQFg",
"avatar_url": "https://avatars.githubusercontent.com/u/114069526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elena-soare20",
"html_url": "https://github.com/elena-soare20",
"followers_url": "https://api.github.com/users/elena-soare20/followers",
"following_url": "https://api.github.com/users/elena-soare20/following{/other_user}",
"gists_url": "https://api.github.com/users/elena-soare20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elena-soare20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elena-soare20/subscriptions",
"organizations_url": "https://api.github.com/users/elena-soare20/orgs",
"repos_url": "https://api.github.com/users/elena-soare20/repos",
"events_url": "https://api.github.com/users/elena-soare20/events{/privacy}",
"received_events_url": "https://api.github.com/users/elena-soare20/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First... | open | false | null | [] | null | 6 | 2023-12-12T13:00:11 | 2024-01-25T17:03:21 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-generic-x86_64
- Python version: 3.8.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.0a0+0aef44c (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import requests
from PIL import Image
from transformers import InstructBlipProcessor, pipeline

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
pipe = pipeline("image-to-text", model="Salesforce/instructblip-flan-t5-xl", processor=processor.image_processor, tokenizer=processor.tokenizer, device=0)
prompt = "describe the following image"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pipe(images=image, prompt=prompt)
### Expected behavior
returns a textual description of the image.
Instead, I get an error:
`TypeError: ones_like(): argument 'input' (position 1) must be Tensor, not NoneType`
I suspect this is caused by the `ImageToTextPipeline.preprocess()`, where we should have custom behaviour for InstructBlip models to process the image and text in one go: `inputs = processor(images=image, text=prompt, return_tensors="pt")`
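To make the suggestion concrete, here is a minimal sketch of the proposed preprocessing shape (illustrative pseudocode for the idea, not the actual `ImageToTextPipeline` implementation; the helper name is hypothetical):

```python
def preprocess_multimodal(processor, image, prompt=None):
    """Process image and prompt together so multimodal models such as
    InstructBlip receive text input_ids instead of None."""
    if prompt is not None:
        # single call with both modalities, as InstructBlip's processor expects
        return processor(images=image, text=prompt, return_tensors="pt")
    return processor(images=image, return_tensors="pt")
```

Processing the prompt separately (or not at all) leaves the text inputs as `None`, which would explain the `ones_like(): argument 'input' ... must be Tensor, not NoneType` error above.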
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27975/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27974/comments | https://api.github.com/repos/huggingface/transformers/issues/27974/events | https://github.com/huggingface/transformers/issues/27974 | 2,037,705,024 | I_kwDOCUB6oc55dOlA | 27,974 | how to replace the existing token in a tokenizer | {
"login": "muziyongshixin",
"id": 21971718,
"node_id": "MDQ6VXNlcjIxOTcxNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/21971718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muziyongshixin",
"html_url": "https://github.com/muziyongshixin",
"followers_url": "https://api.github.com/users/muziyongshixin/followers",
"following_url": "https://api.github.com/users/muziyongshixin/following{/other_user}",
"gists_url": "https://api.github.com/users/muziyongshixin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muziyongshixin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muziyongshixin/subscriptions",
"organizations_url": "https://api.github.com/users/muziyongshixin/orgs",
"repos_url": "https://api.github.com/users/muziyongshixin/repos",
"events_url": "https://api.github.com/users/muziyongshixin/events{/privacy}",
"received_events_url": "https://api.github.com/users/muziyongshixin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-12T12:59:53 | 2024-01-20T08:03:39 | 2024-01-20T08:03:39 | NONE | null | ### Feature request
I have a tokenizer which has lots of reserved tokens, like below:
```
'<reserved_7>': 100,
'<reserved_8>': 101,
'<reserved_9>': 102,
'<reserved_10>': 103,
'<reserved_11>': 104,
'<reserved_12>': 105,
'<reserved_13>': 106,
'<reserved_14>': 107,
```
I want to replace the '<reserved_7>' with '<|im_start|>' and replace '<reserved_8>' with '<|im_end|>'
What I want to get is a tokenizer that acts as below:
tokenizer.encode('<|im_start|>') => 100
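There is, as far as I know, no official `transformers` API for renaming an existing token in place; one common workaround is to edit the vocab mapping in the saved `tokenizer.json` and reload the tokenizer from that directory. A minimal sketch of the rename step, applied to a stand-in vocab dict (the file layout and token ids here are illustrative assumptions, not the real model's):

```python
# Stand-in for the "model" -> "vocab" mapping inside a saved tokenizer.json.
# In practice you would json.load() the real file, apply the renames below,
# and json.dump() it back before reloading the tokenizer from that directory.
vocab = {
    "<reserved_7>": 100,
    "<reserved_8>": 101,
    "<reserved_9>": 102,
}

# Old reserved token -> new token string, keeping the original ids.
renames = {"<reserved_7>": "<|im_start|>", "<reserved_8>": "<|im_end|>"}

for old, new in renames.items():
    vocab[new] = vocab.pop(old)  # same id, new surface string

print(vocab["<|im_start|>"])  # 100
print(vocab["<|im_end|>"])    # 101
```

After writing the edited file back, `tokenizer.encode('<|im_start|>')` should map to id 100 — assuming the pre-tokenizer does not split the new string; the same rename may also be needed in `added_tokens` / `special_tokens_map.json` if the reserved tokens are registered there.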
### Motivation
I want to replace the '<reserved_7>' with '<|im_start|>' and replace '<reserved_8>' with '<|im_end|>'
### Your contribution
no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27974/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27973/comments | https://api.github.com/repos/huggingface/transformers/issues/27973/events | https://github.com/huggingface/transformers/pull/27973 | 2,037,623,817 | PR_kwDOCUB6oc5hyTqg | 27,973 | Fix SDPA correctness following torch==2.1.2 regression | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T12:11:49 | 2023-12-12T15:33:46 | 2023-12-12T15:33:46 | COLLABORATOR | null | As explained in https://github.com/pytorch/pytorch/issues/112577, `torch==2.1.2` reintroduces a bug (that was first introduced in 2.1.0 and fixed in 2.1.1) in SDPA where the operator produces wrong outputs when using a custom `attn_mask`, cuda device and memory-efficient attention backend.
This PR makes it so that we don't silently fall into this bug.
Running `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/gpt_bigcode/ -s -vvvvv`, we have:
### With `torch==2.1.1` without this patch
all tests pass
### With `torch==2.1.2` without this patch (regression)
```
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_beam_sample_generate - RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_beam_search_generate - AssertionError: Lists differ: [[15, 95, 23, 94, 98, 62], [82, 51, 84, 98, 1, 0]] != [[15, 95, 23, 94, 98, 62], [82, 51, 84, 98, 66, 21]]
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_constrained_beam_search_generate - AssertionError: Lists differ: [[29,[207 chars] 48, 73, 79, 64, 93, 83, 40], [74, 76, 22, 92,[58 chars] 40]] != [[29,[207 chars] 48, 14, 82, 4, 46, 83, 4...
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_generate_continue_from_past_key_values - AssertionError: False is not true
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_group_beam_search_generate - AssertionError: Lists differ: [[85,[33 chars] 31, 68, 25], [93, 70, 87, 4, 69, 8], [93, 70, 87, 31, 68, 91]] != [[85,[33 chars] 31, 68, 7], [93, 70, 87,...
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelLanguageGenerationTest::test_generate_batched - AssertionError: Lists differ: ['def[78 chars]say_hello():\n // 1. Create a new array with the values of'] != ['def[78 chars]say_hello():\n print("...
FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelLanguageGenerationTest::test_generate_simple - AssertionError: 'def print_hello_world():\n print("Hello World")\n\ndef print_hello_' != 'def print_hello_world():\n print("Hello World!")\n\n\nde...
```
### With `torch==2.1.2` & `torch==2.1.1` with this patch
All tests pass.
For the other archs supporting SDPA (llama, whisper, falcon, idefics, bart), the tests are running fine & manual tests go fine as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27973/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27973",
"html_url": "https://github.com/huggingface/transformers/pull/27973",
"diff_url": "https://github.com/huggingface/transformers/pull/27973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27973.patch",
"merged_at": "2023-12-12T15:33:46"
} |
https://api.github.com/repos/huggingface/transformers/issues/27972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27972/comments | https://api.github.com/repos/huggingface/transformers/issues/27972/events | https://github.com/huggingface/transformers/issues/27972 | 2,037,560,259 | I_kwDOCUB6oc55crPD | 27,972 | T5 model: There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']. | {
"login": "alexcoca",
"id": 30216068,
"node_id": "MDQ6VXNlcjMwMjE2MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/30216068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcoca",
"html_url": "https://github.com/alexcoca",
"followers_url": "https://api.github.com/users/alexcoca/followers",
"following_url": "https://api.github.com/users/alexcoca/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcoca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcoca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcoca/subscriptions",
"organizations_url": "https://api.github.com/users/alexcoca/orgs",
"repos_url": "https://api.github.com/users/alexcoca/repos",
"events_url": "https://api.github.com/users/alexcoca/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcoca/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 8 | 2023-12-12T11:32:47 | 2024-01-24T20:22:20 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27
- Python version: 3.10.11
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (RTX3090)
- Using distributed or parallel set-up in script? no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce.
1. Run any transformer example fine-tuning a T5 model (I am using `Salesforce/codet5p-220m`, but the issue can probably be reproduced with other T5 models, certainly FlanT5)
2. Stop the trainer
3. Restart the training using the `resume_from_checkpoint=True` CLI option and setting `output_dir` to be the checkpoint directory (i.e. where the `checkpoint-[step]` directories are created)
4. Observe the warning:
[WARNING|trainer.py:2231] 2023-12-12 11:09:58,921 >> There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'].
### Expected behavior
Either there should be no warning, or the warning message should tell the user whether it applies to them. My intuition here is that nothing is wrong: I am using `T5ForConditionalGeneration` out of the box (so no custom `lm_head`) and the encoder and decoder embeddings are tied (and hopefully loaded?!). Is this a case of extending the warning to be more explicit?
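For what it's worth, the tying intuition can be illustrated outside `transformers`: when weights are tied, the encoder embedding, decoder embedding and LM head all alias one underlying tensor, so a checkpoint only needs to store it once and the other keys are legitimately "missing" and reconstructed by aliasing at load time. A toy sketch of that aliasing (plain Python, names chosen to mirror the warning; this is not the actual T5 implementation):

```python
class Embedding:
    """Toy stand-in for a weight-holding module."""
    def __init__(self, weight):
        self.weight = weight  # stores a reference, not a copy

# The single parameter actually stored in the checkpoint (model.shared in T5).
shared_weight = [0.1, 0.2, 0.3]

encoder_embed_tokens = Embedding(shared_weight)  # encoder.embed_tokens
decoder_embed_tokens = Embedding(shared_weight)  # decoder.embed_tokens
lm_head = Embedding(shared_weight)               # lm_head (tied)

# "Loading" the one stored tensor updates every tied module at once.
shared_weight[0] = 9.9
print(lm_head.weight[0])                                           # 9.9
print(encoder_embed_tokens.weight is decoder_embed_tokens.weight)  # True
```

If the model's `config.tie_word_embeddings` is true and generations look normal after resuming, the warning is most likely benign for this setup.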
@younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27972/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27971/comments | https://api.github.com/repos/huggingface/transformers/issues/27971/events | https://github.com/huggingface/transformers/pull/27971 | 2,037,529,925 | PR_kwDOCUB6oc5hx_Eb | 27,971 | [`Whisper`] raise better errors | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T11:14:25 | 2023-12-13T08:13:02 | 2023-12-13T08:13:01 | COLLABORATOR | null | fixes #27893 for the new cantonese language, whisper needs to properly error out if the model does not support it | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27971/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27971",
"html_url": "https://github.com/huggingface/transformers/pull/27971",
"diff_url": "https://github.com/huggingface/transformers/pull/27971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27971.patch",
"merged_at": "2023-12-13T08:13:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/27970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27970/comments | https://api.github.com/repos/huggingface/transformers/issues/27970/events | https://github.com/huggingface/transformers/pull/27970 | 2,037,477,029 | PR_kwDOCUB6oc5hxzeZ | 27,970 | [Trainer] move dataloader after the model wrapping | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-12T10:43:51 | 2024-01-20T08:03:42 | 2024-01-20T08:03:42 | CONTRIBUTOR | null | # What does this PR do?
Call the dataloader after the model has been prepared | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27970/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27970",
"html_url": "https://github.com/huggingface/transformers/pull/27970",
"diff_url": "https://github.com/huggingface/transformers/pull/27970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27970.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27969/comments | https://api.github.com/repos/huggingface/transformers/issues/27969/events | https://github.com/huggingface/transformers/pull/27969 | 2,037,458,927 | PR_kwDOCUB6oc5hxvhs | 27,969 | Fix link in README.md of Image Captioning | {
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T10:33:41 | 2023-12-12T13:44:25 | 2023-12-12T13:07:15 | CONTRIBUTOR | null | Update the link for vision encoder decoder doc used by FlaxVisionEncoderDecoderModel link inside README.md of Image Captioning.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27968
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu and @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27969/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27969",
"html_url": "https://github.com/huggingface/transformers/pull/27969",
"diff_url": "https://github.com/huggingface/transformers/pull/27969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27969.patch",
"merged_at": "2023-12-12T13:07:15"
} |
https://api.github.com/repos/huggingface/transformers/issues/27967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27967/comments | https://api.github.com/repos/huggingface/transformers/issues/27967/events | https://github.com/huggingface/transformers/issues/27967 | 2,037,358,828 | I_kwDOCUB6oc55b6Ds | 27,967 | `device_map = "auto"` failed for LLaMA model on H800 | {
"login": "ruikangliu",
"id": 69446971,
"node_id": "MDQ6VXNlcjY5NDQ2OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/69446971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruikangliu",
"html_url": "https://github.com/ruikangliu",
"followers_url": "https://api.github.com/users/ruikangliu/followers",
"following_url": "https://api.github.com/users/ruikangliu/following{/other_user}",
"gists_url": "https://api.github.com/users/ruikangliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruikangliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruikangliu/subscriptions",
"organizations_url": "https://api.github.com/users/ruikangliu/orgs",
"repos_url": "https://api.github.com/users/ruikangliu/repos",
"events_url": "https://api.github.com/users/ruikangliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruikangliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-12T09:40:58 | 2023-12-27T11:53:45 | 2023-12-13T01:24:24 | NONE | null | ### System Info
When I use `device_map = "auto"` for a LLaMA model on more than 3 H800 GPUs, errors pop up during model inference. But when I use fewer than 3 H800 GPUs, everything is OK. There seems to be something wrong with data transfer across devices on H800 GPUs.
My transformers version is 4.36.0, cuda version is 11.8, torch version is 2.0.0. I also tried transformers 4.35.2, cuda 12.1, torch 2.1.1, which also failed.
```
Exception has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1193, in forward
logits = logits.float()
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/generation/utils.py", line 2579, in greedy_search
outputs = self(
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/generation/utils.py", line 1718, in generate
return self.greedy_search(
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 271, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1046, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1147, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1140, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 208, in __call__
return super().__call__(text_inputs, **kwargs)
File "/home/liuruikang/workspace/quant/ptq-lora/tmp.py", line 7, in <module>
print(generator("More and more large language models are opensourced so Hugging Face has"))
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/runpy.py", line 194, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Here is the sample code for reproducing the error (more than 3 H800 GPUs needed):
```
import torch
from transformers import pipeline
checkpoint = "./modelzoo/llama/llama-7b" # path to llama ckpt
generator = pipeline("text-generation", model=checkpoint, device_map="auto", torch_dtype=torch.float16)
print(generator("More and more large language models are opensourced so Hugging Face has"))
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here is the sample code for reproducing the error (more than 3 H800 GPUs needed):
```
import torch
from transformers import pipeline
checkpoint = "./modelzoo/llama/llama-7b" # path to llama ckpt
generator = pipeline("text-generation", model=checkpoint, device_map="auto", torch_dtype=torch.float16)
print(generator("More and more large language models are opensourced so Hugging Face has"))
```
### Expected behavior
`device_map = "auto"` failed for LLaMA model on more than 3 H800 GPUs during model inference | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27967/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27966/comments | https://api.github.com/repos/huggingface/transformers/issues/27966/events | https://github.com/huggingface/transformers/issues/27966 | 2,037,353,627 | I_kwDOCUB6oc55b4yb | 27,966 | Fine tuned Mistral inference issue for >4k context length | {
"login": "oooodoori",
"id": 153339467,
"node_id": "U_kgDOCSPGSw",
"avatar_url": "https://avatars.githubusercontent.com/u/153339467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oooodoori",
"html_url": "https://github.com/oooodoori",
"followers_url": "https://api.github.com/users/oooodoori/followers",
"following_url": "https://api.github.com/users/oooodoori/following{/other_user}",
"gists_url": "https://api.github.com/users/oooodoori/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oooodoori/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oooodoori/subscriptions",
"organizations_url": "https://api.github.com/users/oooodoori/orgs",
"repos_url": "https://api.github.com/users/oooodoori/repos",
"events_url": "https://api.github.com/users/oooodoori/events{/privacy}",
"received_events_url": "https://api.github.com/users/oooodoori/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-12T09:38:14 | 2024-01-20T08:03:43 | 2024-01-20T08:03:43 | NONE | null | **System Info**
- `transformers` version: 4.36.0-dev (main branch)
- Huggingface_hub version: 0.19.4
- PyTorch version: 2.1.0
- Using GPU in script?: yes
We fine-tuned `mistralai/Mistral-7B-Instruct-v0.1` using LoRA on some 8k context length data. Inference was fine with transformers 4.34.0, but after updating the version, inference degenerated into irrelevant repetition for token lengths > 4096.
We were able to get around this by disabling Flash Attention 2, but the overall model performance suffered. It seems to be a problem related to the 4D attention mask implementation in transformers 4.35+. This only happens when the token length exceeds 4k. Any ideas what might be wrong?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27966/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27968/comments | https://api.github.com/repos/huggingface/transformers/issues/27968/events | https://github.com/huggingface/transformers/issues/27968 | 2,037,377,705 | I_kwDOCUB6oc55b-qp | 27,968 | Link is invalid in “examples/flax/image-captioning/README.md” | {
"login": "wplf",
"id": 95006218,
"node_id": "U_kgDOBamuCg",
"avatar_url": "https://avatars.githubusercontent.com/u/95006218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wplf",
"html_url": "https://github.com/wplf",
"followers_url": "https://api.github.com/users/wplf/followers",
"following_url": "https://api.github.com/users/wplf/following{/other_user}",
"gists_url": "https://api.github.com/users/wplf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wplf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wplf/subscriptions",
"organizations_url": "https://api.github.com/users/wplf/orgs",
"repos_url": "https://api.github.com/users/wplf/repos",
"events_url": "https://api.github.com/users/wplf/events{/privacy}",
"received_events_url": "https://api.github.com/users/wplf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-12T09:21:05 | 2023-12-12T13:07:17 | 2023-12-12T13:07:16 | NONE | null | **This repository is focused on the Hub experience and documentation. If you're facing an issue with a specific library, please open an issue in the corresponding GitHub repo. If you're facing an issue with a specific model or dataset, please open an issue in the corresponding HF repo.**
**Bug description.**
A clear and concise description of what the problem is. Ex. Clicking this button is not working when [...]
The hyperlink behind 【FlaxVisionEncoderDecoderModel】 is not working

**Describe the expected behaviour**
A clear and concise description of what you want to happen.
**Additional context**
Add any other relevant context or screenshots here. Please share details such as browser when appropriate.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27968/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27964/comments | https://api.github.com/repos/huggingface/transformers/issues/27964/events | https://github.com/huggingface/transformers/issues/27964 | 2,037,255,672 | I_kwDOCUB6oc55bg34 | 27,964 | RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'AutoModelForImageToImage' from 'transformers.models.auto.modeling_auto' (/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py) | {
"login": "LucaYoy",
"id": 40484649,
"node_id": "MDQ6VXNlcjQwNDg0NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/40484649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucaYoy",
"html_url": "https://github.com/LucaYoy",
"followers_url": "https://api.github.com/users/LucaYoy/followers",
"following_url": "https://api.github.com/users/LucaYoy/following{/other_user}",
"gists_url": "https://api.github.com/users/LucaYoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LucaYoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LucaYoy/subscriptions",
"organizations_url": "https://api.github.com/users/LucaYoy/orgs",
"repos_url": "https://api.github.com/users/LucaYoy/repos",
"events_url": "https://api.github.com/users/LucaYoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/LucaYoy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-12-12T08:40:42 | 2024-01-17T20:25:25 | 2023-12-22T09:30:11 | NONE | null | Hi, I also have a similar issue to #23340, but this time numpy is the culprit:
```
cannot import name 'AutoModelForImageToImage' from 'transformers.models.auto.modeling_auto' (/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py)
```
transformers-cli env gives:
```
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-11 13:51:15.375941: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1382, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/conda/envs/prototype/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/image_processing_auto.py", line 26, in <module>
from ...image_processing_utils import ImageProcessingMixin
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/image_processing_utils.py", line 28, in <module>
from .image_transforms import center_crop, normalize, rescale
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/image_transforms.py", line 47, in <module>
import tensorflow as tf
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/__init__.py", line 38, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 45, in <module>
from tensorflow.python.feature_column import feature_column_lib as feature_column
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/feature_column/feature_column_lib.py", line 18, in <module>
from tensorflow.python.feature_column.feature_column import *
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/feature_column/feature_column.py", line 143, in <module>
from tensorflow.python.layers import base
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/layers/base.py", line 16, in <module>
from tensorflow.python.keras.legacy_tf_layers import base
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/__init__.py", line 25, in <module>
from tensorflow.python.keras import models
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 22, in <module>
from tensorflow.python.keras.engine import functional
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/engine/functional.py", line 32, in <module>
from tensorflow.python.keras.engine import training as training_lib
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 53, in <module>
from tensorflow.python.keras.saving import hdf5_format
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 37, in <module>
import h5py
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/h5py/__init__.py", line 46, in <module>
from ._conv import register_converters as _register_converters
File "h5py/h5t.pxd", line 14, in init h5py._conv
File "h5py/h5t.pyx", line 293, in init h5py.h5t
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/numpy/__init__.py", line 320, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'typeDict'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/prototype/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/commands/pt_to_tf.py", line 24, in <module>
from .. import (
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1373, in __getattr__
value = getattr(module, name)
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1372, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1384, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.auto.image_processing_auto because of the following error (look up to see its traceback):
module 'numpy' has no attribute 'typeDict'
```
Im using:
Numpy 1.24.4
Torch 2.1.1+cu118
Transformers 4.36.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27964/timeline | null | completed | null | null |
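For context on the `typeDict` traceback above: `np.typeDict` was deprecated in NumPy 1.20 and removed in 1.24, so an older h5py build that still references it fails once NumPy >= 1.24 is installed. A hedged sketch of a pure string-parsing version check (the helper name is made up):

```python
def numpy_removed_typedict(version: str) -> bool:
    # np.typeDict was deprecated in NumPy 1.20 and removed in 1.24, so any
    # dependency still referencing it breaks on >= 1.24.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (1, 24)

print(numpy_removed_typedict("1.24.4"))  # True  -> matches the environment above
print(numpy_removed_typedict("1.23.5"))  # False
```

The usual remedies are upgrading h5py/tensorflow to builds that no longer use the alias, or pinning `numpy<1.24`.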
https://api.github.com/repos/huggingface/transformers/issues/27963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27963/comments | https://api.github.com/repos/huggingface/transformers/issues/27963/events | https://github.com/huggingface/transformers/issues/27963 | 2,037,028,845 | I_kwDOCUB6oc55apft | 27,963 | (LLama-2) TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention | {
"login": "VasilGeorgiev39",
"id": 149842188,
"node_id": "U_kgDOCO5pDA",
"avatar_url": "https://avatars.githubusercontent.com/u/149842188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VasilGeorgiev39",
"html_url": "https://github.com/VasilGeorgiev39",
"followers_url": "https://api.github.com/users/VasilGeorgiev39/followers",
"following_url": "https://api.github.com/users/VasilGeorgiev39/following{/other_user}",
"gists_url": "https://api.github.com/users/VasilGeorgiev39/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VasilGeorgiev39/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VasilGeorgiev39/subscriptions",
"organizations_url": "https://api.github.com/users/VasilGeorgiev39/orgs",
"repos_url": "https://api.github.com/users/VasilGeorgiev39/repos",
"events_url": "https://api.github.com/users/VasilGeorgiev39/events{/privacy}",
"received_events_url": "https://api.github.com/users/VasilGeorgiev39/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-12T05:47:38 | 2023-12-13T12:36:00 | 2023-12-13T12:36:00 | NONE | null | ```python
import transformers
import tensor_parallel as tp
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
modelp = tp.tensor_parallel(model) #error
```
As is the example from https://github.com/BlackSamorez/tensor_parallel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27963/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27962/comments | https://api.github.com/repos/huggingface/transformers/issues/27962/events | https://github.com/huggingface/transformers/issues/27962 | 2,036,853,873 | I_kwDOCUB6oc55Z-xx | 27,962 | IterableDatasetShard' object has no attribute '_epoch' | {
"login": "johnchienbronci",
"id": 27708347,
"node_id": "MDQ6VXNlcjI3NzA4MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnchienbronci",
"html_url": "https://github.com/johnchienbronci",
"followers_url": "https://api.github.com/users/johnchienbronci/followers",
"following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}",
"gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions",
"organizations_url": "https://api.github.com/users/johnchienbronci/orgs",
"repos_url": "https://api.github.com/users/johnchienbronci/repos",
"events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnchienbronci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-12-12T02:23:46 | 2024-01-19T08:03:59 | 2024-01-19T08:03:59 | NONE | null | transformers>=4.35.2
Fine-tuning wav2vec2 CTC on multiple GPUs in streaming mode raises an error.
This error occurs only on multiple GPUs. (No error occurs on a single GPU or with transformers version 4.30.2.)
code:
```
class ShuffleCallback(TrainerCallback):
def on_epoch_begin(self, args, state, control, train_dataloader, **kwargs):
if isinstance(train_dataloader.dataset, IterableDatasetShard):
pass # set_epoch() is handled by the Trainer
elif isinstance(train_dataloader.dataset, IterableDataset):
train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1)
```
error message:
```
Traceback (most recent call last):
File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 980, in <module>
main()
File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 929, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1807, in _inner_training_loop
self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 377, in on_epoch_begin
return self.call_event("on_epoch_begin", args, state, control)
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 414, in call_event
result = getattr(callback, event)(
File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 894, in on_epoch_begin
train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1)
AttributeError: 'IterableDatasetShard' object has no attribute '_epoch'. Did you mean: 'epoch'?
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27962/timeline | null | completed | null | null |
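A hedged, version-tolerant variant of the callback body quoted above — reading whichever epoch attribute the dataset exposes (`_epoch` in some releases, `epoch` in others). The helper and the dummy classes are illustrative, not transformers API:

```python
def next_epoch(dataset) -> int:
    # Tolerate the attribute rename between library versions: try the
    # private name first, then the public one.
    for attr in ("_epoch", "epoch"):
        if hasattr(dataset, attr):
            return getattr(dataset, attr) + 1
    raise AttributeError("dataset exposes neither '_epoch' nor 'epoch'")

class OldShard:          # stand-in for a dataset exposing `_epoch`
    _epoch = 0

class NewShard:          # stand-in for IterableDatasetShard exposing `epoch`
    epoch = 2

print(next_epoch(OldShard()), next_epoch(NewShard()))  # 1 3
```

The computed value would then be passed to `set_epoch(...)` as in the snippet above.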
https://api.github.com/repos/huggingface/transformers/issues/27961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27961/comments | https://api.github.com/repos/huggingface/transformers/issues/27961/events | https://github.com/huggingface/transformers/issues/27961 | 2,036,767,263 | I_kwDOCUB6oc55Zpof | 27,961 | CLIPTokenizer (and others based on the same telephoned OpenAI code) incorrect tokenize 1138 out of 34483 words that have an exact match in vocab | {
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in pro... | open | false | null | [] | null | 9 | 2023-12-12T00:36:24 | 2024-01-15T08:02:09 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Visit https://colab.research.google.com/drive/18I0mYxTV-UCDjKWTxfuaR3P6o00w18Q9?usp=sharing for a reproduction.
```
from transformers import CLIPProcessor
tokenizer = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32").tokenizer
# match whole words
whole_words = {k: v for k, v in tokenizer.get_vocab().items() if k.endswith("</w>")}
to_trim = len("</w>")
missed = 0
for token_str, token_int in whole_words.items():
tokenized = tokenizer.tokenize(token_str[:-to_trim])
if len(tokenized) != 1:
missed += 1
print(f"transformers {missed} words out of {len(whole_words)} incorrectly tokenized ({missed/len(whole_words)*100})%")
```
this prints `transformers 1138 words out of 34483 incorrectly tokenized (3.3001768987617086)%`
I see that everyone copied OpenAI's buggy tokenization code. Besides this issue there is also https://github.com/openai/CLIP/issues/343. The code in that repository was obviously not used for training, so this could explain a lot of misses / poor performance in CLIP based models.
### Expected behavior
tokenization of a word that exactly matches an entry in the vocab file should return exactly 1 token | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27961/timeline | null | null | null | null |
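A toy illustration of one way the failure mode reported above can arise (this is not CLIP's real vocab or merge table): BPE only reaches a whole-word vocab entry if its merge rules actually build it, so a single missing merge leaves the word split even though the whole word is in the vocab.

```python
# Toy BPE: "dogs" is in the vocab as a whole word, but the merge list
# never produces it, so tokenization stops at ["dog", "s"].
VOCAB = {"d", "o", "g", "s", "do", "dog", "dogs"}
MERGES = [("d", "o"), ("do", "g")]  # note: no ("dog", "s") merge

def toy_bpe(word):
    parts = list(word)
    for a, b in MERGES:  # apply merges in priority order
        i = 0
        while i < len(parts) - 1:
            if parts[i] == a and parts[i + 1] == b:
                parts[i : i + 2] = [a + b]
            else:
                i += 1
    return parts

print(toy_bpe("dogs"))  # ['dog', 's'] despite 'dogs' being in VOCAB
```

Here `"dogs"` sits in `VOCAB`, yet without a `("dog", "s")` merge the word comes out as two tokens — the same symptom as the 1138 missed whole words measured above, though the real root cause in the OpenAI-derived code may differ.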
https://api.github.com/repos/huggingface/transformers/issues/27960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27960/comments | https://api.github.com/repos/huggingface/transformers/issues/27960/events | https://github.com/huggingface/transformers/pull/27960 | 2,036,751,270 | PR_kwDOCUB6oc5hvXto | 27,960 | Auto model time series | {
"login": "wgifford",
"id": 79663411,
"node_id": "MDQ6VXNlcjc5NjYzNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/79663411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgifford",
"html_url": "https://github.com/wgifford",
"followers_url": "https://api.github.com/users/wgifford/followers",
"following_url": "https://api.github.com/users/wgifford/following{/other_user}",
"gists_url": "https://api.github.com/users/wgifford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgifford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgifford/subscriptions",
"organizations_url": "https://api.github.com/users/wgifford/orgs",
"repos_url": "https://api.github.com/users/wgifford/repos",
"events_url": "https://api.github.com/users/wgifford/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgifford/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-12T00:17:26 | 2024-01-19T08:04:01 | 2024-01-19T08:04:01 | CONTRIBUTOR | null | # What does this PR do?
Adds auto model capability for PatchTSMixer and PatchTST models. This includes support for:
- AutoModelForTimeSeriesClassification
- AutoModelForTimeSeriesPrediction
- AutoModelForTimeSeriesRegression
@kashif | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27960/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27960",
"html_url": "https://github.com/huggingface/transformers/pull/27960",
"diff_url": "https://github.com/huggingface/transformers/pull/27960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27960.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27959/comments | https://api.github.com/repos/huggingface/transformers/issues/27959/events | https://github.com/huggingface/transformers/issues/27959 | 2,036,723,024 | I_kwDOCUB6oc55Ze1Q | 27,959 | KeyError: 'mistral' | {
"login": "brian-bould",
"id": 144232955,
"node_id": "U_kgDOCJjR-w",
"avatar_url": "https://avatars.githubusercontent.com/u/144232955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brian-bould",
"html_url": "https://github.com/brian-bould",
"followers_url": "https://api.github.com/users/brian-bould/followers",
"following_url": "https://api.github.com/users/brian-bould/following{/other_user}",
"gists_url": "https://api.github.com/users/brian-bould/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brian-bould/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brian-bould/subscriptions",
"organizations_url": "https://api.github.com/users/brian-bould/orgs",
"repos_url": "https://api.github.com/users/brian-bould/repos",
"events_url": "https://api.github.com/users/brian-bould/events{/privacy}",
"received_events_url": "https://api.github.com/users/brian-bould/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-12-11T23:45:25 | 2024-01-24T00:07:48 | null | NONE | null | ### System Info
System Info
M2
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
running on pinokio.
loading mistralai_Mixtral-8x7B-v0.1 model.
error: Traceback (most recent call last):
File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/ui_model_menu.py", line 209, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/models.py", line 88, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/models.py", line 146, in huggingface_loader
config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=params['trust_remote_code'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1064, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 761, in __getitem__
raise KeyError(key)
KeyError: 'mixtral'
### Expected behavior
run the model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27959/timeline | null | null | null | null |
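The `KeyError` in the traceback above is a registry miss: the installed library predates the `mixtral` architecture, so the config mapping has no entry for that `model_type`. A toy sketch of the failing lookup (the mapping contents are illustrative, not the real `CONFIG_MAPPING`):

```python
# Toy registry lookup mirroring the failing line in the traceback above.
CONFIG_MAPPING = {"llama": "LlamaConfig", "mistral": "MistralConfig"}

def resolve_config(model_type: str) -> str:
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        raise KeyError(
            f"{model_type!r} is not registered; this usually means the "
            "installed library version predates the architecture"
        ) from None

print(resolve_config("mistral"))  # MistralConfig
```

Upgrading to a transformers release that registers the missing architecture makes the lookup succeed.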
https://api.github.com/repos/huggingface/transformers/issues/27958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27958/comments | https://api.github.com/repos/huggingface/transformers/issues/27958/events | https://github.com/huggingface/transformers/pull/27958 | 2,036,665,751 | PR_kwDOCUB6oc5hvE1b | 27,958 | [Doc] Spanish translation of glossary.md | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-11T22:48:14 | 2023-12-13T17:33:05 | 2023-12-13T17:22:00 | CONTRIBUTOR | null | # What does this PR do?
Add the Spanish version of `glossary.md` to `transformers/docs/source/es`
Fix some typos in `en/glossary.md`
Fix `TensorParallel` link at `Z` section in both files.
Fixes #15947
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel @sgugger @osanseviero @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27958/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27958",
"html_url": "https://github.com/huggingface/transformers/pull/27958",
"diff_url": "https://github.com/huggingface/transformers/pull/27958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27958.patch",
"merged_at": "2023-12-13T17:22:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/27957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27957/comments | https://api.github.com/repos/huggingface/transformers/issues/27957/events | https://github.com/huggingface/transformers/issues/27957 | 2,036,541,854 | I_kwDOCUB6oc55Yyme | 27,957 | XLMRoberta with Flash Attention 2 | {
"login": "IvanPy96",
"id": 64599936,
"node_id": "MDQ6VXNlcjY0NTk5OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/64599936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanPy96",
"html_url": "https://github.com/IvanPy96",
"followers_url": "https://api.github.com/users/IvanPy96/followers",
"following_url": "https://api.github.com/users/IvanPy96/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanPy96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanPy96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanPy96/subscriptions",
"organizations_url": "https://api.github.com/users/IvanPy96/orgs",
"repos_url": "https://api.github.com/users/IvanPy96/repos",
"events_url": "https://api.github.com/users/IvanPy96/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanPy96/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First... | open | false | null | [] | null | 3 | 2023-12-11T21:16:49 | 2023-12-21T13:39:48 | null | NONE | null | ### System Info
- transformers version: 4.36.0
- Platform: Linux-4.19.0-22-amd64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("my_model/", attn_implementation="flash_attention_2")
### Expected behavior
Ability to use flash attention 2 for inference. Is it possible to add support for flash attention 2 for the XLMRoberta model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27957/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27956/comments | https://api.github.com/repos/huggingface/transformers/issues/27956/events | https://github.com/huggingface/transformers/pull/27956 | 2,036,515,405 | PR_kwDOCUB6oc5hujXW | 27,956 | add `modules_in_block_to_quantize` arg in GPTQconfig | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-11T20:58:33 | 2023-12-13T19:14:08 | 2023-12-13T19:13:44 | MEMBER | null | # What does this PR do?
This PR adds the `modules_in_block_to_quantize` quantization arg for GPTQ. This is necessary for converting specific layers to quantized layers. With this PR, we should be able to run the GPTQ Mixtral model. See the related [PR](https://github.com/huggingface/optimum/pull/1585) in optimum.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
model_name = "TheBloke/Mixtral-8x7B-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map={"":0})
print(model)
inputs = tokenizer.encode("Hello, how are you today ?", return_tensors="pt").to(0)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27956/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27956",
"html_url": "https://github.com/huggingface/transformers/pull/27956",
"diff_url": "https://github.com/huggingface/transformers/pull/27956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27956.patch",
"merged_at": "2023-12-13T19:13:44"
} |
https://api.github.com/repos/huggingface/transformers/issues/27955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27955/comments | https://api.github.com/repos/huggingface/transformers/issues/27955/events | https://github.com/huggingface/transformers/pull/27955 | 2,036,220,067 | PR_kwDOCUB6oc5htic8 | 27,955 | [`Mixtral`] Change mistral op order | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T17:53:21 | 2023-12-11T18:03:24 | 2023-12-11T18:03:19 | CONTRIBUTOR | null | # What does this PR do?
This PR slightly refactors the forward pass logic of `MixtralBLockSparseTop2MLP` to not have `routing_weights` as a required arg in the forward pass as AWQ does not handle multiple args in the forward pass of the modules (assumes all modules have `hidden_states` as input)
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27955/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27955",
"html_url": "https://github.com/huggingface/transformers/pull/27955",
"diff_url": "https://github.com/huggingface/transformers/pull/27955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27955.patch",
"merged_at": "2023-12-11T18:03:19"
} |
https://api.github.com/repos/huggingface/transformers/issues/27954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27954/comments | https://api.github.com/repos/huggingface/transformers/issues/27954/events | https://github.com/huggingface/transformers/issues/27954 | 2,036,020,560 | I_kwDOCUB6oc55WzVQ | 27,954 | does not appear to have a file named config.json | {
"login": "riyaj8888",
"id": 29457825,
"node_id": "MDQ6VXNlcjI5NDU3ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/29457825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riyaj8888",
"html_url": "https://github.com/riyaj8888",
"followers_url": "https://api.github.com/users/riyaj8888/followers",
"following_url": "https://api.github.com/users/riyaj8888/following{/other_user}",
"gists_url": "https://api.github.com/users/riyaj8888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riyaj8888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riyaj8888/subscriptions",
"organizations_url": "https://api.github.com/users/riyaj8888/orgs",
"repos_url": "https://api.github.com/users/riyaj8888/repos",
"events_url": "https://api.github.com/users/riyaj8888/events{/privacy}",
"received_events_url": "https://api.github.com/users/riyaj8888/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 19 | 2023-12-11T16:09:58 | 2024-01-08T10:12:40 | null | NONE | null | Initially I was able to load this model; now suddenly it's giving the error below, in the same notebook:
codellama/CodeLlama-7b-Instruct-hf does not appear to have a file named config.json | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27954/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27953/comments | https://api.github.com/repos/huggingface/transformers/issues/27953/events | https://github.com/huggingface/transformers/issues/27953 | 2,035,940,203 | I_kwDOCUB6oc55Wftr | 27,953 | Multi GPU inference not supported with Mixtral(moe)! | {
"login": "DataCTE",
"id": 105170707,
"node_id": "U_kgDOBkTHEw",
"avatar_url": "https://avatars.githubusercontent.com/u/105170707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DataCTE",
"html_url": "https://github.com/DataCTE",
"followers_url": "https://api.github.com/users/DataCTE/followers",
"following_url": "https://api.github.com/users/DataCTE/following{/other_user}",
"gists_url": "https://api.github.com/users/DataCTE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DataCTE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataCTE/subscriptions",
"organizations_url": "https://api.github.com/users/DataCTE/orgs",
"repos_url": "https://api.github.com/users/DataCTE/repos",
"events_url": "https://api.github.com/users/DataCTE/events{/privacy}",
"received_events_url": "https://api.github.com/users/DataCTE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-11T15:31:21 | 2024-01-11T12:34:33 | 2024-01-11T12:34:33 | NONE | null | ### System Info
(most recent call last):
File "/deep-pool/inference/text-generation-webui/modules/callbacks.py", line 57, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/deep-pool/inference/text-generation-webui/modules/text_generation.py", line 352, in generate_with_callback
shared.model.generate(**kwargs)
File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/generation/utils.py", line 1764, in generate
return self.sample(
^^^^^^^^^^^^
File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/generation/utils.py", line 2861, in sample
outputs = self(
^^^^^
File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/models/mixtral/modeling_mixtral.py", line 1244, in forward
aux_loss = load_balancing_loss_func(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/models/mixtral/modeling_mixtral.py", line 98, in load_balancing_loss_func
gate_logits = torch.cat(gate_logits, dim=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Output generated in 2.42 seconds (0.00 tokens/s, 0 tokens, context 65, seed 459973075)
It seems that no matter what I try, Mixtral models do not support multi-GPU inference. No other model via transformers has this issue as far as I know, and this seems to be a bug of some kind.
Thank you so much for your time.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto" ,use_flash_attention_2=True)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
### Expected behavior
model output (but getting multi gpu inference not supported) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27953/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27953/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27951/comments | https://api.github.com/repos/huggingface/transformers/issues/27951/events | https://github.com/huggingface/transformers/pull/27951 | 2,035,826,425 | PR_kwDOCUB6oc5hsLfw | 27,951 | Fix AMD scheduled CI not triggered | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T14:37:36 | 2023-12-11T15:22:12 | 2023-12-11T15:22:11 | COLLABORATOR | null | # What does this PR do?
A bug was introduced in #27743: the AMD scheduled CI was restructured, but `github.event_name == 'schedule'` should have been changed to `github.event_name == 'workflow_run'`.
Currently, the (actual) AMD scheduled CI is not triggered. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27951/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27951",
"html_url": "https://github.com/huggingface/transformers/pull/27951",
"diff_url": "https://github.com/huggingface/transformers/pull/27951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27951.patch",
"merged_at": "2023-12-11T15:22:11"
} |
https://api.github.com/repos/huggingface/transformers/issues/27950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27950/comments | https://api.github.com/repos/huggingface/transformers/issues/27950/events | https://github.com/huggingface/transformers/pull/27950 | 2,035,788,688 | PR_kwDOCUB6oc5hsDNQ | 27,950 | [`Awq`] Enable the possibility to skip quantization for some target modules | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-11T14:19:25 | 2023-12-25T10:17:58 | 2023-12-25T10:06:57 | CONTRIBUTOR | null | # What does this PR do?
Adds the possibility to load AWQ models if some modules of the model are skipped for quantization.
E.g. for Whisper, Llava, and Mixtral, we respectively don't want to quantize the encoder, the vision encoder, and the gate layer, to ensure inference stability.
Let's merge it once AWQ makes the 0.1.8 release
cc @ArthurZucker @casper-hansen @TheBloke @SunMarc
https://github.com/casper-hansen/AutoAWQ/pull/248
This PR makes it also possible to run multi-modal models with AWQ:
```py
from transformers import pipeline
from PIL import Image
import requests
model_id = "ybelkada/llava-1.5-7b-hf-awq"
pipe = pipeline("image-to-text", model=model_id, device=0)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nCan you please describe this image?\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(outputs[0]["generated_text"])
```

> USER: \nCan you please describe this image?\nASSISTANT: The image features a brown and white cat sitting on a green surface, possibly a carpet or a grassy area. The cat is holding a red ball in its paws, seemingly playing with it. The cat appears to be focused on the ball, possibly preparing to play or just enjoying the toy.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27950/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27950",
"html_url": "https://github.com/huggingface/transformers/pull/27950",
"diff_url": "https://github.com/huggingface/transformers/pull/27950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27950.patch",
"merged_at": "2023-12-25T10:06:57"
} |
https://api.github.com/repos/huggingface/transformers/issues/27949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27949/comments | https://api.github.com/repos/huggingface/transformers/issues/27949/events | https://github.com/huggingface/transformers/pull/27949 | 2,035,770,836 | PR_kwDOCUB6oc5hr_Sj | 27,949 | In PreTrainedTokenizerBase add missing word in error message | {
"login": "petergtz",
"id": 3618401,
"node_id": "MDQ6VXNlcjM2MTg0MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3618401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petergtz",
"html_url": "https://github.com/petergtz",
"followers_url": "https://api.github.com/users/petergtz/followers",
"following_url": "https://api.github.com/users/petergtz/following{/other_user}",
"gists_url": "https://api.github.com/users/petergtz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petergtz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petergtz/subscriptions",
"organizations_url": "https://api.github.com/users/petergtz/orgs",
"repos_url": "https://api.github.com/users/petergtz/repos",
"events_url": "https://api.github.com/users/petergtz/events{/privacy}",
"received_events_url": "https://api.github.com/users/petergtz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-11T14:10:44 | 2023-12-21T14:03:36 | 2023-12-11T15:12:40 | CONTRIBUTOR | null | # What does this PR do?
This is a minor cosmetic change in the error message when invoking the tokenizer:
"text input must of type" -> "text input must be of type"
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27949/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27949",
"html_url": "https://github.com/huggingface/transformers/pull/27949",
"diff_url": "https://github.com/huggingface/transformers/pull/27949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27949.patch",
"merged_at": "2023-12-11T15:12:40"
} |
https://api.github.com/repos/huggingface/transformers/issues/27948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27948/comments | https://api.github.com/repos/huggingface/transformers/issues/27948/events | https://github.com/huggingface/transformers/pull/27948 | 2,035,739,062 | PR_kwDOCUB6oc5hr4Ou | 27,948 | Hot-fix-mixstral-loss | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-11T13:56:35 | 2023-12-12T11:20:30 | 2023-12-12T11:20:29 | COLLABORATOR | null | # What does this PR do?
Fixes
```python
load_balancing_loss_func
gate_logits = torch.cat(gate_logits, dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:7! (when checking argument for argument tensors in method wrapper_cat)
gate_logits = torch.cat(gate_logits, dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:6! (when checking argument for argument tensors in method wrapper_cat)
```
which appears when computing the loss in parallel settings (accelerate).
The actual tensors are pretty small ((batch x seq_length), 2), so putting them all on CPU should be alright. There is no perfect solution for now.
Either this or we use some gather operation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27948/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27948/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27948",
"html_url": "https://github.com/huggingface/transformers/pull/27948",
"diff_url": "https://github.com/huggingface/transformers/pull/27948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27948.patch",
"merged_at": "2023-12-12T11:20:29"
} |
https://api.github.com/repos/huggingface/transformers/issues/27947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27947/comments | https://api.github.com/repos/huggingface/transformers/issues/27947/events | https://github.com/huggingface/transformers/pull/27947 | 2,035,719,098 | PR_kwDOCUB6oc5hrzzR | 27,947 | Fix test for auto_find_batch_size on multi-GPU | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T13:46:52 | 2023-12-11T14:57:43 | 2023-12-11T14:57:41 | CONTRIBUTOR | null | # What does this PR do?
The new test added in https://github.com/huggingface/transformers/pull/27568 doesn't account for multi-GPU, when the `bs` is multiplied by `n_gpu` for the effective train batch size. This PR modifies the test slightly as a result to work on any number of GPUs (and CPU)
Fixes # (issue)
Failing nightly test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27947/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27947",
"html_url": "https://github.com/huggingface/transformers/pull/27947",
"diff_url": "https://github.com/huggingface/transformers/pull/27947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27947.patch",
"merged_at": "2023-12-11T14:57:41"
} |
https://api.github.com/repos/huggingface/transformers/issues/27946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27946/comments | https://api.github.com/repos/huggingface/transformers/issues/27946/events | https://github.com/huggingface/transformers/pull/27946 | 2,035,678,553 | PR_kwDOCUB6oc5hrq4b | 27,946 | Update import message | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T13:26:09 | 2023-12-11T14:58:06 | 2023-12-11T14:58:06 | CONTRIBUTOR | null | # What does this PR do?
Currently, if you do have Accelerate installed, but you don't have the `min_version` specified [here](https://github.com/huggingface/transformers/blob/56be5e80e6cd5264012eb9ea84bd589233a503d9/src/transformers/utils/import_utils.py#L671), you will get a message saying Accelerate is not installed.
So I've improved the error message.
cc @muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27946/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27946",
"html_url": "https://github.com/huggingface/transformers/pull/27946",
"diff_url": "https://github.com/huggingface/transformers/pull/27946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27946.patch",
"merged_at": "2023-12-11T14:58:06"
} |
https://api.github.com/repos/huggingface/transformers/issues/27945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27945/comments | https://api.github.com/repos/huggingface/transformers/issues/27945/events | https://github.com/huggingface/transformers/pull/27945 | 2,035,613,327 | PR_kwDOCUB6oc5hrccI | 27,945 | Fix parameter count in readme for mixtral 45b | {
"login": "CyberTimon",
"id": 78795905,
"node_id": "MDQ6VXNlcjc4Nzk1OTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/78795905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CyberTimon",
"html_url": "https://github.com/CyberTimon",
"followers_url": "https://api.github.com/users/CyberTimon/followers",
"following_url": "https://api.github.com/users/CyberTimon/following{/other_user}",
"gists_url": "https://api.github.com/users/CyberTimon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CyberTimon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CyberTimon/subscriptions",
"organizations_url": "https://api.github.com/users/CyberTimon/orgs",
"repos_url": "https://api.github.com/users/CyberTimon/repos",
"events_url": "https://api.github.com/users/CyberTimon/events{/privacy}",
"received_events_url": "https://api.github.com/users/CyberTimon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-11T12:51:00 | 2023-12-11T14:58:49 | 2023-12-11T14:58:49 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes the parameter count in the readme. In the [mistral blog post](https://mistral.ai/news/mixtral-of-experts/) they state that it's a 45b model, not an 84b / 85b one.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27945/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27945",
"html_url": "https://github.com/huggingface/transformers/pull/27945",
"diff_url": "https://github.com/huggingface/transformers/pull/27945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27945.patch",
"merged_at": "2023-12-11T14:58:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/27944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27944/comments | https://api.github.com/repos/huggingface/transformers/issues/27944/events | https://github.com/huggingface/transformers/pull/27944 | 2,035,356,548 | PR_kwDOCUB6oc5hqjWg | 27,944 | Update bounding box format everywhere | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-11T10:38:09 | 2023-12-11T18:03:42 | 2023-12-11T18:03:42 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes an issue pointed out by https://discuss.huggingface.co/t/owl-vit-postprocess-api-bbox-conversion/65309/2, namely we state that we postprocess to the COCO API, but effectively we're using the Pascal VOC format.
A [very nice blog post](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#:~:text=albumentations%20is%20similar%20to%20pascal_voc,the%20height%20of%20the%20image) explaining all bounding box formats goes into more detail. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27944/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27944",
"html_url": "https://github.com/huggingface/transformers/pull/27944",
"diff_url": "https://github.com/huggingface/transformers/pull/27944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27944.patch",
"merged_at": "2023-12-11T18:03:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/27943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27943/comments | https://api.github.com/repos/huggingface/transformers/issues/27943/events | https://github.com/huggingface/transformers/pull/27943 | 2,035,267,751 | PR_kwDOCUB6oc5hqPz_ | 27,943 | Fix PatchTSMixer Docstrings | {
"login": "vijaye12",
"id": 25958261,
"node_id": "MDQ6VXNlcjI1OTU4MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25958261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vijaye12",
"html_url": "https://github.com/vijaye12",
"followers_url": "https://api.github.com/users/vijaye12/followers",
"following_url": "https://api.github.com/users/vijaye12/following{/other_user}",
"gists_url": "https://api.github.com/users/vijaye12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vijaye12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vijaye12/subscriptions",
"organizations_url": "https://api.github.com/users/vijaye12/orgs",
"repos_url": "https://api.github.com/users/vijaye12/repos",
"events_url": "https://api.github.com/users/vijaye12/events{/privacy}",
"received_events_url": "https://api.github.com/users/vijaye12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-11T09:51:51 | 2023-12-11T11:58:07 | 2023-12-11T11:56:57 | CONTRIBUTOR | null | Fix PatchTSMixer Docstring indentation issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27943/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27943",
"html_url": "https://github.com/huggingface/transformers/pull/27943",
"diff_url": "https://github.com/huggingface/transformers/pull/27943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27943.patch",
"merged_at": "2023-12-11T11:56:57"
} |
https://api.github.com/repos/huggingface/transformers/issues/27942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27942/comments | https://api.github.com/repos/huggingface/transformers/issues/27942/events | https://github.com/huggingface/transformers/pull/27942 | 2,035,251,962 | PR_kwDOCUB6oc5hqMaD | 27,942 | [`Add Mixtral`] Adds support for the Mixtral MoE | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-11T09:43:59 | 2023-12-11T13:35:25 | 2023-12-11T11:50:28 | COLLABORATOR | null | # What does this PR do?
Adds the latest MoE model from mistral AI | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27942/reactions",
"total_count": 62,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 18,
"confused": 0,
"heart": 18,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27942/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27942",
"html_url": "https://github.com/huggingface/transformers/pull/27942",
"diff_url": "https://github.com/huggingface/transformers/pull/27942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27942.patch",
"merged_at": "2023-12-11T11:50:28"
} |
https://api.github.com/repos/huggingface/transformers/issues/27941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27941/comments | https://api.github.com/repos/huggingface/transformers/issues/27941/events | https://github.com/huggingface/transformers/issues/27941 | 2,035,248,886 | I_kwDOCUB6oc55T272 | 27,941 | The "source" button in docs points to 404 | {
"login": "R-N",
"id": 1442761,
"node_id": "MDQ6VXNlcjE0NDI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1442761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R-N",
"html_url": "https://github.com/R-N",
"followers_url": "https://api.github.com/users/R-N/followers",
"following_url": "https://api.github.com/users/R-N/following{/other_user}",
"gists_url": "https://api.github.com/users/R-N/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R-N/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R-N/subscriptions",
"organizations_url": "https://api.github.com/users/R-N/orgs",
"repos_url": "https://api.github.com/users/R-N/repos",
"events_url": "https://api.github.com/users/R-N/events{/privacy}",
"received_events_url": "https://api.github.com/users/R-N/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-11T09:42:26 | 2023-12-12T04:28:28 | 2023-12-12T04:28:28 | NONE | null | ### System Info
Windows 11 64 bit 21h2
Google chrome latest
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Open [latest Trainer docs](https://huggingface.co/docs/transformers/main_classes/trainer)
2. Scroll to training_step
3. Click "source"
For me, it opens [this link](https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/trainer.py#L2697), which shows 404 for me.
### Expected behavior
It opens the source code, for example, source code of training_step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27941/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27940/comments | https://api.github.com/repos/huggingface/transformers/issues/27940/events | https://github.com/huggingface/transformers/pull/27940 | 2,035,220,555 | PR_kwDOCUB6oc5hqFdH | 27,940 | Fix SDPA dispatch & make SDPA CI compatible with torch<2.1.1 | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T09:28:37 | 2023-12-11T09:56:39 | 2023-12-11T09:56:38 | COLLABORATOR | null | As per title.
On torch==2.0.1, these do pass
```
RUN_SLOW=1 pytest tests/models/bart -s -vvvvv -k "torchscript"
RUN_SLOW=1 pytest tests/models/llama -s -vvvvv -k "torchscript"
RUN_SLOW=1 pytest tests/models/whisper -s -vvvvv -k "torchscript"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv
```
On torch==2.1.1, these do pass (https://github.com/huggingface/transformers/pull/26572#issuecomment-1847774858)
```
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/ -s -vvvvv -k "flash or sdpa"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/whisper -s -vvvvv -k "llama"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bart -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv
```
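The intended dispatch rule can be sketched as follows (a simplified stand-in, not the actual `transformers` source; the function name and the boolean availability flag are assumptions for illustration). The key property is that an explicit `"eager"` request must short-circuit before any SDPA requirement checks run:

```python
def select_attn_implementation(requested, sdpa_available):
    # An explicit "eager" request must win immediately, without ever
    # reaching the SDPA requirement checks (the bug was falling through
    # into them anyway).
    if requested == "eager":
        return "eager"
    # An explicit "sdpa" request may hard-fail if requirements aren't met.
    if requested == "sdpa":
        if not sdpa_available:
            raise ValueError("SDPA requested but requirements are not met")
        return "sdpa"
    # No explicit request: prefer SDPA when available, else fall back.
    return "sdpa" if sdpa_available else "eager"
```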
There was a bug where, even though we manually request `attn_implementation="eager"`, we would still go into the SDPA control flow and hard-check that the requirements are met, which is not what we want. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27940/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27940",
"html_url": "https://github.com/huggingface/transformers/pull/27940",
"diff_url": "https://github.com/huggingface/transformers/pull/27940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27940.patch",
"merged_at": "2023-12-11T09:56:38"
} |
https://api.github.com/repos/huggingface/transformers/issues/27939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27939/comments | https://api.github.com/repos/huggingface/transformers/issues/27939/events | https://github.com/huggingface/transformers/pull/27939 | 2,034,958,797 | PR_kwDOCUB6oc5hpLtK | 27,939 | fix cpm-ant tokenizer name | {
"login": "jq460494839",
"id": 4471203,
"node_id": "MDQ6VXNlcjQ0NzEyMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4471203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jq460494839",
"html_url": "https://github.com/jq460494839",
"followers_url": "https://api.github.com/users/jq460494839/followers",
"following_url": "https://api.github.com/users/jq460494839/following{/other_user}",
"gists_url": "https://api.github.com/users/jq460494839/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jq460494839/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jq460494839/subscriptions",
"organizations_url": "https://api.github.com/users/jq460494839/orgs",
"repos_url": "https://api.github.com/users/jq460494839/repos",
"events_url": "https://api.github.com/users/jq460494839/events{/privacy}",
"received_events_url": "https://api.github.com/users/jq460494839/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-11T06:55:22 | 2024-01-02T17:35:10 | 2023-12-27T00:46:56 | NONE | null | # What does this PR do?
After comparison, I found that the tokenizer class names in the config file on HuggingFace and in the transformers library are inconsistent.
Fixes #27938
@ArthurZucker @zh-zheng
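The failure mode can be sketched like this (a simplified stand-in for `AutoTokenizer`'s string-based class lookup; the dictionary and function are illustrative assumptions, while `CpmAntTokenizer` is the casing the library actually registers):

```python
# Illustrative registry: the library registers "CpmAntTokenizer", while the
# hub config in this issue pointed at "CPMAntTokenizer" -- a casing mismatch,
# so the string lookup fails and raises the ValueError from the report.
KNOWN_TOKENIZERS = {"CpmAntTokenizer": object}

def resolve_tokenizer_class(name: str):
    cls = KNOWN_TOKENIZERS.get(name)
    if cls is None:
        raise ValueError(
            f"Tokenizer class {name} does not exist or is not currently imported."
        )
    return cls
```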
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27939/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27939",
"html_url": "https://github.com/huggingface/transformers/pull/27939",
"diff_url": "https://github.com/huggingface/transformers/pull/27939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27939.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27938/comments | https://api.github.com/repos/huggingface/transformers/issues/27938/events | https://github.com/huggingface/transformers/issues/27938 | 2,034,954,335 | I_kwDOCUB6oc55SvBf | 27,938 | ValueError: Tokenizer class CPMAntTokenizer does not exist or is not currently imported. | {
"login": "jq460494839",
"id": 4471203,
"node_id": "MDQ6VXNlcjQ0NzEyMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4471203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jq460494839",
"html_url": "https://github.com/jq460494839",
"followers_url": "https://api.github.com/users/jq460494839/followers",
"following_url": "https://api.github.com/users/jq460494839/following{/other_user}",
"gists_url": "https://api.github.com/users/jq460494839/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jq460494839/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jq460494839/subscriptions",
"organizations_url": "https://api.github.com/users/jq460494839/orgs",
"repos_url": "https://api.github.com/users/jq460494839/repos",
"events_url": "https://api.github.com/users/jq460494839/events{/privacy}",
"received_events_url": "https://api.github.com/users/jq460494839/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-11T06:51:44 | 2024-01-11T11:16:21 | 2024-01-11T11:16:21 | NONE | null | ### System Info
transformers==4.35.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. download cpm-ant-10b from huggingface
2. load cpm-ant tokenizer locally
### Expected behavior
Traceback (most recent call last):
File "/opt/projects/FastChat/fastchat/train/train.py", line 301, in <module>
train()
File "/opt/projects/FastChat/fastchat/train/train.py", line 273, in train
tokenizer = transformers.AutoTokenizer.from_pretrained(
File "/root/miniconda/envs/torch_npu/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 766, in from_pretrained
raise ValueError(
ValueError: Tokenizer class CPMAntTokenizer does not exist or is not currently imported. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27938/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27937/comments | https://api.github.com/repos/huggingface/transformers/issues/27937/events | https://github.com/huggingface/transformers/issues/27937 | 2,034,800,258 | I_kwDOCUB6oc55SJaC | 27,937 | Whisper Large-v3 has problems with language detection | {
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-12-11T04:25:44 | 2024-01-10T17:47:22 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The audio file: https://drive.google.com/file/d/1EFWm7GpP79NUEmUO6rsLo444OugGRDHP/view?usp=sharing
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
output = pipe("sample.flac")
print(output)
```
v3 output: `{'text': ' Mari kita perlahan-lahan dengan penyelidikan baru dan baru yang telah dilakukan tahun lepas.'}`
v2 output: `{'text': " Let's go slow with this new and novel legalization passed last year."}`
tiny.en output: `{'text': " Let's go slow with this new and novel legalization past last year."}`
### Expected behavior
Output should be like v2 and the tiny.en model. I suspect there is something very wrong with language detection. I couldn't run v3 in the original repo (https://github.com/openai/whisper) due to OOM so I'm not sure if this is a problem with the v3 model itself or inference code in HF.
Seems related: https://github.com/huggingface/transformers/issues/27368#issuecomment-1835564217 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27937/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27936/comments | https://api.github.com/repos/huggingface/transformers/issues/27936/events | https://github.com/huggingface/transformers/issues/27936 | 2,034,584,228 | I_kwDOCUB6oc55RUqk | 27,936 | Problems importing LlavaForConditionalGeneration | {
"login": "ppsmk388",
"id": 60417397,
"node_id": "MDQ6VXNlcjYwNDE3Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/60417397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppsmk388",
"html_url": "https://github.com/ppsmk388",
"followers_url": "https://api.github.com/users/ppsmk388/followers",
"following_url": "https://api.github.com/users/ppsmk388/following{/other_user}",
"gists_url": "https://api.github.com/users/ppsmk388/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppsmk388/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppsmk388/subscriptions",
"organizations_url": "https://api.github.com/users/ppsmk388/orgs",
"repos_url": "https://api.github.com/users/ppsmk388/repos",
"events_url": "https://api.github.com/users/ppsmk388/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppsmk388/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-10T23:36:32 | 2023-12-16T02:26:46 | 2023-12-16T02:26:46 | NONE | null | ### System Info
transformers version: 4.35.2
Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
Python version: 3.9.2
Huggingface_hub version: 0.19.4
Safetensors version: 0.4.1
Accelerate version: 0.25.0
PyTorch version (GPU?): 2.1.1+cu121 (True)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import LlavaForConditionalGeneration
### Expected behavior
An error message will be displayed:
ImportError: cannot import name 'LlavaForConditionalGeneration' from 'transformers' (/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/__init__.py)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27936/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27935/comments | https://api.github.com/repos/huggingface/transformers/issues/27935/events | https://github.com/huggingface/transformers/pull/27935 | 2,034,508,529 | PR_kwDOCUB6oc5hnrHn | 27,935 | Fix tensor-parallelism link | {
"login": "steilgedacht",
"id": 89748204,
"node_id": "MDQ6VXNlcjg5NzQ4MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/89748204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steilgedacht",
"html_url": "https://github.com/steilgedacht",
"followers_url": "https://api.github.com/users/steilgedacht/followers",
"following_url": "https://api.github.com/users/steilgedacht/following{/other_user}",
"gists_url": "https://api.github.com/users/steilgedacht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steilgedacht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steilgedacht/subscriptions",
"organizations_url": "https://api.github.com/users/steilgedacht/orgs",
"repos_url": "https://api.github.com/users/steilgedacht/repos",
"events_url": "https://api.github.com/users/steilgedacht/events{/privacy}",
"received_events_url": "https://api.github.com/users/steilgedacht/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-12-10T19:48:32 | 2024-01-22T08:04:30 | 2024-01-22T08:04:30 | NONE | null | # What does this PR do?
Replaces the old link in the Llama configuration file with a link to the new section on the website.
Clean PR of #27840
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27935/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27935",
"html_url": "https://github.com/huggingface/transformers/pull/27935",
"diff_url": "https://github.com/huggingface/transformers/pull/27935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27935.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27934/comments | https://api.github.com/repos/huggingface/transformers/issues/27934/events | https://github.com/huggingface/transformers/pull/27934 | 2,034,454,834 | PR_kwDOCUB6oc5hngpN | 27,934 | [BEiT] Fix test | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-10T17:09:31 | 2023-12-11T08:17:03 | 2023-12-11T08:17:03 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the failing test_forward_signature test for `BeitBackbone`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27934/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27934",
"html_url": "https://github.com/huggingface/transformers/pull/27934",
"diff_url": "https://github.com/huggingface/transformers/pull/27934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27934.patch",
"merged_at": "2023-12-11T08:17:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/27933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27933/comments | https://api.github.com/repos/huggingface/transformers/issues/27933/events | https://github.com/huggingface/transformers/issues/27933 | 2,034,416,293 | I_kwDOCUB6oc55Qrql | 27,933 | Migrate to Pydantic v2 | {
"login": "lmmx",
"id": 2979452,
"node_id": "MDQ6VXNlcjI5Nzk0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmmx",
"html_url": "https://github.com/lmmx",
"followers_url": "https://api.github.com/users/lmmx/followers",
"following_url": "https://api.github.com/users/lmmx/following{/other_user}",
"gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmmx/subscriptions",
"organizations_url": "https://api.github.com/users/lmmx/orgs",
"repos_url": "https://api.github.com/users/lmmx/repos",
"events_url": "https://api.github.com/users/lmmx/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmmx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | 7 | 2023-12-10T15:26:11 | 2024-01-26T16:39:35 | 2024-01-26T16:39:34 | NONE | null | ### Feature request
Pydantic v2 was released five months ago in June 2023.
Transformers has pinned to v1 (#24596), which should only be used as a temporary solution.
Leaving it this way means missing out on the many new features of Pydantic v2, and it leaves little hope of the library keeping pace now that a roadmap to v3 is already emerging.
In #24597 it was mentioned that part of the barrier was (at the time) in external dependencies that couple Transformers to v1:
> Regarding using Pydantic V2, I am afraid that the involved places are not directly in `transformers` codebase.
>
> For example, in
>
> [#24596 (comment)](https://github.com/huggingface/transformers/pull/24596#issuecomment-1615176591)
>
> it shows
>
> ```shell
> 2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c "from deepspeed.launcher.runner import main":
> 2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig
> 2023-06-30T20:07:31.9884613Z 1.621 File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 76, in <module>
> 2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):
> 2023-06-30T20:07:31.9885814Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 171, in __new__
> 2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)
> 2023-06-30T20:07:31.9886812Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 361, in set_model_fields
> 2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)
> 2023-06-30T20:07:31.9888039Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py", line 112, in collect_model_fields
> 2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field "{ann_name}" has conflict with protected namespace "{protected_namespace}"')
> 2023-06-30T20:07:31.9889546Z 1.621 NameError: Field "model_persistence_threshold" has conflict with protected namespace "
> ```
>
> which indicates `/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py` using `pydantic`.
>
> It's the 3rd party libraries using pydantic have to do something in order to be run with pydantic V2. Right now, `transformers` can only pin v1 and wait.
These barriers should at the very least be enumerated; I’m sure there are ways to deal with them without holding the entire repo’s development back.
Libraries such as SQLModel have included support for both v1 and v2.
- https://github.com/tiangolo/sqlmodel/pull/722
- https://github.com/tiangolo/sqlmodel/pull/709
- https://github.com/tiangolo/sqlmodel/pull/699 (first draft, ultimately not merged)
The pin adopted in Transformers has already begun to cause clashes with other libraries that are on v2, such as Gradio (v2.4.2, as raised in #27273).
> Eventually, if `pydantic>=2` is used by many libraries, we might consider to update the requirement (as long as not so many things breaking 😄 )
I fully appreciate the need to maintain backwards compatibility, and examples like SQLModel have demonstrated that it is possible to support both.
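For illustration, a minimal sketch of the version gating such dual-support shims rely on (hypothetical helper name; real shims like SQLModel's also branch on the imported API surface):

```python
from importlib import metadata

def pydantic_major() -> int:
    """Return the installed pydantic major version, or 0 if it is absent."""
    try:
        return int(metadata.version("pydantic").split(".")[0])
    except metadata.PackageNotFoundError:
        return 0

# downstream code can then branch once, at import time:
PYDANTIC_V2 = pydantic_major() >= 2
```

With a single flag like this, v1-only and v2-only code paths can coexist in one codebase instead of pinning.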
### Motivation
The syntax of Pydantic v1 is incompatible with v2. Backpinning should only be used as a temporary measure; it is not a sustainable long-term approach. Specifically, the pin would be relaxed to `pydantic<3.0.0`, as in SQLModel.
### Your contribution
I am opening this feature request to begin discussion and hopefully contribute to its resolution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27933/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27933/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27932/comments | https://api.github.com/repos/huggingface/transformers/issues/27932/events | https://github.com/huggingface/transformers/pull/27932 | 2,034,339,234 | PR_kwDOCUB6oc5hnJXa | 27,932 | Adds VIP-llava to transformers | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-10T11:55:41 | 2023-12-13T09:42:28 | 2023-12-13T09:42:24 | CONTRIBUTOR | null | # What does this PR do?
VIP-llava is a new Llava variant. The only difference between Llava and VIP-llava seems to be that VIP-llava applies a projector layernorm before passing the hidden states into the multi-modal (MM) projector. It also concatenates several hidden states from the image encoder before passing them to the projector.
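As a toy sketch of that difference (illustrative shapes and names, not the actual modeling code), the projector concatenates the selected encoder hidden states along the feature axis, layer-normalizes, then projects:

```python
import numpy as np

def vip_style_projector(hidden_states, proj_weight, eps=1e-5):
    """hidden_states: list of (seq_len, dim) arrays taken from several encoder layers."""
    h = np.concatenate(hidden_states, axis=-1)       # (seq_len, dim * num_layers)
    mu = h.mean(axis=-1, keepdims=True)              # projector layernorm (no affine params)
    sigma = h.std(axis=-1, keepdims=True)
    h = (h - mu) / (sigma + eps)
    return h @ proj_weight                           # (seq_len, text_hidden_dim)

states = [np.random.rand(5, 8) for _ in range(3)]    # 3 encoder layers, seq_len=5, dim=8
out = vip_style_projector(states, np.full((24, 16), 0.1))
print(out.shape)  # (5, 16)
```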
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "ybelkada/vip-llava-7b"
pipe = pipeline("image-to-text", model=model_id, model_kwargs={"load_in_4bit": True, "use_flash_attention_2": True})
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nCan you please describe this image?\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(outputs[0]["generated_text"])
>>> USER: <image>
Can you please describe this image?
ASSISTANT: The image features a brown and white cat sitting on a green surface, with a red ball in its paw. The cat appears to be playing with the ball, possibly a sports ball, as it is positioned in a relaxed manner. The cat's eyes are wide open, indicating that it is focused on the ball and possibly in the middle of a playful moment.
```

> The image features a brown and white cat sitting on a green surface, with a red ball in its paw. The cat appears to be playing with the ball, possibly a sports ball, as it is positioned in a relaxed manner. The cat's eyes are wide open, indicating that it is focused on the ball and possibly in the middle of a playful moment.
Also compatible with Flash Attention 2.
https://github.com/mu-cai/ViP-LLaVA
cc @ArthurZucker @NielsRogge @mu-cai @haotian-liu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27932/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27932/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27932",
"html_url": "https://github.com/huggingface/transformers/pull/27932",
"diff_url": "https://github.com/huggingface/transformers/pull/27932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27932.patch",
"merged_at": "2023-12-13T09:42:24"
} |
https://api.github.com/repos/huggingface/transformers/issues/27931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27931/comments | https://api.github.com/repos/huggingface/transformers/issues/27931/events | https://github.com/huggingface/transformers/pull/27931 | 2,034,296,431 | PR_kwDOCUB6oc5hnAoy | 27,931 | [`Core generation`] Adds support for static KV cache | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2023-12-10T09:48:11 | 2024-02-01T06:24:51 | null | COLLABORATOR | null | # What does this PR do?
Draft for now (fyi @gante, @patrickvonplaten and @tomaarsen ) 🤗
- [x] Refactors the way we deal with attention masks:
  - causal and padding masks are separated
  - merged in one line; no attention mask utils are needed, no extra complicated logic, everything explicit
  - LlamaAttention is now self-contained. Makes a lot more sense TBH
- [x] Save the cache class in the generation config
- [x] Init the cache with the batch size (from the generate call) and the `max_length` from the generation config
- [x] torch.compile + cprofile
Currently getting ~7x to 10x speedups compared to the dynamic cache with torch.compile for a single forward pass (agnostic to batch size, but faster for smaller batches)
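As a rough mental model of the static-cache idea (a toy sketch, not this PR's implementation): preallocate the key/value buffers once at `max_length` and write new states in place, so tensor shapes stay fixed across decoding steps, which is what lets `torch.compile` avoid recompilation:

```python
import numpy as np

class ToyStaticCache:
    """Fixed-shape KV cache sketch: buffers are allocated once at max_length."""

    def __init__(self, batch, heads, max_length, head_dim):
        self.keys = np.zeros((batch, heads, max_length, head_dim), dtype=np.float32)
        self.values = np.zeros_like(self.keys)
        self.seen = 0  # number of positions filled so far

    def update(self, k_new, v_new):
        n = k_new.shape[2]
        self.keys[:, :, self.seen:self.seen + n] = k_new
        self.values[:, :, self.seen:self.seen + n] = v_new
        self.seen += n
        return self.keys, self.values  # same shapes at every step

cache = ToyStaticCache(batch=1, heads=2, max_length=8, head_dim=4)
k, v = cache.update(np.ones((1, 2, 3, 4)), np.ones((1, 2, 3, 4)))
print(k.shape, cache.seen)  # (1, 2, 8, 4) 3
```

Positions beyond `seen` stay zero, so the attention mask (rather than the tensor shape) has to hide the unfilled slots.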
Some benchmarks on running vs Dynamic Cache:
1. dynamic only
- Current PR, dynamic only
<img width="584" alt="image" src="https://github.com/huggingface/transformers/assets/48595927/84bd8542-ec71-4261-bbd0-75c84ea2328a">
- Main, dynamic only
2. Dynamic vs Static vs Static Compiled
fixes #28075 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27931/reactions",
"total_count": 11,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27931/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27931",
"html_url": "https://github.com/huggingface/transformers/pull/27931",
"diff_url": "https://github.com/huggingface/transformers/pull/27931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27931.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27930/comments | https://api.github.com/repos/huggingface/transformers/issues/27930/events | https://github.com/huggingface/transformers/issues/27930 | 2,034,262,567 | I_kwDOCUB6oc55QGIn | 27,930 | An error when creating test_dataloader in Time series transformer | {
"login": "kkckk1110",
"id": 144304282,
"node_id": "U_kgDOCJnomg",
"avatar_url": "https://avatars.githubusercontent.com/u/144304282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkckk1110",
"html_url": "https://github.com/kkckk1110",
"followers_url": "https://api.github.com/users/kkckk1110/followers",
"following_url": "https://api.github.com/users/kkckk1110/following{/other_user}",
"gists_url": "https://api.github.com/users/kkckk1110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkckk1110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkckk1110/subscriptions",
"organizations_url": "https://api.github.com/users/kkckk1110/orgs",
"repos_url": "https://api.github.com/users/kkckk1110/repos",
"events_url": "https://api.github.com/users/kkckk1110/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkckk1110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/k... | null | 18 | 2023-12-10T07:59:56 | 2023-12-12T17:40:20 | 2023-12-12T17:40:19 | NONE | null | ### System Info
I am running a time series transformer following the tutorial in Huggingface.
I have dynamic_features_real = 48 in my dataset. However, I came across an error when creating the test_dataloader:
```python
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 75 and the array at index 2 has size 67.
```
Some settings in my code are:
```python
len(train_example['target']) = 59
len(validation_example['target']) = 67
np.array(validation_example['feat_dynamic_real']).shape = (48, 67)
prediction_length = 8
```
I think the problem is with feat_dynamic_real, because when I set it to 0 the code runs normally. However, I have tried and failed to solve the problem.
Can anyone help me fix the problem? Thanks a lot!
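My current guess (unverified) is that for the test split, each `feat_dynamic_real` row must cover `len(target) + prediction_length` steps (67 + 8 = 75 here, matching the sizes in the error). A small numpy sketch of padding the features into the prediction window (a hypothetical helper, not from the tutorial):

```python
import numpy as np

def extend_dynamic_features(example, prediction_length):
    """Pad each dynamic real feature so it also covers the prediction window.

    Repeats the last observed value; a hypothetical fix sketch, not verified.
    """
    feats = np.asarray(example["feat_dynamic_real"], dtype=np.float32)  # (num_feats, len(target))
    pad = np.repeat(feats[:, -1:], prediction_length, axis=1)
    example["feat_dynamic_real"] = np.concatenate([feats, pad], axis=1)
    return example

example = {"target": np.zeros(67), "feat_dynamic_real": np.zeros((48, 67))}
example = extend_dynamic_features(example, prediction_length=8)
print(example["feat_dynamic_real"].shape)  # (48, 75)
```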
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
batch = next(iter(test_dataloader))
for k, v in batch.items():
    print(k, v.shape, v.type())
```
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 75 and the array at index 2 has size 67
### Expected behavior
I hope to fix the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27930/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27929/comments | https://api.github.com/repos/huggingface/transformers/issues/27929/events | https://github.com/huggingface/transformers/pull/27929 | 2,034,210,195 | PR_kwDOCUB6oc5hmvSt | 27,929 | fix: handle multiprocess properly in trainer checkpointing | {
"login": "thundergolfer",
"id": 12058921,
"node_id": "MDQ6VXNlcjEyMDU4OTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/12058921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thundergolfer",
"html_url": "https://github.com/thundergolfer",
"followers_url": "https://api.github.com/users/thundergolfer/followers",
"following_url": "https://api.github.com/users/thundergolfer/following{/other_user}",
"gists_url": "https://api.github.com/users/thundergolfer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thundergolfer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thundergolfer/subscriptions",
"organizations_url": "https://api.github.com/users/thundergolfer/orgs",
"repos_url": "https://api.github.com/users/thundergolfer/repos",
"events_url": "https://api.github.com/users/thundergolfer/events{/privacy}",
"received_events_url": "https://api.github.com/users/thundergolfer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-12-10T04:06:19 | 2023-12-13T17:14:22 | 2023-12-13T17:14:21 | CONTRIBUTOR | null | # What does this PR do?
Follow-up to https://github.com/huggingface/transformers/pull/27820 which is bugged for multi-device/multiprocess training. I made the error of thinking that in multiprocess training the `._save_checkpoint()` method was already restricted to a single writer.
I've fixed that now and augmented an existing multiprocess test to validate checkpointing functionality.
I've also noted with a `TODO` something I found pretty confusing in the current code. `store_flos()` isn't checkpointing related in my opinion, but it does an `all_gather` and thus if all processes don't enter the `store_flos()` fn the training program hangs. In my opinion this code should be moved out of the checkpointing method so that this method conceptually supports entrance and execution by a single writer (the process with `self.args.should_save == True`).
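To illustrate the hang with a toy stand-in (not the Trainer code): if a collective like the `all_gather` inside `store_flos()` is entered by only some processes, the rest block forever. The sketch below uses a `threading.Barrier` as the stand-in collective and gates only the disk write behind a single writer:

```python
import threading

WORLD_SIZE = 2
collective = threading.Barrier(WORLD_SIZE)  # stand-in for the all_gather in store_flos()
written = []

def save_checkpoint(rank):
    collective.wait(timeout=5)  # every rank must enter this, or the others hang
    if rank == 0:               # single writer, analogous to `args.should_save`
        written.append(f"checkpoint written by rank {rank}")

threads = [threading.Thread(target=save_checkpoint, args=(r,)) for r in range(WORLD_SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(written)  # ['checkpoint written by rank 0']
```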
I didn't set up a multi-GPU VM to run the test, but this multi-GPU Modal script runs and passes the test:
```python
import modal
import subprocess
GIT_SHA = "d867b232d46a0652e1bfe6eda7bc0804b9ad5ea4" # my fork's latest commit
image = (
modal.Image.debian_slim(python_version="3.10")
.apt_install("git").pip_install("pytest")
.run_commands(
"cd /root && git init .",
"cd /root && git remote add origin https://github.com/thundergolfer/transformers",
f"cd /root && git fetch --depth=1 origin {GIT_SHA} && git checkout {GIT_SHA}",
"cd /root && pip install -e \".[dev]\"",
)
)
stub = modal.Stub(image=image)
@stub.function(
gpu=modal.gpu.T4(count=2),
# Can uncomment this to quickly modify local test implementation
# and sync with remote container.
# mounts=[modal.Mount.from_local_file(
# local_path="./tests/trainer/test_trainer.py",
# remote_path="/root/tests/trainer/test_trainer.py",
# )],
secrets=[modal.Secret.from_dict({"RUN_SLOW": "1", "NCCL_P2P_LEVEL": "PIX"})],
timeout=600,
)
def run():
    subprocess.run("nvidia-smi", shell=True, check=True)
    test_module = "tests/trainer/test_trainer.py"
    test_identifier = f"{test_module}::TrainerIntegrationTest::test_end_to_end_example"
    subprocess.run(f"pytest -s -v {test_identifier}", shell=True, check=True)
```
**Fixes** https://github.com/huggingface/transformers/issues/27925
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@muellerzr, @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27929/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27929/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27929",
"html_url": "https://github.com/huggingface/transformers/pull/27929",
"diff_url": "https://github.com/huggingface/transformers/pull/27929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27929.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27928/comments | https://api.github.com/repos/huggingface/transformers/issues/27928/events | https://github.com/huggingface/transformers/issues/27928 | 2,034,199,269 | I_kwDOCUB6oc55P2rl | 27,928 | [Question] What is the main difference between "AutoModelForCasualLM" and "PeftModelForCausalLM"? | {
"login": "daehuikim",
"id": 40377750,
"node_id": "MDQ6VXNlcjQwMzc3NzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/40377750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daehuikim",
"html_url": "https://github.com/daehuikim",
"followers_url": "https://api.github.com/users/daehuikim/followers",
"following_url": "https://api.github.com/users/daehuikim/following{/other_user}",
"gists_url": "https://api.github.com/users/daehuikim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daehuikim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daehuikim/subscriptions",
"organizations_url": "https://api.github.com/users/daehuikim/orgs",
"repos_url": "https://api.github.com/users/daehuikim/repos",
"events_url": "https://api.github.com/users/daehuikim/events{/privacy}",
"received_events_url": "https://api.github.com/users/daehuikim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 15 | 2023-12-10T03:10:36 | 2024-02-01T00:49:07 | 2024-02-01T00:49:07 | NONE | null | I also wrote this down in the peft repo. However, this issue is also related to transformers, so I am writing my question here again.
The issue is here in peft: https://github.com/huggingface/peft/issues/1245
Hello, sorry for the naive question.
I noticed that the `model.generate()` function performed differently when running inference right after training with `trainer.model` versus after merging and unloading. (Every parameter is the same.)
So I checked the two different objects with a simple print function.
The difference was the object that contains the model.
1. ```model = trainer.model```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(32008, 5120)
(modules_to_save): ModuleDict(
(default): Embedding(32008, 5120)
)
)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(k_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(v_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(o_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(up_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(down_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=13824, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=13824, out_features=5120, bias=False)
)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): ModulesToSaveWrapper(
(original_module): Linear(in_features=5120, out_features=32008, bias=False)
(modules_to_save): ModuleDict(
(default): Linear(in_features=5120, out_features=32008, bias=False)
)
)
)
)
)
```
2. ```AutoModelForCausalLM.from_pretrained``` (after merging the LoRA adapter)
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32008, 5120)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(k_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(v_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(o_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(in_features=5120, out_features=13824, bias=False)
(up_proj): Linear4bit(in_features=5120, out_features=13824, bias=False)
(down_proj): Linear4bit(in_features=13824, out_features=5120, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=5120, out_features=32008, bias=False)
)
```
I think both modes should work exactly the same way, but when I ran inference with the model.generate function, I found that #1 (PeftModelForCausalLM) is much more accurate. I'd like to know why: is there a theoretical or engineering reason for this?
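One plausible (hedged) explanation, given that the base layers are `Linear4bit`: with the PEFT wrapper the `lora_B @ lora_A` delta is applied in higher precision on top of the quantized base, whereas merging folds that small delta into weights that are then re-quantized, so part of the adapter can be rounded away. A toy illustration with a uniform quantizer (the step size and numbers are made up; this stands in for the real 4-bit kernels, it is not them):

```python
def quantize(w, step=0.25):
    """Toy uniform quantizer standing in for 4-bit quantization of a weight."""
    return round(w / step) * step


w, lora_delta = 1.1, 0.1  # illustrative numbers, not real weights

# 1) Adapter kept separate: base is quantized, LoRA delta applied in full precision.
kept_separate = quantize(w) + lora_delta
# 2) Adapter merged into the quantized base: the small delta is rounded away.
merged_then_quantized = quantize(w + lora_delta)
```

With these numbers the two paths disagree, which is the kind of gap that could show up as the accuracy difference described above.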
Thanks for reading my long question! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27928/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27927/comments | https://api.github.com/repos/huggingface/transformers/issues/27927/events | https://github.com/huggingface/transformers/issues/27927 | 2,034,137,763 | I_kwDOCUB6oc55Pnqj | 27,927 | Terminate TextIteratorStreamer Before Done | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-09T23:46:55 | 2023-12-10T16:31:47 | 2023-12-10T16:31:46 | NONE | null | Hi,
Is there any way to terminate a TextIteratorStreamer before the text has finished generating? Related to [this](https://github.com/gradio-app/gradio/issues/6724).
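There is no built-in terminate method on the streamer as of this version; one hedged workaround sketch (all names here are illustrative, not a real transformers API) that mimics the streamer's queue-based design with an explicit stop sentinel:

```python
import queue


class StoppableTextStreamer:
    """Queue-backed text iterator that the consumer can stop early.

    Mimics the design of transformers' TextIteratorStreamer (a producer
    thread pushes text chunks, the consumer iterates); stop() injects a
    sentinel so iteration ends before generation has finished.
    """

    _SENTINEL = object()

    def __init__(self, timeout=None):
        self.text_queue = queue.Queue()
        self.timeout = timeout
        self._stopped = False

    def put(self, text):
        # Called from the producer (generation) thread; drop text after stop.
        if not self._stopped:
            self.text_queue.put(text)

    def stop(self):
        # Called from the consumer to bail out of iteration early.
        self._stopped = True
        self.text_queue.put(self._SENTINEL)

    def __iter__(self):
        return self

    def __next__(self):
        item = self.text_queue.get(timeout=self.timeout)
        if item is self._SENTINEL:
            raise StopIteration
        return item
```

Note that stopping the iterator only ends consumption; to abort the `generate` call itself, the usual approach is a custom `StoppingCriteria` that checks a shared flag.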
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27927/timeline | null | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27925/comments | https://api.github.com/repos/huggingface/transformers/issues/27925/events | https://github.com/huggingface/transformers/issues/27925 | 2,033,911,870 | I_kwDOCUB6oc55Owg- | 27,925 | Save model checkpoint error when multi-gpu training | {
"login": "Cospui",
"id": 36847795,
"node_id": "MDQ6VXNlcjM2ODQ3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/36847795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cospui",
"html_url": "https://github.com/Cospui",
"followers_url": "https://api.github.com/users/Cospui/followers",
"following_url": "https://api.github.com/users/Cospui/following{/other_user}",
"gists_url": "https://api.github.com/users/Cospui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cospui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cospui/subscriptions",
"organizations_url": "https://api.github.com/users/Cospui/orgs",
"repos_url": "https://api.github.com/users/Cospui/repos",
"events_url": "https://api.github.com/users/Cospui/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cospui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api... | null | 33 | 2023-12-09T16:18:07 | 2024-01-21T00:47:32 | 2023-12-13T17:17:32 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.2.0-1017-azure-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@muellerzr and @pacman100 I found that when launching the example trainer code on multiple nodes, the code raises a FileNotFoundError when saving a checkpoint. After debugging, I think the reason is in `trainer.py` L2382:
```
if staging_output_dir != output_dir:
os.rename(staging_output_dir, output_dir)
```
When one process renames the folder, the other processes encounter the FileNotFoundError. Maybe the code can be modified like this to avoid the error:
```
if self.args.should_save and staging_output_dir != output_dir:
os.rename(staging_output_dir, output_dir)
```
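The `should_save` guard suggested above helps; for reference, a hedged standalone sketch (illustrative names, not the actual Trainer code) of a rename that also tolerates another rank winning the race:

```python
import os


def rename_checkpoint_once(staging_dir: str, final_dir: str) -> None:
    """Promote a staging checkpoint dir, tolerating a concurrent winner.

    If another process already renamed the staging dir, the resulting
    OSError/FileNotFoundError is swallowed as long as the final dir exists.
    """
    if staging_dir == final_dir:
        return
    try:
        os.rename(staging_dir, final_dir)
    except OSError:
        # Another rank may have renamed it first; only re-raise when the
        # final directory is still missing (a genuine failure).
        if not os.path.isdir(final_dir):
            raise
```

Calling this from every rank is then safe: the first rename wins and the rest become no-ops.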
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the MAE training code from the example folder.
### Expected behavior
Checkpoint saving should complete without a FileNotFoundError.
"url": "https://api.github.com/repos/huggingface/transformers/issues/27925/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27925/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27924/comments | https://api.github.com/repos/huggingface/transformers/issues/27924/events | https://github.com/huggingface/transformers/pull/27924 | 2,033,894,083 | PR_kwDOCUB6oc5hltZg | 27,924 | Adding FA2 support for MusicGen | {
"login": "staghado",
"id": 84044788,
"node_id": "MDQ6VXNlcjg0MDQ0Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/84044788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/staghado",
"html_url": "https://github.com/staghado",
"followers_url": "https://api.github.com/users/staghado/followers",
"following_url": "https://api.github.com/users/staghado/following{/other_user}",
"gists_url": "https://api.github.com/users/staghado/gists{/gist_id}",
"starred_url": "https://api.github.com/users/staghado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/staghado/subscriptions",
"organizations_url": "https://api.github.com/users/staghado/orgs",
"repos_url": "https://api.github.com/users/staghado/repos",
"events_url": "https://api.github.com/users/staghado/events{/privacy}",
"received_events_url": "https://api.github.com/users/staghado/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-12-09T15:29:24 | 2024-01-25T20:23:06 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds Flash Attention 2 support for the MusicGen model. It is based on the Bart implementation and is a WIP for now.
I could not test the model because FA2 is not yet supported on T4 GPUs.
Fixes #27552
@sanchit-gandhi @ylacombe | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27924/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27924",
"html_url": "https://github.com/huggingface/transformers/pull/27924",
"diff_url": "https://github.com/huggingface/transformers/pull/27924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27924.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27926/comments | https://api.github.com/repos/huggingface/transformers/issues/27926/events | https://github.com/huggingface/transformers/issues/27926 | 2,033,979,191 | I_kwDOCUB6oc55PA83 | 27,926 | Can't get add_generation_prompt to work correctly in apply_chat_template | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-09T15:09:44 | 2023-12-10T03:23:20 | 2023-12-10T03:23:19 | CONTRIBUTOR | null | I'm having trouble getting the `add_generation_prompt` feature working with `tokenizer.apply_chat_template`. I'm working with stablelm-zephyr-3b right now. I raised an issue on their HF model page, but I don't think the problem is with their chat template. Their chat template looks correct.
https://huggingface.co/stabilityai/stablelm-zephyr-3b/discussions/9
Discussion reproduced here so you don't have to click through:
Not able to get `tokenizer.apply_chat_template` to append the generation prompt for stablelm-zephyr-3b
```python
print(tokenizer.chat_template)
"{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
chat = [{'role': 'system', 'content': 'You are an excellent C++ programmer'}, {'role': 'user', 'content': 'Write a program to compute pairwise distances between atoms in a PDB file'}]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
'<|system|>\nYou are an excellent C++ programmer<|endoftext|>\n<|user|>\nWrite a program to compute pairwise distances between atoms in a PDB file<|endoftext|>\n'
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
'<|system|>\nYou are an excellent C++ programmer<|endoftext|>\n<|user|>\nWrite a program to compute pairwise distances between atoms in a PDB file<|endoftext|>\n'
```
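For reference, a minimal pure-Python rendering of what that Jinja template should produce (an illustrative sketch of the expected output, not the tokenizer's actual Jinja evaluation):

```python
def render_zephyr_chat(messages, eos_token="<|endoftext|>", add_generation_prompt=False):
    """Pure-Python equivalent of the chat template quoted above."""
    parts = []
    for i, message in enumerate(messages):
        # Each turn: <|role|>\n + content + eos_token + newline.
        parts.append(f"<|{message['role']}|>\n{message['content']}{eos_token}\n")
        # Only after the final turn, optionally open an assistant turn.
        if i == len(messages) - 1 and add_generation_prompt:
            parts.append("<|assistant|>\n")
    return "".join(parts)


chat = [
    {"role": "system", "content": "You are an excellent C++ programmer"},
    {"role": "user", "content": "Write a program to compute pairwise distances"},
]
prompt = render_zephyr_chat(chat, add_generation_prompt=True)
```

With `add_generation_prompt=True` the output should end in `<|assistant|>`; if the real tokenizer's output does not, possible culprits include an outdated `transformers` version or a stale cached tokenizer whose template ignores the flag.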
Could this be an issue with the tokenizer module? The chat template looks right. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27926/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27923/comments | https://api.github.com/repos/huggingface/transformers/issues/27923/events | https://github.com/huggingface/transformers/issues/27923 | 2,033,804,530 | I_kwDOCUB6oc55OWTy | 27,923 | SafetensorError: Error while deserializing header: HeaderTooLarge | {
"login": "KyrieCui",
"id": 37808472,
"node_id": "MDQ6VXNlcjM3ODA4NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/37808472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyrieCui",
"html_url": "https://github.com/KyrieCui",
"followers_url": "https://api.github.com/users/KyrieCui/followers",
"following_url": "https://api.github.com/users/KyrieCui/following{/other_user}",
"gists_url": "https://api.github.com/users/KyrieCui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyrieCui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyrieCui/subscriptions",
"organizations_url": "https://api.github.com/users/KyrieCui/orgs",
"repos_url": "https://api.github.com/users/KyrieCui/repos",
"events_url": "https://api.github.com/users/KyrieCui/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyrieCui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-09T11:12:10 | 2023-12-12T14:25:16 | 2023-12-12T14:25:16 | NONE | null | ### System Info
transformers version: 4.35.0
Platform: Linux-4.18.0-477.27.1.el8_8.x86_64.x86_64-x86_64-with-glibc2.28
Python version: 3.9.16
Huggingface_hub version:0.16.4
Accelerate version: 0.21.0
Safetensors version: 0.3.1
PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@ArthurZucker @SunMarc @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
base_model = '/llm/llama2-2-70b-chat-hf'
model = AutoModelForCausalLM.from_pretrained(base_model, load_in_8bit=True, device_map={"": 0}, use_safetensors=True)

in load_state_dict(checkpoint_file)
    462     """
    463     Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.
    464     """
    465     if checkpoint_file.endswith(".safetensors") and is_safetensors_available():
--> 466         with safe_open(checkpoint_file, framework="pt") as f:
    467             metadata = f.metadata()

SafetensorError: Error while deserializing header: HeaderTooLarge
```
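`HeaderTooLarge` usually means the first bytes of the file are not a safetensors header at all, for example a Git LFS pointer file that was never pulled, or a pickle checkpoint that was merely renamed. A hedged diagnostic sketch based on the safetensors layout (an 8-byte little-endian length prefix followed by a JSON header); the helper name and threshold are illustrative:

```python
import json
import struct


def inspect_safetensors_header(path, max_header_bytes=100_000_000):
    """Parse the length-prefixed JSON header of a .safetensors file.

    Layout: 8-byte little-endian unsigned length N, then N bytes of JSON.
    An implausibly large N means the file is not really safetensors.
    """
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            raise ValueError("file too short to be a safetensors file")
        (n,) = struct.unpack("<Q", prefix)
        if n > max_header_bytes:
            # Show the first bytes so a text prefix (LFS pointer) is obvious.
            sample = prefix + f.read(56)
            raise ValueError(f"implausible header length {n}; file starts with {sample!r}")
        return json.loads(f.read(n))
```

Running this on each shard would show whether the checkpoint files on disk are valid before blaming the loader.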
### Expected behavior
Expected to load the model successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27923/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27922/comments | https://api.github.com/repos/huggingface/transformers/issues/27922/events | https://github.com/huggingface/transformers/issues/27922 | 2,033,681,878 | I_kwDOCUB6oc55N4XW | 27,922 | add system prompt option in .apply_chat_template() | {
"login": "ONE-THING-9",
"id": 123763769,
"node_id": "U_kgDOB2B8OQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123763769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ONE-THING-9",
"html_url": "https://github.com/ONE-THING-9",
"followers_url": "https://api.github.com/users/ONE-THING-9/followers",
"following_url": "https://api.github.com/users/ONE-THING-9/following{/other_user}",
"gists_url": "https://api.github.com/users/ONE-THING-9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ONE-THING-9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ONE-THING-9/subscriptions",
"organizations_url": "https://api.github.com/users/ONE-THING-9/orgs",
"repos_url": "https://api.github.com/users/ONE-THING-9/repos",
"events_url": "https://api.github.com/users/ONE-THING-9/events{/privacy}",
"received_events_url": "https://api.github.com/users/ONE-THING-9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-12-09T07:11:25 | 2024-01-08T15:57:39 | 2023-12-24T06:12:29 | NONE | null | ### Feature request
Currently, I cannot find the option of adding a system prompt while doing tokenizer.apply_chat_template().
### Motivation
Because of this I have to avoid using apply_chat_template
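For what it's worth, many chat templates already honor a leading `{'role': 'system', ...}` entry in the messages list passed to `apply_chat_template`, so no extra keyword is needed. A small illustrative helper (the name is hypothetical) for injecting or overriding one:

```python
def with_system_prompt(messages, system_prompt):
    """Return a copy of `messages` that starts with the given system turn.

    Replaces an existing leading system message, otherwise prepends one;
    the result is what you would pass to tokenizer.apply_chat_template.
    """
    has_system = bool(messages) and messages[0].get("role") == "system"
    rest = list(messages[1:] if has_system else messages)
    return [{"role": "system", "content": system_prompt}] + rest
```

Whether the system turn is actually rendered still depends on the model's own chat template.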
### Your contribution
we can add this in {'role':'system'..........} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27922/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27921/comments | https://api.github.com/repos/huggingface/transformers/issues/27921/events | https://github.com/huggingface/transformers/pull/27921 | 2,033,645,816 | PR_kwDOCUB6oc5hk5mU | 27,921 | Add LayoutLM processor | {
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-09T05:58:18 | 2024-01-10T11:11:54 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27826
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27921/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27921",
"html_url": "https://github.com/huggingface/transformers/pull/27921",
"diff_url": "https://github.com/huggingface/transformers/pull/27921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27921.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27920/comments | https://api.github.com/repos/huggingface/transformers/issues/27920/events | https://github.com/huggingface/transformers/pull/27920 | 2,033,635,445 | PR_kwDOCUB6oc5hk3VO | 27,920 | fixed typos (issue 27919) | {
"login": "asusevski",
"id": 77211520,
"node_id": "MDQ6VXNlcjc3MjExNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/77211520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusevski",
"html_url": "https://github.com/asusevski",
"followers_url": "https://api.github.com/users/asusevski/followers",
"following_url": "https://api.github.com/users/asusevski/following{/other_user}",
"gists_url": "https://api.github.com/users/asusevski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusevski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusevski/subscriptions",
"organizations_url": "https://api.github.com/users/asusevski/orgs",
"repos_url": "https://api.github.com/users/asusevski/repos",
"events_url": "https://api.github.com/users/asusevski/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusevski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-09T05:36:47 | 2023-12-11T23:44:23 | 2023-12-11T23:44:23 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27919
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu and @MKhalusova and @merve
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27920/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27920",
"html_url": "https://github.com/huggingface/transformers/pull/27920",
"diff_url": "https://github.com/huggingface/transformers/pull/27920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27920.patch",
"merged_at": "2023-12-11T23:44:23"
} |
https://api.github.com/repos/huggingface/transformers/issues/27919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27919/comments | https://api.github.com/repos/huggingface/transformers/issues/27919/events | https://github.com/huggingface/transformers/issues/27919 | 2,033,624,429 | I_kwDOCUB6oc55NqVt | 27,919 | Typos with Knowledge Distillation for Computer Vision documentation | {
"login": "asusevski",
"id": 77211520,
"node_id": "MDQ6VXNlcjc3MjExNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/77211520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusevski",
"html_url": "https://github.com/asusevski",
"followers_url": "https://api.github.com/users/asusevski/followers",
"following_url": "https://api.github.com/users/asusevski/following{/other_user}",
"gists_url": "https://api.github.com/users/asusevski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusevski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusevski/subscriptions",
"organizations_url": "https://api.github.com/users/asusevski/orgs",
"repos_url": "https://api.github.com/users/asusevski/repos",
"events_url": "https://api.github.com/users/asusevski/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusevski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-09T05:10:55 | 2023-12-11T23:44:24 | 2023-12-11T23:44:24 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@stevhliu and @MKhalusova and @merve
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Issue 1**: ```NameError: name 'teacher_extractor' is not defined```
```
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
student_model=student_model,
teacher_model=teacher_model,
training_args=training_args,
train_dataset=processed_datasets["train"],
eval_dataset=processed_datasets["validation"],
data_collator=data_collator,
tokenizer=teacher_extractor,
compute_metrics=compute_metrics,
temperature=5,
lambda_param=0.5
)
```
**Issue 2**: Trainer doesn't initialize
```
class ImageDistilTrainer(Trainer):
def __init__(self, *args, teacher_model=None, **kwargs):
super().__init__(*args, **kwargs)
self.teacher = teacher_model
self.student = student_model
self.loss_function = nn.KLDivLoss(reduction="batchmean")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.teacher.to(device)
self.teacher.eval()
self.temperature = temperature
self.lambda_param = lambda_param
```
### Expected behavior
**Issue 1**: ```teacher_extractor``` should be ```teacher_processor```
**Issue 2**: ```ImageDistilTrainer``` should be:
```
class ImageDistilTrainer(Trainer):
def __init__(self ,teacher_model=None, student_model=None, temperature=None, lambda_param=None, *args, **kwargs):
super().__init__(model=student_model, *args, **kwargs)
self.teacher = teacher_model
self.student = student_model
self.loss_function = nn.KLDivLoss(reduction="batchmean")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.teacher.to(device)
self.teacher.eval()
self.temperature = temperature
self.lambda_param = lambda_param
```
Will raise PR for both fixes! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27919/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27918/comments | https://api.github.com/repos/huggingface/transformers/issues/27918/events | https://github.com/huggingface/transformers/pull/27918 | 2,033,540,619 | PR_kwDOCUB6oc5hkjQ5 | 27,918 | Fix typo | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-09T02:10:13 | 2023-12-09T10:59:30 | 2023-12-09T10:59:24 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27918/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27918",
"html_url": "https://github.com/huggingface/transformers/pull/27918",
"diff_url": "https://github.com/huggingface/transformers/pull/27918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27918.patch",
"merged_at": "2023-12-09T10:59:24"
} |
https://api.github.com/repos/huggingface/transformers/issues/27917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27917/comments | https://api.github.com/repos/huggingface/transformers/issues/27917/events | https://github.com/huggingface/transformers/issues/27917 | 2,033,530,579 | I_kwDOCUB6oc55NTbT | 27,917 | LLava not working with accelerate dispatch: "Expected all tensors to be on the same device" | {
"login": "py4",
"id": 747819,
"node_id": "MDQ6VXNlcjc0NzgxOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/747819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/py4",
"html_url": "https://github.com/py4",
"followers_url": "https://api.github.com/users/py4/followers",
"following_url": "https://api.github.com/users/py4/following{/other_user}",
"gists_url": "https://api.github.com/users/py4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/py4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/py4/subscriptions",
"organizations_url": "https://api.github.com/users/py4/orgs",
"repos_url": "https://api.github.com/users/py4/repos",
"events_url": "https://api.github.com/users/py4/events{/privacy}",
"received_events_url": "https://api.github.com/users/py4/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-09T01:48:18 | 2023-12-15T14:05:21 | 2023-12-15T14:05:21 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- PyTorch version (GPU?): 2.1.1+cu121 (True)
### Who can help?
@pacman100 @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "llava-hf/llava-1.5-7b-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map='auto'
)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to('cuda', torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Expected behavior
It should produce the output but I get the following. I believe something [similar to this ](https://github.com/huggingface/transformers/issues/24410#issuecomment-1603133017) is needed to fix
```return func(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/generation/utils.py", line 1718, in generate
return self.greedy_search(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/generation/utils.py", line 2579, in greedy_search
outputs = self(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llava/modeling_llava.py", line 433, in forward
outputs = self.language_model(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 1174, in forward
outputs = self.model(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 1061, in forward
layer_outputs = decoder_layer(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 789, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/cache_utils.py", line 127, in update
self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27917/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27916/comments | https://api.github.com/repos/huggingface/transformers/issues/27916/events | https://github.com/huggingface/transformers/issues/27916 | 2,033,389,088 | I_kwDOCUB6oc55Mw4g | 27,916 | Question about the output of the decision transformer | {
"login": "Pulsar110",
"id": 125087940,
"node_id": "U_kgDOB3SwxA",
"avatar_url": "https://avatars.githubusercontent.com/u/125087940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pulsar110",
"html_url": "https://github.com/Pulsar110",
"followers_url": "https://api.github.com/users/Pulsar110/followers",
"following_url": "https://api.github.com/users/Pulsar110/following{/other_user}",
"gists_url": "https://api.github.com/users/Pulsar110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pulsar110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pulsar110/subscriptions",
"organizations_url": "https://api.github.com/users/Pulsar110/orgs",
"repos_url": "https://api.github.com/users/Pulsar110/repos",
"events_url": "https://api.github.com/users/Pulsar110/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pulsar110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-08T22:32:44 | 2023-12-21T09:43:04 | 2023-12-21T09:43:03 | NONE | null | From the code in here: https://github.com/huggingface/transformers/blob/v4.35.2/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L920-L927
```
# reshape x so that the second dimension corresponds to the original
# returns (0), states (1), or actions (2); i.e. x[:,1,t] is the token for s_t
x = x.reshape(batch_size, seq_length, 3, self.hidden_size).permute(0, 2, 1, 3)
# get predictions
return_preds = self.predict_return(x[:, 2]) # predict next return given state and action
state_preds = self.predict_state(x[:, 2]) # predict next state given state and action
action_preds = self.predict_action(x[:, 1]) # predict next action given state
````
I'm not sure I understand why ` self.predict_return(x[:, 2])` or `self.predict_state(x[:, 2])` is predicting the return/next state given the state and action. From the comment on the top, `x[:, 2]` is only the action? Am I missing something?
And if this code is correct, what is the use of `x[:, 0]`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27916/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27915/comments | https://api.github.com/repos/huggingface/transformers/issues/27915/events | https://github.com/huggingface/transformers/issues/27915 | 2,033,135,988 | I_kwDOCUB6oc55LzF0 | 27,915 | dMoE support | {
"login": "AlpinDale",
"id": 52078762,
"node_id": "MDQ6VXNlcjUyMDc4NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/52078762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlpinDale",
"html_url": "https://github.com/AlpinDale",
"followers_url": "https://api.github.com/users/AlpinDale/followers",
"following_url": "https://api.github.com/users/AlpinDale/following{/other_user}",
"gists_url": "https://api.github.com/users/AlpinDale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlpinDale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlpinDale/subscriptions",
"organizations_url": "https://api.github.com/users/AlpinDale/orgs",
"repos_url": "https://api.github.com/users/AlpinDale/repos",
"events_url": "https://api.github.com/users/AlpinDale/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlpinDale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 2 | 2023-12-08T18:35:58 | 2023-12-21T07:30:18 | null | CONTRIBUTOR | null | ### Feature request
MistralAI recently [released their new model](https://twitter.com/MistralAI/status/1733150512395038967), a Mixture of Experts based on [megablocks](https://github.com/stanford-futuredata/megablocks), a type of dropless Mixture of Experts.
### Motivation
It's very likely that the future of open source LLMs will be MoEs. Having it in HF transformers would allow us to use the built-in trainer, as it's unwieldy to use Megatron-LM for the average user who's only ever done QLoRA.
### Your contribution
No clue for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27915/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27914/comments | https://api.github.com/repos/huggingface/transformers/issues/27914/events | https://github.com/huggingface/transformers/pull/27914 | 2,032,919,310 | PR_kwDOCUB6oc5hibuH | 27,914 | Fix: [SeamlessM4T - S2TT] Bug in batch loading of audio in torch.Tensor format in the SeamlessM4TFeatureExtractor class | {
"login": "nicholasneo78",
"id": 45549785,
"node_id": "MDQ6VXNlcjQ1NTQ5Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/45549785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicholasneo78",
"html_url": "https://github.com/nicholasneo78",
"followers_url": "https://api.github.com/users/nicholasneo78/followers",
"following_url": "https://api.github.com/users/nicholasneo78/following{/other_user}",
"gists_url": "https://api.github.com/users/nicholasneo78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicholasneo78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicholasneo78/subscriptions",
"organizations_url": "https://api.github.com/users/nicholasneo78/orgs",
"repos_url": "https://api.github.com/users/nicholasneo78/repos",
"events_url": "https://api.github.com/users/nicholasneo78/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicholasneo78/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-08T15:56:25 | 2023-12-22T10:47:31 | 2023-12-22T10:47:31 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Based on the documentation for the [SeamlessM4TProcessor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/seamless_m4t/processing_seamless_m4t.py#L22) class, the class is supposed to take in either `List[np.ndarray]` or `List[torch.Tensor]` for batch decoding when `processor(audios=...)` is called. It would then return a batch of transcriptions (S2TT task of SeamlessM4T). However, when `List[torch.Tensor]` is passed into the `audios` arg, only one translated transcript is being returned even though a batch of audio is passed in. After adding the check for `torch.Tensor` in the `SeamlessM4TFeatureExtractor` class, the translated batch transcript returned as expected.
Below is a code snippet that I use to test the issue:
```python
from transformers import SeamlessM4Tv2Model, SeamlessM4TProcessor
import torch
from datasets import load_dataset
processor = SeamlessM4TProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large", use_safetensors=True)
dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
# numpy array as audio_inputs
audio_sample = next(iter(dataset))["audio"]
print(type(audio_sample["array"])) # <class 'numpy.ndarray'>
# get a list of two numpy arrays to simulate batch size=2 when loading the audio arrays
audio_sample_batch = [audio_sample["array"], audio_sample["array"]]
audio_inputs = processor(audios=audio_sample_batch, return_tensors="pt", sampling_rate=16000)
output_tokens = model.generate(**audio_inputs, tgt_lang="eng", generate_speech=False)
translated_text_from_audio = processor.batch_decode(output_tokens[0].tolist(), skip_special_tokens=True)
print(f"Translated text from audio (numpy array): {translated_text_from_audio}\n")
# >>> Translated text from audio (numpy array): ['The first is the fact that the sun is shining brightly on the moon.', 'The first is the fact that the sun is shining brightly on the moon.']
# torch tensors as audio_inputs
torch_tensor_audio_sample = torch.from_numpy(audio_sample["array"])
print(type(torch_tensor_audio_sample)) # <class 'torch.Tensor'>
# get a list of two torch tensors to simulate batch size=2 when loading the audio arrays
torch_tensor_audio_sample_batch = [torch_tensor_audio_sample,torch_tensor_audio_sample]
audio_inputs = processor(audios=torch_tensor_audio_sample_batch, return_tensors="pt", sampling_rate=16000)
output_tokens = model.generate(**audio_inputs, tgt_lang="eng", generate_speech=False)
translated_text_from_audio = processor.batch_decode(output_tokens[0].tolist(), skip_special_tokens=True)
print(f"Translated text from audio (torch tensors): {translated_text_from_audio}")
# >>> Translated text from audio (torch tensors): ['The first is the fact that the sun is shining brightly on the moon.']
# expects two translated sentences just like the numpy array inputs but only one sentence is translated
```
Environment:
```shell
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.1.11-76060111-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
## Before submitting
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
cc: @sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27914/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27914/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27914",
"html_url": "https://github.com/huggingface/transformers/pull/27914",
"diff_url": "https://github.com/huggingface/transformers/pull/27914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27914.patch",
"merged_at": "2023-12-22T10:47:31"
} |
https://api.github.com/repos/huggingface/transformers/issues/27913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27913/comments | https://api.github.com/repos/huggingface/transformers/issues/27913/events | https://github.com/huggingface/transformers/pull/27913 | 2,032,904,981 | PR_kwDOCUB6oc5hiYby | 27,913 | Fixing Value error question_answering.py | {
"login": "khyatikhandelwal",
"id": 65815098,
"node_id": "MDQ6VXNlcjY1ODE1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/65815098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khyatikhandelwal",
"html_url": "https://github.com/khyatikhandelwal",
"followers_url": "https://api.github.com/users/khyatikhandelwal/followers",
"following_url": "https://api.github.com/users/khyatikhandelwal/following{/other_user}",
"gists_url": "https://api.github.com/users/khyatikhandelwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khyatikhandelwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khyatikhandelwal/subscriptions",
"organizations_url": "https://api.github.com/users/khyatikhandelwal/orgs",
"repos_url": "https://api.github.com/users/khyatikhandelwal/repos",
"events_url": "https://api.github.com/users/khyatikhandelwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/khyatikhandelwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-08T15:51:43 | 2024-01-18T08:03:49 | 2024-01-18T08:03:49 | NONE | null | On running this pipeline, the value error is always raised even if a dict/SquadExample is passed as there was no 'else' condition. Now it will only be raised when input is not dict/SquadExample.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27913/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27913",
"html_url": "https://github.com/huggingface/transformers/pull/27913",
"diff_url": "https://github.com/huggingface/transformers/pull/27913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27913.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27912/comments | https://api.github.com/repos/huggingface/transformers/issues/27912/events | https://github.com/huggingface/transformers/pull/27912 | 2,032,860,518 | PR_kwDOCUB6oc5hiOtH | 27,912 | Skip `UnivNetModelTest::test_multi_gpu_data_parallel_forward` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-08T15:27:11 | 2023-12-11T08:17:39 | 2023-12-11T08:17:38 | COLLABORATOR | null | # What does this PR do?
`test_multi_gpu_data_parallel_forward` is known to fail, and it uses `nn.DataParallel`, which is not recommended by PyTorch.
Let's skip it for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27912/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27912",
"html_url": "https://github.com/huggingface/transformers/pull/27912",
"diff_url": "https://github.com/huggingface/transformers/pull/27912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27912.patch",
"merged_at": "2023-12-11T08:17:38"
} |
https://api.github.com/repos/huggingface/transformers/issues/27911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27911/comments | https://api.github.com/repos/huggingface/transformers/issues/27911/events | https://github.com/huggingface/transformers/pull/27911 | 2,032,822,044 | PR_kwDOCUB6oc5hiGTN | 27,911 | Fix M4T v2 integration tests | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-08T15:04:52 | 2023-12-11T09:51:20 | 2023-12-11T08:18:42 | COLLABORATOR | null | # What does this PR do?
Some M4T-v2 integration tests are [causing GPU OOM errors](https://github.com/huggingface/transformers/actions/runs/7054968060/job/19204890066). This happens when we load two models together. I thus shifted some integration tests to half precision, which should solve the issue.
cc @ydshieh @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27911/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27911",
"html_url": "https://github.com/huggingface/transformers/pull/27911",
"diff_url": "https://github.com/huggingface/transformers/pull/27911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27911.patch",
"merged_at": "2023-12-11T08:18:42"
} |
https://api.github.com/repos/huggingface/transformers/issues/27910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27910/comments | https://api.github.com/repos/huggingface/transformers/issues/27910/events | https://github.com/huggingface/transformers/pull/27910 | 2,032,739,953 | PR_kwDOCUB6oc5hh0DZ | 27,910 | Llama conversion script: adjustments for Llama Guard | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-08T14:18:11 | 2023-12-08T16:19:12 | 2023-12-08T15:02:50 | MEMBER | null | # What does this PR do?
Small adjustments to the Llama 2 conversion script so it works with the original Llama Guard weights.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27910/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27910",
"html_url": "https://github.com/huggingface/transformers/pull/27910",
"diff_url": "https://github.com/huggingface/transformers/pull/27910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27910.patch",
"merged_at": "2023-12-08T15:02:50"
} |
https://api.github.com/repos/huggingface/transformers/issues/27909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27909/comments | https://api.github.com/repos/huggingface/transformers/issues/27909/events | https://github.com/huggingface/transformers/pull/27909 | 2,032,734,928 | PR_kwDOCUB6oc5hhy59 | 27,909 | fix llava | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-08T14:16:05 | 2023-12-08T16:32:35 | 2023-12-08T16:32:34 | COLLABORATOR | null | # What does this PR do?
Fix the prepare inputs for generation after the cache PR.
"url": "https://api.github.com/repos/huggingface/transformers/issues/27909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27909/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27909",
"html_url": "https://github.com/huggingface/transformers/pull/27909",
"diff_url": "https://github.com/huggingface/transformers/pull/27909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27909.patch",
"merged_at": "2023-12-08T16:32:34"
} |
https://api.github.com/repos/huggingface/transformers/issues/27908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27908/comments | https://api.github.com/repos/huggingface/transformers/issues/27908/events | https://github.com/huggingface/transformers/issues/27908 | 2,032,726,259 | I_kwDOCUB6oc55KPDz | 27,908 | Mistral: CUDA error when generating text with a batch of inputs | {
"login": "plroit",
"id": 1734563,
"node_id": "MDQ6VXNlcjE3MzQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1734563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plroit",
"html_url": "https://github.com/plroit",
"followers_url": "https://api.github.com/users/plroit/followers",
"following_url": "https://api.github.com/users/plroit/following{/other_user}",
"gists_url": "https://api.github.com/users/plroit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plroit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plroit/subscriptions",
"organizations_url": "https://api.github.com/users/plroit/orgs",
"repos_url": "https://api.github.com/users/plroit/repos",
"events_url": "https://api.github.com/users/plroit/events{/privacy}",
"received_events_url": "https://api.github.com/users/plroit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-08T14:10:55 | 2024-01-16T08:03:46 | 2024-01-16T08:03:46 | NONE | null | ### System Info
I'm trying to decode a batch of outputs from a batch of inputs, with code that works correctly with any encoder-decoder model (e.g. T5). I get the following error when I'm using Mistral:
` CUDA error: device-side assert triggered`
stack trace:
```python
File ~/miniconda3/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py:84, in MistralRMSNorm.forward(self, hidden_states)
82 input_dtype = hidden_states.dtype
83 hidden_states = hidden_states.to(torch.float32)
---> 84 variance = hidden_states.pow(2).mean(-1, keepdim=True)
85 hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
86 return self.weight * hidden_states.to(input_dtype)
```
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: neither, single device
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Example script to recreate:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
# required if I want a padded batch (Mistral does not define a padding token)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
device = "cuda"
model_inputs = tokenizer(["Hi there how are you? What's your name?", "Hi, sup?"], return_tensors="pt", padding=True).to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=10)
```
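A device-side assert during generation is often an out-of-range index. A plausible cause here (an assumption, not confirmed from the report alone): adding a new `[PAD]` token grows the tokenizer's vocabulary but not the model's embedding matrix, so the new id falls outside the table. A minimal CPU-side sketch of that mismatch, with a plain list standing in for the embedding table:

```python
vocab_size = 32000                       # rows in the model's input embedding
embedding_table = [[0.0] * 4 for _ in range(vocab_size)]

pad_id = vocab_size                      # id assigned to the freshly added [PAD]
batch = [1, 2, pad_id]                   # a padded input sequence

try:
    rows = [embedding_table[i] for i in batch]
    out_of_range = False
except IndexError:                       # on CUDA this surfaces as a device-side assert
    out_of_range = True
```

The usual remedies are calling `model.resize_token_embeddings(len(tokenizer))` after adding the token, or reusing an existing token as padding (`tokenizer.pad_token = tokenizer.eos_token`) together with `tokenizer.padding_side = "left"` for decoder-only models.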
### Expected behavior
I should be able to use tokenizer.batch_decode on the outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27908/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27907/comments | https://api.github.com/repos/huggingface/transformers/issues/27907/events | https://github.com/huggingface/transformers/pull/27907 | 2,032,684,189 | PR_kwDOCUB6oc5hhnqy | 27,907 | Generate: SinkCache can handle iterative prompts | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-12-08T13:45:14 | 2023-12-08T20:02:33 | 2023-12-08T20:02:20 | MEMBER | null | # What does this PR do?
Fixes the case where `SinkCache` is used in a chat bot, receiving new prompts after giving an answer. Fix developed with @tomaarsen
Here's an example of a script that works after this PR:
```py
from transformers import AutoTokenizer, SinkCache, AutoModelForCausalLM, TextStreamer
import torch
from datasets import load_dataset
# Loading the model & tokenizer
model_id = "HuggingFaceH4/zephyr-7b-beta"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading the prompts to simulate user interactions
prompt_dataset = load_dataset("HuggingFaceH4/mt_bench_prompts", split="train")
prompts = [prompt for prompts in prompt_dataset["prompt"] for prompt in prompts]
# Prepare generation settings
cache = SinkCache(window_length=1024, num_sink_tokens=4)
streamer = TextStreamer(tokenizer)
input_ids = torch.tensor([], device=model.device, dtype=torch.int)
for prompt in prompts:
# Tokenize the prompt with the correct chat template
chat = [{"role": "user", "content": prompt}]
input_ids = torch.cat((input_ids, tokenizer.apply_chat_template(chat, return_tensors="pt", add_generation_prompt=True).to(model.device)), dim=1)
# input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
# Perform the generation
gen_out = model.generate(input_ids, do_sample=False, max_new_tokens=100, past_key_values=cache, use_cache=True, streamer=streamer)
# input_ids = torch.cat((input_ids, gen_out), dim=1)
input_ids = gen_out
# If desired, decode the output from this prompt
decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27907/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27907",
"html_url": "https://github.com/huggingface/transformers/pull/27907",
"diff_url": "https://github.com/huggingface/transformers/pull/27907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27907.patch",
"merged_at": "2023-12-08T20:02:20"
} |
https://api.github.com/repos/huggingface/transformers/issues/27906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27906/comments | https://api.github.com/repos/huggingface/transformers/issues/27906/events | https://github.com/huggingface/transformers/pull/27906 | 2,032,423,360 | PR_kwDOCUB6oc5hgvQh | 27,906 | mark `test_initialization` as flaky in 2 model tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-12-08T10:49:06 | 2023-12-08T13:54:33 | 2023-12-08T13:54:32 | COLLABORATOR | null | # What does this PR do?
`torch.nn.init.trunc_normal_` is flaky and sometimes produces large values even with `mean=0.0` and `std=1e-10`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27906/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27906",
"html_url": "https://github.com/huggingface/transformers/pull/27906",
"diff_url": "https://github.com/huggingface/transformers/pull/27906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27906.patch",
"merged_at": "2023-12-08T13:54:32"
} |
https://api.github.com/repos/huggingface/transformers/issues/27905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27905/comments | https://api.github.com/repos/huggingface/transformers/issues/27905/events | https://github.com/huggingface/transformers/pull/27905 | 2,032,418,365 | PR_kwDOCUB6oc5hguKt | 27,905 | [Seamless] Fix links in docs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-12-08T10:45:36 | 2023-12-14T15:14:17 | 2023-12-14T15:14:13 | CONTRIBUTOR | null | # What does this PR do?
Relative links were broken, updated to absolute URL ones. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27905/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27905",
"html_url": "https://github.com/huggingface/transformers/pull/27905",
"diff_url": "https://github.com/huggingface/transformers/pull/27905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27905.patch",
"merged_at": "2023-12-14T15:14:13"
} |
https://api.github.com/repos/huggingface/transformers/issues/27904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27904/comments | https://api.github.com/repos/huggingface/transformers/issues/27904/events | https://github.com/huggingface/transformers/issues/27904 | 2,032,397,437 | I_kwDOCUB6oc55I-x9 | 27,904 | ERROR: Could not build wheels for safetensors, tokenizers, which is required to install pyproject.toml-based projects | {
"login": "zhaosheng-thu",
"id": 144892591,
"node_id": "U_kgDOCKLirw",
"avatar_url": "https://avatars.githubusercontent.com/u/144892591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaosheng-thu",
"html_url": "https://github.com/zhaosheng-thu",
"followers_url": "https://api.github.com/users/zhaosheng-thu/followers",
"following_url": "https://api.github.com/users/zhaosheng-thu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaosheng-thu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaosheng-thu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaosheng-thu/subscriptions",
"organizations_url": "https://api.github.com/users/zhaosheng-thu/orgs",
"repos_url": "https://api.github.com/users/zhaosheng-thu/repos",
"events_url": "https://api.github.com/users/zhaosheng-thu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaosheng-thu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-08T10:31:00 | 2024-01-17T08:14:15 | 2023-12-25T16:36:46 | NONE | null | ### System Info
`pip install transformers`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I am installing transformers-4.35.2, the problems happen.
Building wheels for collected packages: safetensors, tokenizers
Building wheel for safetensors (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for safetensors (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
Running `maturin pep517 build-wheel -i D:\pycharm312\venv\Scripts\python.exe --compatibility off`
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.12 at D:\pycharm312\venv\Scripts\python.exe
📡 Using build options features, bindings from pyproject.toml
Compiling proc-macro2 v1.0.70
Compiling target-lexicon v0.12.12
Compiling unicode-ident v1.0.12
Compiling autocfg v1.1.0
Compiling once_cell v1.18.0
Compiling windows_x86_64_msvc v0.48.5
Compiling syn v1.0.109
Compiling libc v0.2.150
Compiling parking_lot_core v0.9.9
Compiling serde v1.0.193
Compiling cfg-if v1.0.0
Compiling scopeguard v1.2.0
Compiling smallvec v1.11.2
Compiling serde_json v1.0.108
Compiling itoa v1.0.9
Compiling ryu v1.0.15
Compiling unindent v0.1.11
error: linker `link.exe` not found
|
= note: program not found
note: the msvc targets depend on the msvc linker but `link.exe` was not found
note: please ensure that Visual Studio 2017 or later, or Build Tools for Visual Studio were installed with the Visual C++ option.
note: VS Code is a different product, and is not sufficient.
error: could not compile `proc-macro2` (build script) due to previous error
warning: build failed, waiting for other jobs to finish...
error: could not compile `target-lexicon` (build script) due to previous error
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
error: could not compile `syn` (build script) due to previous error
error: could not compile `libc` (build script) due to previous error
error: could not compile `parking_lot_core` (build script) due to previous error
error: could not compile `serde` (build script) due to previous error
error: could not compile `serde_json` (build script) due to previous error
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit code: 101": `"cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics"
"--manifest-path" "C:\\Users\\86186\\AppData\\Local\\Temp\\pip-install-b9fju0sq\\safetensors_2de969c81fb9425fbfef7449546ec30d\\bindings\\python\\Cargo.toml" "--release" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', 'D:\\pycharm312\\venv\\Scripts\\python.exe', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for safetensors
Building wheel for tokenizers (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for tokenizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [43 lines of output]
Running `maturin pep517 build-wheel -i D:\pycharm312\venv\Scripts\python.exe --compatibility off`
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.12 at D:\pycharm312\venv\Scripts\python.exe
📡 Using build options features, bindings from pyproject.toml
Compiling autocfg v1.1.0
Compiling proc-macro2 v1.0.69
Compiling unicode-ident v1.0.12
Compiling windows_x86_64_msvc v0.48.5
Compiling cfg-if v1.0.0
Compiling syn v1.0.109
Compiling target-lexicon v0.12.12
Compiling scopeguard v1.2.0
Compiling libc v0.2.150
Compiling crossbeam-utils v0.8.16
Compiling cc v1.0.83
Compiling once_cell v1.18.0
Compiling memchr v2.6.4
Compiling fnv v1.0.7
Compiling windows_x86_64_msvc v0.42.2
Compiling strsim v0.10.0
error: linker `link.exe` not found
|
= note: program not found
note: the msvc targets depend on the msvc linker but `link.exe` was not found
note: please ensure that Visual Studio 2017 or later, or Build Tools for Visual Studio were installed with the Visual C++ option.
note: VS Code is a different product, and is not sufficient.
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
warning: build failed, waiting for other jobs to finish...
error: could not compile `proc-macro2` (build script) due to previous error
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
error: could not compile `crossbeam-utils` (build script) due to previous error
error: could not compile `target-lexicon` (build script) due to previous error
error: could not compile `libc` (build script) due to previous error
error: could not compile `syn` (build script) due to previous error
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit code: 101": `"cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics"
"--manifest-path" "C:\\Users\\86186\\AppData\\Local\\Temp\\pip-install-b9fju0sq\\tokenizers_1ea38977042a4a4194501ec96394a1a0\\bindings\\python\\Cargo.toml" "--release" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', 'D:\\pycharm312\\venv\\Scripts\\python.exe', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build safetensors tokenizers
ERROR: Could not build wheels for safetensors, tokenizers, which is required to install pyproject.toml-based projects
versions:
pip: 23.3.1
setuptools: 69.0.2
wheel: 0.38.4
They are all updated to the latest versions.
How can I solve this? Thanks!
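As a hedged sketch (not an official fix; whether a prebuilt wheel exists for your exact Python version is an assumption), the `link.exe` failure above usually has two exits: avoid the source build, or install the MSVC toolchain it needs.

```shell
# Hedged sketch of common remedies for "linker `link.exe` not found"
# in the log above; adapt to your setup, nothing here is an official fix.

# (a) Avoid the source build entirely: ask pip for prebuilt wheels only.
#     If no wheel exists for your Python version this errors out instead
#     of silently falling back to a cargo/maturin source build:
#
#       pip install --only-binary :all: safetensors tokenizers
#
# (b) If you must build from source, install "Build Tools for Visual
#     Studio" with the "Desktop development with C++" workload so the
#     Rust *-msvc target can find link.exe (as the rustc note says,
#     VS Code is not sufficient), or switch rustup to the GNU toolchain:
#
#       rustup default stable-x86_64-pc-windows-gnu

# Sanity check that this pip understands the --only-binary flag:
python -m pip install --help | grep -q -- "--only-binary" && echo "flag ok"
```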
### Expected behavior
how can i solve it? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27904/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27903/comments | https://api.github.com/repos/huggingface/transformers/issues/27903/events | https://github.com/huggingface/transformers/pull/27903 | 2,032,323,741 | PR_kwDOCUB6oc5hgZaW | 27,903 | Fix `notification_service.py` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-08T09:43:45 | 2023-12-08T13:55:04 | 2023-12-08T13:55:03 | COLLABORATOR | null | # What does this PR do?
Fix a tiny issue in #27881 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27903/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27903",
"html_url": "https://github.com/huggingface/transformers/pull/27903",
"diff_url": "https://github.com/huggingface/transformers/pull/27903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27903.patch",
"merged_at": "2023-12-08T13:55:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/27902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27902/comments | https://api.github.com/repos/huggingface/transformers/issues/27902/events | https://github.com/huggingface/transformers/issues/27902 | 2,032,276,433 | I_kwDOCUB6oc55IhPR | 27,902 | Trainer logging_first_step not evaluate on first step as it is documented | {
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/users/RmZeta2718/followers",
"following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}",
"gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions",
"organizations_url": "https://api.github.com/users/RmZeta2718/orgs",
"repos_url": "https://api.github.com/users/RmZeta2718/repos",
"events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}",
"received_events_url": "https://api.github.com/users/RmZeta2718/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-12-08T09:14:15 | 2024-01-08T09:51:19 | null | NONE | null | ### System Info
`transformers` version: 4.35.2
### Who can help?
trainer: @muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The documentation says that [logging_first_step](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.logging_first_step) will evaluate on the first `global_step`, but it only logs on the first step; it does not evaluate.
Related code: [link](https://github.com/huggingface/transformers/blob/633215ba58fe5114d8c8d32e415a04600e010701/src/transformers/trainer_callback.py#L435)
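For context, a plain-Python sketch of the check at the linked line: the flow callback only flips the logging flag on step 1, so evaluating there would mean also flipping the evaluation flag. The classes below are toy stand-ins, and `evaluate_first_step` is a hypothetical flag, not an existing `transformers` argument.

```python
# Toy stand-ins for TrainingArguments / TrainerControl; the real
# DefaultFlowCallback check only sets `should_log` on the first step.
# `evaluate_first_step` is hypothetical, named after this issue's proposal.

class Args:
    logging_first_step = True
    evaluate_first_step = True   # hypothetical flag

class Control:
    def __init__(self):
        self.should_log = False
        self.should_evaluate = False

def on_step_end(args, global_step, control):
    # Mirrors the spirit of DefaultFlowCallback.on_step_end for step 1.
    if global_step == 1 and args.logging_first_step:
        control.should_log = True
        if getattr(args, "evaluate_first_step", False):
            control.should_evaluate = True   # the behavior the docs describe
    return control

ctrl = on_step_end(Args(), global_step=1, control=Control())
print(ctrl.should_log, ctrl.should_evaluate)  # → True True
```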
### Expected behavior
Either fix the documentation (remove "evaluate") or add the evaluation behavior to `logging_first_step` (I would prefer the latter).
Or, if it's confusing for `logging_first_step` to evaluate, maybe we can add an `evaluate_first_step` argument. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27902/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27901/comments | https://api.github.com/repos/huggingface/transformers/issues/27901/events | https://github.com/huggingface/transformers/pull/27901 | 2,032,120,031 | PR_kwDOCUB6oc5hfsP8 | 27,901 | [Bugfix] non_attended_tokens index | {
"login": "okotaku",
"id": 24734142,
"node_id": "MDQ6VXNlcjI0NzM0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24734142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okotaku",
"html_url": "https://github.com/okotaku",
"followers_url": "https://api.github.com/users/okotaku/followers",
"following_url": "https://api.github.com/users/okotaku/following{/other_user}",
"gists_url": "https://api.github.com/users/okotaku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okotaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okotaku/subscriptions",
"organizations_url": "https://api.github.com/users/okotaku/orgs",
"repos_url": "https://api.github.com/users/okotaku/repos",
"events_url": "https://api.github.com/users/okotaku/events{/privacy}",
"received_events_url": "https://api.github.com/users/okotaku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-12-08T07:53:07 | 2024-01-14T02:33:05 | 2024-01-14T02:33:05 | NONE | null | # What does this PR do?
```
batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)
# Get the target length
target_seqlen = first_layer_past_key_value.shape[-1] + 1
extended_attention_mask = torch.ones(
(attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),
dtype=attention_mask.dtype,
device=attention_mask.device,
)
# Zero-out the places where we don't need to attend
extended_attention_mask[batch_index, non_attended_tokens] = 0
attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)
```
The shape of `extended_attention_mask` is `(attention_mask.shape[0], target_seqlen - attention_mask.shape[1])`, but that of `first_layer_past_key_value` is `(attention_mask.shape[0], target_seqlen)`.
This causes an index error for `non_attended_tokens`.
I added an index fix line.
```
non_attended_tokens = non_attended_tokens - attention_mask.shape[1]
```
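To make the off-by-`attention_mask.shape[1]` concrete, here is a toy, torch-free sketch (all shapes and values below are made up for illustration): indices found over the full `target_seqlen` columns lie in `[0, target_seqlen)`, while `extended_attention_mask` only has `target_seqlen - attention_mask.shape[1]` columns, so the indices must be re-based before indexing.

```python
# Toy illustration, no torch: re-base column indices found over the full
# sequence onto the narrower extended mask. All numbers are hypothetical.

def shift_non_attended(non_attended_tokens, attn_len):
    """Subtract the existing mask width, as in the proposed fix line."""
    return [t - attn_len for t in non_attended_tokens]

target_seqlen = 8
attn_len = 5                           # attention_mask.shape[1]
ext_width = target_seqlen - attn_len   # extended mask has columns 0..2

non_attended = [5, 7]                  # zero positions over the full seqlen
shifted = shift_non_attended(non_attended, attn_len)

assert all(0 <= i < ext_width for i in shifted)   # raw [5, 7] would not fit
print(shifted)  # → [0, 2]
```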
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27901/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27901",
"html_url": "https://github.com/huggingface/transformers/pull/27901",
"diff_url": "https://github.com/huggingface/transformers/pull/27901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27901.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27900/comments | https://api.github.com/repos/huggingface/transformers/issues/27900/events | https://github.com/huggingface/transformers/issues/27900 | 2,032,092,580 | I_kwDOCUB6oc55H0Wk | 27,900 | Weird Tokenization when Training New Tokenizer from Llama 2 Tokenizer using `train_new_from_iterator` | {
"login": "phoongkhangzhie",
"id": 25717121,
"node_id": "MDQ6VXNlcjI1NzE3MTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/25717121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phoongkhangzhie",
"html_url": "https://github.com/phoongkhangzhie",
"followers_url": "https://api.github.com/users/phoongkhangzhie/followers",
"following_url": "https://api.github.com/users/phoongkhangzhie/following{/other_user}",
"gists_url": "https://api.github.com/users/phoongkhangzhie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phoongkhangzhie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phoongkhangzhie/subscriptions",
"organizations_url": "https://api.github.com/users/phoongkhangzhie/orgs",
"repos_url": "https://api.github.com/users/phoongkhangzhie/repos",
"events_url": "https://api.github.com/users/phoongkhangzhie/events{/privacy}",
"received_events_url": "https://api.github.com/users/phoongkhangzhie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 19 | 2023-12-08T07:33:34 | 2024-01-18T11:31:56 | 2024-01-18T11:31:56 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import os
import argparse
from datasets import load_dataset
from transformers import (
AutoTokenizer
)
def python_generator():
# Load local files for code_search_net/python
# https://huggingface.co/datasets/code_search_net
dataset = load_dataset("code_search_net", "python")
dataset = dataset["train"]
for start_idx in range(0, len(dataset), 1000):
samples = dataset[start_idx: start_idx + 1000]
yield samples["whole_func_string"]
def main(args):
model_paths = [
"gpt2",
"meta-llama/Llama-2-70b-hf",
]
access_token = ""
for model_path in model_paths:
print(f"\n\n{model_path}")
save_dir = (
f"{model_path}-python-52K_vocab"
)
os.makedirs(os.path.join(os.getcwd(), "tokenizers"), exist_ok=True)
save_path = os.path.join(os.getcwd(), "tokenizers", save_dir)
old_tokenizer = AutoTokenizer.from_pretrained(
model_path,
token=access_token
)
assert old_tokenizer.is_fast
if os.path.exists(save_path):
new_tokenizer = AutoTokenizer.from_pretrained(save_path)
else:
new_tokenizer = old_tokenizer.train_new_from_iterator(
python_generator(),
vocab_size=52000
)
new_tokenizer.save_pretrained(save_path)
example_1 = '''
def add_numbers(a, b):
"""Add the two numbers `a` and `b`."""
return a + b
'''
print(f"\n{example_1}")
old_tokens = old_tokenizer.tokenize(example_1)
print(f"old: {old_tokens}")
new_tokens = new_tokenizer.tokenize(example_1)
print(f"new: {new_tokens}")
example_2 = """
class LinearLayer():
def __init__(self, input_size, output_size):
self.weight = torch.randn(input_size, output_size)
self.bias = torch.zeros(output_size)
def __call__(self, x):
return x @ self.weights + self.bias
"""
print(f"\n{example_2}")
old_tokens = old_tokenizer.tokenize(example_2)
print(f"old: {old_tokens}")
new_tokens = new_tokenizer.tokenize(example_2)
print(f"new: {new_tokens}")
```
### Expected behavior
The function `train_new_from_iterator` works as expected when training a new tokenizer from a gpt2 tokenizer as demonstrated in the [example](https://huggingface.co/learn/nlp-course/chapter6/2), but does not work for training a new tokenizer from a Llama-2 tokenizer.
With the code snippet above, training a tokenizer from gpt2 gives the output:
```
Example 1:
def add_numbers(a, b):
"""Add the two numbers `a` and `b`."""
return a + b
old: ['Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġadd', '_', 'n', 'umbers', '(', 'a', ',', 'Ġb', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ"""', 'Add', 'Ġthe', 'Ġtwo', 'Ġnumbers', 'Ġ`', 'a', '`', 'Ġand', 'Ġ`', 'b', '`', '."', '""', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ']
new: ['ĊĠĠĠĠĠĠĠ', 'Ġdef', 'Ġadd', '_', 'numbers', '(', 'a', ',', 'Ġb', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġ"""', 'Add', 'Ġthe', 'Ġtwo', 'Ġnumbers', 'Ġ`', 'a', '`', 'Ġand', 'Ġ`', 'b', '`."""', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb', 'ĊĠĠĠĠĠĠĠĠ']
Example 2:
class LinearLayer():
def __init__(self, input_size, output_size):
self.weight = torch.randn(input_size, output_size)
self.bias = torch.zeros(output_size)
def __call__(self, x):
return x @ self.weights + self.bias
old: ['Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġclass', 'ĠLinear', 'Layer', '():', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġ__', 'init', '__', '(', 'self', ',', 'Ġinput', '_', 'size', ',', 'Ġoutput', '_', 'size', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġself', '.', 'weight', 'Ġ=', 'Ġtorch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', 'Ġoutput', '_', 'size', ')', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġself', '.', 'b', 'ias', 'Ġ=', 'Ġtorch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', 'ĊĊ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġ__', 'call', '__', '(', 'self', ',', 'Ġx', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġreturn', 'Ġx', 'Ġ@', 'Ġself', '.', 'weights', 'Ġ+', 'Ġself', '.', 'b', 'ias', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ']
new: ['ĊĠĠĠĠĠĠĠ', 'Ġclass', 'ĠLinear', 'Layer', '():', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġdef', 'Ġ__', 'init', '__(', 'self', ',', 'Ġinput', '_', 'size', ',', 'Ġoutput', '_', 'size', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġself', '.', 'weight', 'Ġ=', 'Ġtorch', '.', 'randn', '(', 'input', '_', 'size', ',', 'Ġoutput', '_', 'size', ')', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġself', '.', 'bias', 'Ġ=', 'Ġtorch', '.', 'zeros', '(', 'output', '_', 'size', ')', 'ĊĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġdef', 'Ġ__', 'call', '__(', 'self', ',', 'Ġx', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġreturn', 'Ġx', 'Ġ@', 'Ġself', '.', 'weights', 'Ġ+', 'Ġself', '.', 'bias', 'ĊĠĠĠĠĠĠĠĠ']
```
However, training Llama-2's tokenizer gives:
```
Example 1:
def add_numbers(a, b):
"""Add the two numbers `a` and `b`."""
return a + b
old: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁"""', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '."', '""', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']
new: ['▁', '\n▁▁▁▁▁▁▁▁def▁', 'add_', 'number', 's(', 'a,▁b', '):\n▁▁▁▁▁▁▁▁▁▁▁▁"""', 'Add▁the▁', 'two▁', 'number', 's▁`', 'a', '`▁and▁`', 'b', '`', '."""', '\n▁▁▁▁▁▁▁▁▁▁▁▁return▁', 'a▁+▁', 'b', '\n▁▁▁▁▁▁▁▁']
Example 2:
class LinearLayer():
def __init__(self, input_size, output_size):
self.weight = torch.randn(input_size, output_size)
self.bias = torch.zeros(output_size)
def __call__(self, x):
return x @ self.weights + self.bias
old: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']
new: ['▁', '\n▁▁▁▁▁▁▁▁', 'class▁', 'Linear', 'Layer(', '):\n▁▁▁▁▁▁▁▁▁▁▁▁', 'def▁__init__(self,▁', 'input_', 'size,▁', 'output_', 'size', '):\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁self.', 'weight▁=▁', 'torch', '.r', 'and', 'n(', 'input_', 'size,▁', 'output_', 'size', ')\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁self.', 'bi', 'as▁=▁', 'torch.', 'zeros(', 'output_', 'size', ')\n\n▁▁▁▁▁▁▁▁▁▁▁▁', 'def▁__', 'call__', '(self,▁x', '):\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁return▁', 'x▁', '@▁', 'self.', 'weight', 's▁+▁', 'self.', 'bias', '\n▁▁▁▁▁▁▁▁']
```
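For contrast with the output above, a plain-Python sketch (illustrative only, not the real `tokenizers` Metaspace implementation) of where a SentencePiece-style pre-tokenizer puts the `▁` marker: every space becomes `▁` before any merges run, so the marker should land at the front of each word, and tokens like `'add_'` or `'s('` with no leading `▁` suggest merges crossing word boundaries.

```python
# Illustrative only: mimic a Metaspace-style pre-tokenizer, which replaces
# spaces with U+2581 "▁" so the marker sits at the *front* of each word.

MARKER = "\u2581"  # '▁'

def metaspace(text: str, marker: str = MARKER) -> str:
    """Every space becomes the marker, before any BPE merges happen."""
    return text.replace(" ", marker)

def words(pretokenized: str, marker: str = MARKER) -> list:
    """Split on the marker; non-empty pieces are the word bodies."""
    return [w for w in pretokenized.split(marker) if w]

print(metaspace(" def add_numbers"))          # → ▁def▁add_numbers
print(words(metaspace(" def add_numbers")))   # → ['def', 'add_numbers']
```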
The underscores `_` should be prepended at the front of new words, but it seems to be inserted at the back of words or in between words. In fact, it seems like the retrained tokenizer is worse than the original tokenizer on the new data. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27900/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27900/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27899/comments | https://api.github.com/repos/huggingface/transformers/issues/27899/events | https://github.com/huggingface/transformers/pull/27899 | 2,031,983,998 | PR_kwDOCUB6oc5hfOn0 | 27,899 | fix typo in image_processing_blip.py Wwhether -> Whether | {
"login": "zhc7",
"id": 53651354,
"node_id": "MDQ6VXNlcjUzNjUxMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/53651354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhc7",
"html_url": "https://github.com/zhc7",
"followers_url": "https://api.github.com/users/zhc7/followers",
"following_url": "https://api.github.com/users/zhc7/following{/other_user}",
"gists_url": "https://api.github.com/users/zhc7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhc7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhc7/subscriptions",
"organizations_url": "https://api.github.com/users/zhc7/orgs",
"repos_url": "https://api.github.com/users/zhc7/repos",
"events_url": "https://api.github.com/users/zhc7/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhc7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-12-08T06:05:10 | 2023-12-08T18:32:48 | 2023-12-08T18:32:48 | CONTRIBUTOR | null | # fix typo in image_processing_blip.py Wwhether -> Whether
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
vision models: @amyeroberts
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27899/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27899",
"html_url": "https://github.com/huggingface/transformers/pull/27899",
"diff_url": "https://github.com/huggingface/transformers/pull/27899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27899.patch",
"merged_at": "2023-12-08T18:32:48"
} |