url stringlengths 66 66 | repository_url stringclasses 1
value | labels_url stringlengths 80 80 | comments_url stringlengths 75 75 | events_url stringlengths 73 73 | html_url stringlengths 54 56 | id int64 2.03B 2.11B | node_id stringlengths 18 19 | number int64 27.9k 28.8k | title stringlengths 3 306 | user dict | labels list | state stringclasses 2
values | locked bool 1
class | assignee dict | assignees list | milestone null | comments int64 0 39 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4
values | active_lock_reason null | body stringlengths 19 42.4k ⌀ | reactions dict | timeline_url stringlengths 75 75 | performed_via_github_app null | state_reason stringclasses 3
values | draft bool 2
classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28708/comments | https://api.github.com/repos/huggingface/transformers/issues/28708/events | https://github.com/huggingface/transformers/pull/28708 | 2,100,722,330 | PR_kwDOCUB6oc5lFTx2 | 28,708 | Fixed nll with label_smoothing to just nll | {
"login": "nileshkokane01",
"id": 8201108,
"node_id": "MDQ6VXNlcjgyMDExMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nileshkokane01",
"html_url": "https://github.com/nileshkokane01",
"followers_url": "https://api.github.com/users/nileshkokane01/followers",
"following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}",
"gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions",
"organizations_url": "https://api.github.com/users/nileshkokane01/orgs",
"repos_url": "https://api.github.com/users/nileshkokane01/repos",
"events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}",
"received_events_url": "https://api.github.com/users/nileshkokane01/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-25T16:13:24 | 2024-01-26T04:47:26 | null | CONTRIBUTOR | null | # What does this PR do?
This PR fixes #28167 by setting `label_smoothing=0`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28167
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28708/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28708",
"html_url": "https://github.com/huggingface/transformers/pull/28708",
"diff_url": "https://github.com/huggingface/transformers/pull/28708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28708.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28707/comments | https://api.github.com/repos/huggingface/transformers/issues/28707/events | https://github.com/huggingface/transformers/issues/28707 | 2,100,555,786 | I_kwDOCUB6oc59M_AK | 28,707 | MixTral 8*7B GPU Memory usage keeps increasing during inference | {
"login": "oroojlooy",
"id": 20797260,
"node_id": "MDQ6VXNlcjIwNzk3MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/20797260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oroojlooy",
"html_url": "https://github.com/oroojlooy",
"followers_url": "https://api.github.com/users/oroojlooy/followers",
"following_url": "https://api.github.com/users/oroojlooy/following{/other_user}",
"gists_url": "https://api.github.com/users/oroojlooy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oroojlooy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oroojlooy/subscriptions",
"organizations_url": "https://api.github.com/users/oroojlooy/orgs",
"repos_url": "https://api.github.com/users/oroojlooy/repos",
"events_url": "https://api.github.com/users/oroojlooy/events{/privacy}",
"received_events_url": "https://api.github.com/users/oroojlooy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-25T14:48:36 | 2024-01-31T19:06:32 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.17-2136.323.8.2.el7uek.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.0
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
The machine includes 8×A100-40GB GPUs.
### Who can help?
@Narsil @SunMarc
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am creating an instance of the `MixTralModel` class and calling it in a loop with the prompts that I have.
```
import transformers
import torch
class MixTralModel:
def __init__(self, temperature=0.0, max_new_tokens=356, do_sample=False, top_k=50, top_p=0.7):
self.temperature = temperature
self.max_new_tokens = max_new_tokens
self.do_sample = do_sample
self.top_k = top_k
self.top_p = top_p
if do_sample and temperature == 0.0:
raise ValueError(
"`temperature` (=0.0) has to be a strictly positive float, otherwise your next token scores will be "
"invalid. If you're looking for greedy decoding strategies, set `do_sample=False`")
self.pipeline = transformers.pipeline(
"text-generation",
model="mistralai/Mixtral-8x7B-Instruct-v0.1",
device_map="auto",
# device="cuda:0",
# model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
model_kwargs={"torch_dtype": torch.float16},
)
def __call__(self, raw_messages: str) -> str:
"""
An example of message is:
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
"""
try:
messages = [{"role": "user", "content": raw_messages}]
prompt = self.pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = self.pipeline(prompt, max_new_tokens=self.max_new_tokens, do_sample=self.do_sample,
temperature=self.temperature, top_k=self.top_k, top_p=self.top_p)
return outputs[0]["generated_text"]
except Exception as e:
print(e)
if __name__ == "__main__":
model = MixTralModel(temperature=0.0, max_new_tokens=356, do_sample=False, top_k=50, top_p=0.7)
messages = "Explain what a Mixture of Experts is in less than 100 words."
out = model(messages)
print(out)
```
### Expected behavior
When I call the instance of the above class with my data, the GPU memory keeps increasing over time until I get a CUDA out-of-memory error.
It seems there is a memory leak, or perhaps it keeps the gradients (?) in memory.
The memory jumps by about 3 GB on each GPU each time. For example, below is the GPU memory usage before and after a jump:
```
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 18234MiB |
| 1 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB |
| 2 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB |
| 3 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB |
| 4 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB |
| 5 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB |
| 6 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 15430MiB |
| 7 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 414MiB |
+---------------------------------------------------------------------------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 22230MiB |
| 1 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB |
| 2 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24762MiB |
| 3 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB |
| 4 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB |
| 5 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB |
| 6 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 19426MiB |
| 7 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 414MiB |
+---------------------------------------------------------------------------------------+
```
Note that this does not happen on every call of the model; overall, the process gets killed after about 120 calls.
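A hedged mitigation sketch (the root cause is not confirmed here, and `transformers` pipelines normally already run generation without autograd): make sure the loop keeps only the decoded strings between iterations and periodically forces a collection so stale references cannot pin GPU tensors. The `run_batch` helper below is hypothetical, not part of the report; with `torch` available one would also call `torch.cuda.empty_cache()` where indicated.

```python
import gc

# Hypothetical helper (not part of the report above): loop over prompts,
# keep only plain strings, and drop any other references between calls.
def run_batch(model, prompts):
    results = []
    for p in prompts:
        out = model(p)       # MixTralModel.__call__ returns a decoded str
        results.append(out)  # store the string, never raw model outputs
        del out
    gc.collect()             # release any lingering references
    # torch.cuda.empty_cache()  # with torch: return cached blocks to the driver
    return results
```

If memory still grows with this pattern, the leak is more likely inside the generation stack (e.g. allocator fragmentation) than in Python-side references.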
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28707/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28706/comments | https://api.github.com/repos/huggingface/transformers/issues/28706/events | https://github.com/huggingface/transformers/pull/28706 | 2,100,275,754 | PR_kwDOCUB6oc5lDy-H | 28,706 | Add AutoFeatureExtractor support to Wav2Vec2ProcessorWithLM | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-25T12:19:36 | 2024-01-30T00:52:34 | null | COLLABORATOR | null | # What does this PR do?
Using an n-gram language model on top of Wav2Vec2-based models is an easy way to get a performance boost. At the moment, Wav2Vec2ProcessorWithLM is only compatible with Wav2Vec2FeatureExtractor.
W2V2-Bert could also benefit from this boost, but needs its feature extractor to be compatible with Wav2Vec2ProcessorWithLM as well.
The easiest way to do this is to use AutoFeatureExtractor instead of Wav2Vec2FeatureExtractor in the code, since the processor only changes the tokenizer behaviour.
cc @sanchit-gandhi @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28706/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28706",
"html_url": "https://github.com/huggingface/transformers/pull/28706",
"diff_url": "https://github.com/huggingface/transformers/pull/28706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28706.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28705/comments | https://api.github.com/repos/huggingface/transformers/issues/28705/events | https://github.com/huggingface/transformers/pull/28705 | 2,100,266,371 | PR_kwDOCUB6oc5lDw6c | 28,705 | [Docs] Add resources | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-25T12:13:57 | 2024-01-25T12:13:57 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds some more resources regarding various models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28705/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28705",
"html_url": "https://github.com/huggingface/transformers/pull/28705",
"diff_url": "https://github.com/huggingface/transformers/pull/28705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28705.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28704/comments | https://api.github.com/repos/huggingface/transformers/issues/28704/events | https://github.com/huggingface/transformers/issues/28704 | 2,100,171,431 | I_kwDOCUB6oc59LhKn | 28,704 | FalconForCausalLM does not support Flash Attention 2.0 yet | {
"login": "menouarazib",
"id": 99955425,
"node_id": "U_kgDOBfUy4Q",
"avatar_url": "https://avatars.githubusercontent.com/u/99955425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/menouarazib",
"html_url": "https://github.com/menouarazib",
"followers_url": "https://api.github.com/users/menouarazib/followers",
"following_url": "https://api.github.com/users/menouarazib/following{/other_user}",
"gists_url": "https://api.github.com/users/menouarazib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/menouarazib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/menouarazib/subscriptions",
"organizations_url": "https://api.github.com/users/menouarazib/orgs",
"repos_url": "https://api.github.com/users/menouarazib/repos",
"events_url": "https://api.github.com/users/menouarazib/events{/privacy}",
"received_events_url": "https://api.github.com/users/menouarazib/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-25T11:26:20 | 2024-01-25T14:12:32 | 2024-01-25T14:12:32 | NONE | null | I attempted to use Flash Attention with the Falcon-7B model, but encountered the following error: **ValueError: FalconForCausalLM does not support Flash Attention 2.0 yet.**
This error is raised in `transformers/modeling_utils.py`:
```
if not cls._supports_flash_attn_2:
raise ValueError(
f"{cls.__name__} does not support Flash Attention 2.0 yet. Please request to add support where"
f" the model is hosted, on its model hub page: https://huggingface.co/{config._name_or_path}/discussions/new"
" or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new"
)
```
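A minimal sketch of the gating logic quoted above (the class names here are illustrative, not the real `transformers` classes): support is declared as a class attribute, so a checkpoint's own remote modeling code loaded via `trust_remote_code=True` can fail this check even when the in-library class in a recent `transformers` release would pass it. That this is the cause in this particular case is an assumption worth verifying against the installed version.

```python
# Illustrative classes only; the real check lives in
# transformers/modeling_utils.py and reads `cls._supports_flash_attn_2`.
class PreTrainedSketch:
    _supports_flash_attn_2 = False

class RemoteFalconSketch(PreTrainedSketch):   # e.g. older remote code: no support declared
    pass

class LibraryFalconSketch(PreTrainedSketch):  # e.g. a class that declares support
    _supports_flash_attn_2 = True

def check_flash_attn_2(cls):
    # Mirrors the quoted gate: reject classes that do not declare support.
    if not cls._supports_flash_attn_2:
        raise ValueError(f"{cls.__name__} does not support Flash Attention 2.0 yet.")
    return True

print(check_flash_attn_2(LibraryFalconSketch))  # -> True
```

If the installed release's in-library Falcon implementation declares FA2 support, loading without `trust_remote_code=True` may avoid the error.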
I installed the Transformers library from the GitHub repository using the following command:
`pip install git+https://github.com/huggingface/transformers`
Here is the code I used:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
# Hugging Face Falcon-7B model ID
model_id = "tiiuae/falcon-7b"
# BitsAndBytesConfig for 4-bit integers
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
device_map="auto",
attn_implementation="flash_attention_2",
torch_dtype=torch.bfloat16,
quantization_config=bnb_config
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28704/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28703/comments | https://api.github.com/repos/huggingface/transformers/issues/28703/events | https://github.com/huggingface/transformers/pull/28703 | 2,100,153,707 | PR_kwDOCUB6oc5lDYOa | 28,703 | [DO NOT MERGE] Hf quantizer refactor | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-25T11:16:14 | 2024-01-30T00:27:22 | 2024-01-30T00:27:21 | CONTRIBUTOR | null | # What does this PR do?
Built on top of https://github.com/huggingface/transformers/pull/26610; this PR is just to check that I don't get any surprising diff, similar to https://github.com/poedator/transformers/pull/4 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28703/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28703",
"html_url": "https://github.com/huggingface/transformers/pull/28703",
"diff_url": "https://github.com/huggingface/transformers/pull/28703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28703.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28702/comments | https://api.github.com/repos/huggingface/transformers/issues/28702/events | https://github.com/huggingface/transformers/issues/28702 | 2,100,102,165 | I_kwDOCUB6oc59LQQV | 28,702 | Numpy version check failures | {
"login": "Iron-Bound",
"id": 7122848,
"node_id": "MDQ6VXNlcjcxMjI4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7122848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iron-Bound",
"html_url": "https://github.com/Iron-Bound",
"followers_url": "https://api.github.com/users/Iron-Bound/followers",
"following_url": "https://api.github.com/users/Iron-Bound/following{/other_user}",
"gists_url": "https://api.github.com/users/Iron-Bound/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iron-Bound/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iron-Bound/subscriptions",
"organizations_url": "https://api.github.com/users/Iron-Bound/orgs",
"repos_url": "https://api.github.com/users/Iron-Bound/repos",
"events_url": "https://api.github.com/users/Iron-Bound/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iron-Bound/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-25T10:48:25 | 2024-01-30T10:01:36 | null | NONE | null | ### System Info
latest docker container from `rocm/pytorch`
### Packages pip/conda
numpy 1.26.3
transformers 4.37.1
peft 0.7.1
accelerate 0.26.1
### Error
```
Python 3.9.18 (main, Sep 11 2023, 13:41:44)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 57, in <module>
    require_version_core(deps[pkg])
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 117, in require_version_core
    return require_version(requirement, hint)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 39, in _compare_versions
    raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps:
1. $ pip install transformers
2. $ python3
3. import transformers
Hacky fix: disabling the check in `transformers/utils/versions.py` gets past the error.
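For diagnosis, here is a stdlib sketch of roughly what such a version check does (hedged: the exact `transformers` implementation may differ). The installed version is resolved from package metadata, so `found=None` can occur when numpy's `dist-info` metadata is missing or corrupt even though `import numpy` itself succeeds — which would point at the environment rather than at `transformers`.

```python
import importlib.metadata

def installed_version(pkg):
    """Resolve a distribution's version from its installed metadata."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        # Metadata missing/corrupt -> no version, even if the module imports.
        return None

print(installed_version("surely-not-an-installed-distribution"))  # -> None
```

Running `installed_version("numpy")` in the failing environment should show whether the metadata lookup is the part that breaks.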
### Expected behavior
It loads without issue. Is this too simple an answer?
"url": "https://api.github.com/repos/huggingface/transformers/issues/28702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28702/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28701/comments | https://api.github.com/repos/huggingface/transformers/issues/28701/events | https://github.com/huggingface/transformers/issues/28701 | 2,100,041,061 | I_kwDOCUB6oc59LBVl | 28,701 | HfArgumentParser does not match exact arguments | {
"login": "ahmedkooli",
"id": 56259512,
"node_id": "MDQ6VXNlcjU2MjU5NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/56259512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedkooli",
"html_url": "https://github.com/ahmedkooli",
"followers_url": "https://api.github.com/users/ahmedkooli/followers",
"following_url": "https://api.github.com/users/ahmedkooli/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedkooli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedkooli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedkooli/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedkooli/orgs",
"repos_url": "https://api.github.com/users/ahmedkooli/repos",
"events_url": "https://api.github.com/users/ahmedkooli/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedkooli/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-25T10:16:47 | 2024-01-29T10:30:31 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: macOS-14.1.1-arm64-arm-64bit
- Python version: 3.11.0
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
While running `examples/pytorch/text-classification/run_classification.py`, I noticed that the argument parser does not require an exact keyword match, but rather accepts a prefix of the expected keyword. For example, running:
```bash
python run_classification.py \
--model_name_or_path bert-base-uncased \
--dataset_name glue \
--dataset_config_name mrpc \
--shuffle_train_dataset \
--max_train_samples 20 \
--max_eval_samples 20 \
--metric_name accuracy \
--text_column_na "sentence1,sentence2" \
--do_train \
--do_eval \
--do_predict \
--max_seq_length 512 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 1 \
--output_dir /tmp/glue_mrpc/ \
--overwrite_output_dir \
```
works, whereas the argument `text_column_na` doesn't exist, and it replaces `text_column_names`. Is this meant to be? I think this can lead to unexpected behaviours. Thanks in advance.
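This behaviour matches Python's `argparse` prefix matching: with the default `allow_abbrev=True`, any unambiguous prefix of a long option is accepted, and `HfArgumentParser` subclasses `ArgumentParser`, so it would inherit this unless it overrides it (an assumption worth verifying against the installed version). A minimal stdlib demonstration:

```python
import argparse

# Default parser: unambiguous prefixes of long options are accepted.
parser = argparse.ArgumentParser()
parser.add_argument("--text_column_names")
args = parser.parse_args(["--text_column_na", "sentence1,sentence2"])
print(args.text_column_names)  # -> sentence1,sentence2

# allow_abbrev=False requires the exact flag name instead.
strict = argparse.ArgumentParser(allow_abbrev=False)
strict.add_argument("--text_column_names")
# strict.parse_args(["--text_column_na", "x"]) would now fail with an
# "unrecognized arguments" error instead of silently matching.
```

So one possible fix would be constructing the underlying parser with `allow_abbrev=False`.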
### Expected behavior
I expected an error due to a non-existent keyword, such as:
```bash
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--text_column_na', 'sentence1,sentence2']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28701/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28700/comments | https://api.github.com/repos/huggingface/transformers/issues/28700/events | https://github.com/huggingface/transformers/pull/28700 | 2,100,030,420 | PR_kwDOCUB6oc5lC9Ek | 28,700 | Fixed interpolation for ViT to BICUBIC as the original implementation… | {
"login": "nileshkokane01",
"id": 8201108,
"node_id": "MDQ6VXNlcjgyMDExMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nileshkokane01",
"html_url": "https://github.com/nileshkokane01",
"followers_url": "https://api.github.com/users/nileshkokane01/followers",
"following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}",
"gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions",
"organizations_url": "https://api.github.com/users/nileshkokane01/orgs",
"repos_url": "https://api.github.com/users/nileshkokane01/repos",
"events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}",
"received_events_url": "https://api.github.com/users/nileshkokane01/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-25T10:11:56 | 2024-01-26T05:40:41 | null | CONTRIBUTOR | null |
# What does this PR do?
This PR fixes the default interpolation mismatch between the Hugging Face library and the original implementation: the original implementation uses BICUBIC by default, but the Hugging Face default was BILINEAR.
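For context, bilinear and bicubic resampling produce genuinely different pixel values, so the default affects what the model sees. A 1-D sketch (using the Catmull-Rom cubic, a common bicubic kernel; helper names are illustrative, not from the library):

```python
def linear(p0, p1, t):
    # bilinear resampling reduces to linear interpolation along each axis
    return p0 + (p1 - p0) * t

def catmull_rom(pm1, p0, p1, p2, t):
    # cubic interpolation between p0 and p1, using two extra neighbours
    return 0.5 * (
        2 * p0
        + (-pm1 + p1) * t
        + (2 * pm1 - 5 * p0 + 4 * p1 - p2) * t**2
        + (-pm1 + 3 * p0 - 3 * p1 + p2) * t**3
    )

pm1, p0, p1, p2 = 0.0, 10.0, 0.0, 0.0
lin = linear(p0, p1, 0.5)                # 5.0
cub = catmull_rom(pm1, p0, p1, p2, 0.5)  # 5.625
assert lin != cub  # same samples, different interpolated value
```

The cubic kernel uses neighbouring samples and can over/undershoot, which is why matching the original implementation's choice matters for reproducing its outputs.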
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28180
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28700/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28700",
"html_url": "https://github.com/huggingface/transformers/pull/28700",
"diff_url": "https://github.com/huggingface/transformers/pull/28700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28700.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28699/comments | https://api.github.com/repos/huggingface/transformers/issues/28699/events | https://github.com/huggingface/transformers/pull/28699 | 2,099,992,832 | PR_kwDOCUB6oc5lC02h | 28,699 | fix: corrected misleading log message in save_pretrained function | {
"login": "mturetskii",
"id": 96064903,
"node_id": "U_kgDOBbnVhw",
"avatar_url": "https://avatars.githubusercontent.com/u/96064903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mturetskii",
"html_url": "https://github.com/mturetskii",
"followers_url": "https://api.github.com/users/mturetskii/followers",
"following_url": "https://api.github.com/users/mturetskii/following{/other_user}",
"gists_url": "https://api.github.com/users/mturetskii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mturetskii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mturetskii/subscriptions",
"organizations_url": "https://api.github.com/users/mturetskii/orgs",
"repos_url": "https://api.github.com/users/mturetskii/repos",
"events_url": "https://api.github.com/users/mturetskii/events{/privacy}",
"received_events_url": "https://api.github.com/users/mturetskii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-25T09:54:47 | 2024-01-26T12:11:26 | 2024-01-26T11:52:54 | CONTRIBUTOR | null | # What does this PR do?
This PR extends the fix implemented in a previous PR ([#28181](https://github.com/huggingface/transformers/pull/28181)), covering all cases where the saved file name might differ from the expected `WEIGHTS_NAME`. The earlier fix did not account for scenarios where the saved file could be named `ADAPTER_WEIGHTS_NAME` or `ADAPTER_SAFE_WEIGHTS_NAME`, leaving a potential for misleading log messages. This update ensures that all such cases are covered, and the log message accurately reflects the name of the file being saved in the `save_pretrained` function.
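A minimal sketch of the filename selection the log message should reflect (the constant values follow the usual transformers names, but treat them and the selection logic as illustrative assumptions, not the PR's exact code):

```python
WEIGHTS_NAME = "pytorch_model.bin"
SAFE_WEIGHTS_NAME = "model.safetensors"
ADAPTER_WEIGHTS_NAME = "adapter_model.bin"
ADAPTER_SAFE_WEIGHTS_NAME = "adapter_model.safetensors"

def logged_weights_name(is_peft_adapter: bool, safe_serialization: bool) -> str:
    # Log the name of the file that was actually written, so the
    # message cannot mislead for adapter or safetensors saves.
    if is_peft_adapter:
        return ADAPTER_SAFE_WEIGHTS_NAME if safe_serialization else ADAPTER_WEIGHTS_NAME
    return SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME

assert logged_weights_name(False, True) == "model.safetensors"
assert logged_weights_name(True, False) == "adapter_model.bin"
```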
Fixes https://github.com/huggingface/transformers/issues/28076
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28699/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28699",
"html_url": "https://github.com/huggingface/transformers/pull/28699",
"diff_url": "https://github.com/huggingface/transformers/pull/28699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28699.patch",
"merged_at": "2024-01-26T11:52:54"
} |
https://api.github.com/repos/huggingface/transformers/issues/28698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28698/comments | https://api.github.com/repos/huggingface/transformers/issues/28698/events | https://github.com/huggingface/transformers/issues/28698 | 2,099,901,347 | I_kwDOCUB6oc59KfOj | 28,698 | WhitespaceSplit not working | {
"login": "pradeepdev-1995",
"id": 41164884,
"node_id": "MDQ6VXNlcjQxMTY0ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pradeepdev-1995",
"html_url": "https://github.com/pradeepdev-1995",
"followers_url": "https://api.github.com/users/pradeepdev-1995/followers",
"following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}",
"gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions",
"organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs",
"repos_url": "https://api.github.com/users/pradeepdev-1995/repos",
"events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 12 | 2024-01-25T09:04:24 | 2024-01-31T01:14:02 | null | NONE | null | ### System Info
torch==2.0.1
transformers==4.37.1
tokenizers==0.15.1
Python 3.8.16
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to tokenize a sentence both at the subword level and word by word, using WhitespaceSplit for the latter.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", trust_remote_code=True,
use_fast=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
sentence = "Transformers tokenization testing"
tokenized_sentence = tokenizer.tokenize(sentence)
print("without WhitespaceSplit")
print(tokenized_sentence)
from tokenizers.pre_tokenizers import WhitespaceSplit
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", trust_remote_code=True,
pretokenizer=WhitespaceSplit(),
use_fast=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
tokenized_sentence = tokenizer.tokenize(sentence)
print("with WhitespaceSplit ")
print(tokenized_sentence)
```
In both cases I get the same split output, shown below:
```
without WhitespaceSplit
['▁Trans', 'form', 'ers', '▁token', 'ization', '▁testing']
with WhitespaceSplit
['▁Trans', 'form', 'ers', '▁token', 'ization', '▁testing']
```
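For reference, a pure-Python sketch (hypothetical helper, not the library call) of the word-level split that `WhitespaceSplit` is expected to produce; a likely cause of the behaviour above is that `from_pretrained` silently ignores unknown kwargs such as `pretokenizer`, and slow (`use_fast=False`) tokenizers do not use `tokenizers` pre-tokenizers at all:

```python
import re

def whitespace_split(text: str) -> list[str]:
    # Split on runs of whitespace only; punctuation stays attached
    # to the neighbouring word, mirroring WhitespaceSplit semantics.
    return re.findall(r"\S+", text)

assert whitespace_split("Transformers tokenization testing") == [
    "Transformers",
    "tokenization",
    "testing",
]
```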
### Expected behavior
With WhitespaceSplit, the sentence should be split word by word, such as:
```
["Transformers", "tokenization", "testing"]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28698/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28697/comments | https://api.github.com/repos/huggingface/transformers/issues/28697/events | https://github.com/huggingface/transformers/issues/28697 | 2,099,883,324 | I_kwDOCUB6oc59Ka08 | 28,697 | Bug happens in processor use model cached in local | {
"login": "wwx007121",
"id": 13541369,
"node_id": "MDQ6VXNlcjEzNTQxMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/13541369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwx007121",
"html_url": "https://github.com/wwx007121",
"followers_url": "https://api.github.com/users/wwx007121/followers",
"following_url": "https://api.github.com/users/wwx007121/following{/other_user}",
"gists_url": "https://api.github.com/users/wwx007121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwx007121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwx007121/subscriptions",
"organizations_url": "https://api.github.com/users/wwx007121/orgs",
"repos_url": "https://api.github.com/users/wwx007121/repos",
"events_url": "https://api.github.com/users/wwx007121/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwx007121/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.githu... | null | 5 | 2024-01-25T08:55:33 | 2024-01-26T08:03:49 | 2024-01-26T08:02:34 | NONE | null | ### System Info
version: transformers>=4.37.0
The bug occurs in https://github.com/huggingface/transformers/blob/main/src/transformers/processing_utils.py, line 466.
I understand the purpose of this code, but it conflicts with the code in `utils/hub.py` (line 426), where the error detail message may have been changed.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I made a simple local fix by changing `if "does not appear to have a file named processor_config.json." in str(e):` to `if "processor_config.json." in str(e):`. Otherwise, downgrading to version 4.36.2 also works.
### Expected behavior
I think there may be a better solution.
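A self-contained sketch of the workaround described above (the function name is hypothetical): matching on the file name alone is robust to wording changes in the error text, unlike matching the full sentence.

```python
def mentions_processor_config(error_message: str) -> bool:
    # The fragile check matched a full sentence; the file name is stabler.
    return "processor_config.json" in error_message

old_style = "repo does not appear to have a file named processor_config.json."
new_style = "Could not locate processor_config.json inside the cached repo."
assert mentions_processor_config(old_style)
assert mentions_processor_config(new_style)
assert not mentions_processor_config("Could not locate config.json.")
```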
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28697/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28697/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28696/comments | https://api.github.com/repos/huggingface/transformers/issues/28696/events | https://github.com/huggingface/transformers/pull/28696 | 2,099,856,257 | PR_kwDOCUB6oc5lCXll | 28,696 | Add French translation: french README.md | {
"login": "ThibaultLengagne",
"id": 11950126,
"node_id": "MDQ6VXNlcjExOTUwMTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThibaultLengagne",
"html_url": "https://github.com/ThibaultLengagne",
"followers_url": "https://api.github.com/users/ThibaultLengagne/followers",
"following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}",
"gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions",
"organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs",
"repos_url": "https://api.github.com/users/ThibaultLengagne/repos",
"events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-25T08:39:14 | 2024-01-29T18:07:49 | 2024-01-29T18:07:49 | CONTRIBUTOR | null | # What does this PR do?
Add the French version of README.md
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28696/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28696",
"html_url": "https://github.com/huggingface/transformers/pull/28696",
"diff_url": "https://github.com/huggingface/transformers/pull/28696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28696.patch",
"merged_at": "2024-01-29T18:07:49"
} |
https://api.github.com/repos/huggingface/transformers/issues/28695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28695/comments | https://api.github.com/repos/huggingface/transformers/issues/28695/events | https://github.com/huggingface/transformers/pull/28695 | 2,099,803,282 | PR_kwDOCUB6oc5lCMJK | 28,695 | [`chore`] Add missing space in warning | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-25T08:06:00 | 2024-01-25T09:36:15 | 2024-01-25T09:34:53 | MEMBER | null | # What does this PR do?
Adds a missing space in a warning message.
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
@amyeroberts
- Tom Aarsen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28695/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28695",
"html_url": "https://github.com/huggingface/transformers/pull/28695",
"diff_url": "https://github.com/huggingface/transformers/pull/28695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28695.patch",
"merged_at": "2024-01-25T09:34:53"
} |
https://api.github.com/repos/huggingface/transformers/issues/28694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28694/comments | https://api.github.com/repos/huggingface/transformers/issues/28694/events | https://github.com/huggingface/transformers/pull/28694 | 2,099,760,344 | PR_kwDOCUB6oc5lCCwB | 28,694 | Update question_answering.md | {
"login": "yusyel",
"id": 25446622,
"node_id": "MDQ6VXNlcjI1NDQ2NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25446622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusyel",
"html_url": "https://github.com/yusyel",
"followers_url": "https://api.github.com/users/yusyel/followers",
"following_url": "https://api.github.com/users/yusyel/following{/other_user}",
"gists_url": "https://api.github.com/users/yusyel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yusyel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusyel/subscriptions",
"organizations_url": "https://api.github.com/users/yusyel/orgs",
"repos_url": "https://api.github.com/users/yusyel/repos",
"events_url": "https://api.github.com/users/yusyel/events{/privacy}",
"received_events_url": "https://api.github.com/users/yusyel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-25T07:39:01 | 2024-01-25T14:06:38 | 2024-01-25T14:06:38 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo, from:
`model = TFAutoModelForQuestionAnswering("distilbert-base-uncased")`
to:
`model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28694/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28694",
"html_url": "https://github.com/huggingface/transformers/pull/28694",
"diff_url": "https://github.com/huggingface/transformers/pull/28694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28694.patch",
"merged_at": "2024-01-25T14:06:38"
} |
https://api.github.com/repos/huggingface/transformers/issues/28693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28693/comments | https://api.github.com/repos/huggingface/transformers/issues/28693/events | https://github.com/huggingface/transformers/pull/28693 | 2,099,623,383 | PR_kwDOCUB6oc5lBkyx | 28,693 | Added code to match the default interpolation for convnext | {
"login": "nileshkokane01",
"id": 8201108,
"node_id": "MDQ6VXNlcjgyMDExMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nileshkokane01",
"html_url": "https://github.com/nileshkokane01",
"followers_url": "https://api.github.com/users/nileshkokane01/followers",
"following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}",
"gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions",
"organizations_url": "https://api.github.com/users/nileshkokane01/orgs",
"repos_url": "https://api.github.com/users/nileshkokane01/repos",
"events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}",
"received_events_url": "https://api.github.com/users/nileshkokane01/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-25T05:55:23 | 2024-01-25T11:27:43 | null | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the default interpolation type for ConvNeXt to bicubic, based on the original implementation. It also adds an assert in image_processing_convnext_pytorch.py.
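A hypothetical sketch of the default change plus a guard of the kind described (the enum values follow PIL's `Image.Resampling`; the class name, `crop_pct` argument, and assert are illustrative, not the PR's actual diff):

```python
from enum import IntEnum

class Resampling(IntEnum):
    BILINEAR = 2  # previous default
    BICUBIC = 3   # matches the original ConvNeXt implementation

class ConvNextImageProcessorSketch:
    def __init__(self, resample: Resampling = Resampling.BICUBIC,
                 crop_pct: float = 224 / 256):
        # Guard against invalid configuration at construction time.
        assert 0.0 < crop_pct <= 1.0, "crop_pct must be in (0, 1]"
        self.resample = resample
        self.crop_pct = crop_pct

proc = ConvNextImageProcessorSketch()
assert proc.resample is Resampling.BICUBIC
```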
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28180
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28693/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28693",
"html_url": "https://github.com/huggingface/transformers/pull/28693",
"diff_url": "https://github.com/huggingface/transformers/pull/28693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28693.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28692/comments | https://api.github.com/repos/huggingface/transformers/issues/28692/events | https://github.com/huggingface/transformers/pull/28692 | 2,099,512,869 | PR_kwDOCUB6oc5lBNct | 28,692 | Verify if output has logits or prediction logits in fill-mask pipeline | {
"login": "pedrogengo",
"id": 27240528,
"node_id": "MDQ6VXNlcjI3MjQwNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/27240528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pedrogengo",
"html_url": "https://github.com/pedrogengo",
"followers_url": "https://api.github.com/users/pedrogengo/followers",
"following_url": "https://api.github.com/users/pedrogengo/following{/other_user}",
"gists_url": "https://api.github.com/users/pedrogengo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pedrogengo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pedrogengo/subscriptions",
"organizations_url": "https://api.github.com/users/pedrogengo/orgs",
"repos_url": "https://api.github.com/users/pedrogengo/repos",
"events_url": "https://api.github.com/users/pedrogengo/events{/privacy}",
"received_events_url": "https://api.github.com/users/pedrogengo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-25T03:51:41 | 2024-01-25T17:39:06 | null | CONTRIBUTOR | null | # What does this PR do?
It checks if the output has the key "logits" or "prediction_logits" to avoid breaking the fill-mask pipeline for models like BertForPreTraining, which returns:
```
return BertForPreTrainingOutput(
loss=total_loss,
prediction_logits=prediction_scores,
seq_relationship_logits=seq_relationship_score,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
```
Error without this change:
<img width="1122" alt="image" src="https://github.com/huggingface/transformers/assets/27240528/a7491281-db09-4b23-a795-5f102ec9d911">
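A minimal sketch of the kind of check this PR describes (illustrative only — plain dicts stand in for the model output objects, and the helper name is hypothetical, not the actual pipeline code):

```python
def get_mask_logits(model_output):
    """Pick whichever logits field the model's output provides."""
    # Standard masked-LM heads expose `logits`; BertForPreTraining
    # exposes `prediction_logits` instead (see the output above).
    if "logits" in model_output:
        return model_output["logits"]
    if "prediction_logits" in model_output:
        return model_output["prediction_logits"]
    raise KeyError("Model output has neither 'logits' nor 'prediction_logits'")

# Plain dicts standing in for model outputs:
print(get_mask_logits({"logits": [0.1, 0.9]}))             # [0.1, 0.9]
print(get_mask_logits({"prediction_logits": [0.3, 0.7]}))  # [0.3, 0.7]
```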
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28692/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28692",
"html_url": "https://github.com/huggingface/transformers/pull/28692",
"diff_url": "https://github.com/huggingface/transformers/pull/28692.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28692.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28691/comments | https://api.github.com/repos/huggingface/transformers/issues/28691/events | https://github.com/huggingface/transformers/issues/28691 | 2,099,511,128 | I_kwDOCUB6oc59I_9Y | 28,691 | error with DataCollatorForLanguageModeling | {
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-25T03:49:18 | 2024-01-30T09:37:09 | 2024-01-30T09:37:09 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-3.10.0-1160.99.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to fine-tune gpt2 for text summarization, but an error occurs when I create the batch input with DataCollatorForLanguageModeling. My code is as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForTokenClassification, \
DataCollatorWithPadding, DataCollatorForLanguageModeling, DataCollatorForSeq2Seq
model_path = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
content = [
"x x",
"x x x"
]
summary = [
"y y",
"y y y"
]
batch = []
for c, s in zip(content, summary):
sample_input_ids = tokenizer.encode('<content>' + c + '<summary>')
label_input_ids = tokenizer.encode(s) + [tokenizer.eos_token_id]
input_ids = sample_input_ids + label_input_ids
labels = [-100] * len(sample_input_ids) + label_input_ids
batch.append({"input_ids": input_ids, "labels": labels})
data_collator1 = DataCollatorForTokenClassification(tokenizer)
# data_collator2 = DataCollatorWithPadding(tokenizer)
data_collator3 = DataCollatorForLanguageModeling(tokenizer, mlm=False)
data_collator4 = DataCollatorForSeq2Seq(tokenizer)
# I used this collator to create the batch input and an error occurred.
print(data_collator3(batch))
"""
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
"""
# I also tried these two collators, and there's an extra attention_mask returned, but gpt2 doesn't need it.
# print(data_collator1(batch))
# print(data_collator4(batch))
"""
{'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331,
50256, 50256, 50256],
[ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88,
331, 331, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331,
50256, -100, -100],
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88,
331, 331, 50256]])}
{'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331,
50256, 50256, 50256],
[ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88,
331, 331, 50256]]), 'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331,
50256, -100, -100],
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88,
331, 331, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
"""
print(1)
```
### Expected behavior
Considering I am fine-tuning gpt2 for text summarization, I expect to get a batch input like this:
```
{
'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331, 50256, 50256, 50256],
[ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88, 331, 331, 50256]]),
'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331,50256, -100, -100],
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88,331, 331, 50256]])
}
``` | {
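One way to get the batch shown above is a small custom collator that right-pads `input_ids` with the pad token and `labels` with -100, returning no `attention_mask`. A sketch with plain Python lists (the function name is hypothetical; in real code you'd wrap the lists in `torch.tensor(...)`):

```python
def pad_batch(features, pad_token_id, label_pad_id=-100):
    """Right-pad input_ids with the pad token and labels with -100."""
    max_len = max(len(f["input_ids"]) for f in features)
    input_ids, labels = [], []
    for f in features:
        pad = max_len - len(f["input_ids"])
        input_ids.append(f["input_ids"] + [pad_token_id] * pad)
        labels.append(f["labels"] + [label_pad_id] * pad)
    return {"input_ids": input_ids, "labels": labels}

batch = pad_batch(
    [{"input_ids": [1, 2], "labels": [-100, 2]},
     {"input_ids": [1, 2, 3], "labels": [-100, 2, 3]}],
    pad_token_id=50256,  # gpt2's eos/pad token id
)
print(batch["input_ids"])  # [[1, 2, 50256], [1, 2, 3]]
print(batch["labels"])     # [[-100, 2, -100], [-100, 2, 3]]
```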
"url": "https://api.github.com/repos/huggingface/transformers/issues/28691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28691/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28690/comments | https://api.github.com/repos/huggingface/transformers/issues/28690/events | https://github.com/huggingface/transformers/issues/28690 | 2,099,443,948 | I_kwDOCUB6oc59Ivjs | 28,690 | Running into AttributeErrorAttributeError from 4.37.0 | {
"login": "ningziwen",
"id": 8747309,
"node_id": "MDQ6VXNlcjg3NDczMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8747309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ningziwen",
"html_url": "https://github.com/ningziwen",
"followers_url": "https://api.github.com/users/ningziwen/followers",
"following_url": "https://api.github.com/users/ningziwen/following{/other_user}",
"gists_url": "https://api.github.com/users/ningziwen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ningziwen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ningziwen/subscriptions",
"organizations_url": "https://api.github.com/users/ningziwen/orgs",
"repos_url": "https://api.github.com/users/ningziwen/repos",
"events_url": "https://api.github.com/users/ningziwen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ningziwen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-25T02:28:21 | 2024-01-30T13:55:07 | 2024-01-30T13:54:43 | NONE | null | ### System Info
Only happens from 4.37.0
```
- `transformers` version: 4.37.0
- Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
Switching back to 4.36.2 and it works well.
```
- `transformers` version: 4.36.2
- Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@fxmarty, @michaelbenayoun, @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run `torchrun` against one the simple file.
```
if __name__ == '__main__':
from transformers.utils.fx import HFTracer
```
Running into
```
Traceback (most recent call last):
Traceback (most recent call last):
File "/test/bin/pytorch_tests/testCustom", line 2, in <module>
File "/test/bin/pytorch_tests/testCustom", line 2, in <module>
from transformers.utils.fx import HFTracerfrom transformers.utils.fx import HFTracer
File "/usr/local/lib/python3.10/site-packages/transformers/utils/fx.py", line 611, in <module>
File "/usr/local/lib/python3.10/site-packages/transformers/utils/fx.py", line 611, in <module>
torch.nn.functional.scaled_dot_product_attention: torch_nn_functional_scaled_dot_product_attention,torch.nn.functional.scaled_dot_product_attention: torch_nn_functional_scaled_dot_product_attention,
AttributeErrorAttributeError: : module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'. Did you mean: '. Did you mean: '_scaled_dot_product_attention_scaled_dot_product_attention'?'?
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31) of binary: /usr/local/bin/python3.10
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/test/bin/pytorch_tests/testCustom FAILED
```
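The traceback points at `torch.nn.functional.scaled_dot_product_attention`, which only became a public API in PyTorch 2.0 — 1.13 ships the private `_scaled_dot_product_attention`, hence the "Did you mean" hint. A small version gate like the following (a sketch; the version strings are taken from the environment reports above, and the function name is hypothetical) can confirm whether an installed torch is new enough:

```python
def supports_public_sdpa(torch_version: str) -> bool:
    """True if torch.nn.functional.scaled_dot_product_attention should exist."""
    base = torch_version.split("+")[0]  # strip local tags like "+cu117"
    major, minor = (int(x) for x in base.split(".")[:2])
    return (major, minor) >= (2, 0)

print(supports_public_sdpa("1.13.1+cu117"))  # False
print(supports_public_sdpa("2.1.2+cu121"))   # True
```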
### Expected behavior
Should succeed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28690/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28689/comments | https://api.github.com/repos/huggingface/transformers/issues/28689/events | https://github.com/huggingface/transformers/issues/28689 | 2,099,408,215 | I_kwDOCUB6oc59Im1X | 28,689 | safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization | {
"login": "tamanna-mostafa",
"id": 156403336,
"node_id": "U_kgDOCVKGiA",
"avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamanna-mostafa",
"html_url": "https://github.com/tamanna-mostafa",
"followers_url": "https://api.github.com/users/tamanna-mostafa/followers",
"following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}",
"gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions",
"organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs",
"repos_url": "https://api.github.com/users/tamanna-mostafa/repos",
"events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-25T01:41:10 | 2024-01-25T01:41:10 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I fine-tuned the mistral 7b model with preference data.
2. I ran DPO on the SFT model.
3. To merge my LoRA adapters, I ran the following command:
`python merge_peft_adaptors_gpu.py --base_model_name_or_path <> --peft_model_path <> --output_dir <> --safe_serialization`
This is the `merge_peft_adaptors_gpu.py` script:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--base_model_name_or_path", type=str)
parser.add_argument("--peft_model_path", type=str)
parser.add_argument("--output_dir", type=str)
parser.add_argument("--device", type=str, default="auto")
parser.add_argument("--safe_serialization", action="store_true")
return parser.parse_args()
####
def main():
args = get_args()
if args.device == 'auto':
device_arg = { 'device_map': 'auto' }
else:
device_arg = { 'device_map': { "": args.device} }
print(f"Loading base model: {args.base_model_name_or_path}")
base_model = AutoModelForCausalLM.from_pretrained(
args.base_model_name_or_path,
return_dict=True,
torch_dtype=torch.float16,
trust_remote_code=True,
**device_arg
)
#device = torch.device('cpu')
#base_model.to(device)
print(f"Loading PEFT: {args.peft_model_path}")
model = PeftModel.from_pretrained(base_model, args.peft_model_path)
print("Peft Model : ", model.device)
print(f"Running merge_and_unload")
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path)
model.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization)
tokenizer.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization)
print(f"Model saved to {args.output_dir}")
####
if __name__ == "__main__" :
main()
```
4. I get the below error:
```
Loading base model: /mnt/efs/data/tammosta/files_t/output_sft_32k
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:04<00:00, 1.40s/it]
Loading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k
Traceback (most recent call last):
File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 51, in <module>
main()
File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 38, in main
model = PeftModel.from_pretrained(base_model, args.peft_model_path)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 352, in from_pretrained
model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 689, in load_adapter
adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 270, in load_peft_weights
adapters_weights = safe_load_file(filename, device=device)
File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
```
Any idea how to solve this?
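One way to narrow down an `InvalidHeaderDeserialization` error is to inspect the file's header by hand — a safetensors file begins with an 8-byte little-endian length followed by that many bytes of JSON. A diagnostic sketch (the helper is illustrative; point it at the adapter weights file that `load_file` choked on):

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a safetensors file.

    A corrupt or truncated header will fail here in roughly the same
    way it fails inside safe_open().
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header_bytes = f.read(header_len)
    if len(header_bytes) != header_len:
        raise ValueError("File truncated: header shorter than declared length")
    return json.loads(header_bytes)
```

If this raises on the adapter file, the file was likely corrupted or only partially written during saving, and re-saving the adapter is the usual fix.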
### Expected behavior
base model and peft model will be successfully merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28689/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28688/comments | https://api.github.com/repos/huggingface/transformers/issues/28688/events | https://github.com/huggingface/transformers/issues/28688 | 2,099,400,046 | I_kwDOCUB6oc59Ik1u | 28,688 | OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. | {
"login": "tamanna-mostafa",
"id": 156403336,
"node_id": "U_kgDOCVKGiA",
"avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamanna-mostafa",
"html_url": "https://github.com/tamanna-mostafa",
"followers_url": "https://api.github.com/users/tamanna-mostafa/followers",
"following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}",
"gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions",
"organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs",
"repos_url": "https://api.github.com/users/tamanna-mostafa/repos",
"events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 16 | 2024-01-25T01:30:26 | 2024-02-01T02:22:58 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@SunMarc @muellerzr
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Fine-tuned the mistral 7b model with 32k preference data.
2. Ran DPO on the SFT output.
3. Ran the `docker run` command on the DPO output to host the model in Docker so I could run inference.
### Expected behavior
The expected behavior was that Docker would start running. However, I got this error instead:
```
2024-01-24T20:31:06.334853Z ERROR text_generation_launcher: Error when initializing model
Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 778, in main
return _main(
File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 83, in serve
server.serve(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 207, in serve
asyncio.run(
File "/opt/conda/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
self.run_forever()
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
handle._run()
File "/opt/conda/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
> File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 159, in serve_inner
model = get_model(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 129, in get_model
config_dict, _ = PretrainedConfig.get_config_dict(
File "/opt/conda/lib/python3.9/site-packages/transformers/configuration_utils.py", line 620, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
resolved_config_file = cached_file(
File "/opt/conda/lib/python3.9/site-packages/transformers/utils/hub.py", line 400, in cached_file
raise EnvironmentError(
OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/DPO_output_mistral_32k/None' for available files.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28688/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28687/comments | https://api.github.com/repos/huggingface/transformers/issues/28687/events | https://github.com/huggingface/transformers/pull/28687 | 2,099,153,324 | PR_kwDOCUB6oc5lAAa8 | 28,687 | [Whisper] Refactor forced_decoder_ids & prompt ids | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-24T21:38:14 | 2024-01-31T12:02:08 | 2024-01-31T12:02:07 | MEMBER | null | # What does this PR do?
This PR refactors `forced_decoder_ids`, making sure that we now always pass prompted ids as `decoder_input_ids` into generate for Whisper. The whole idea of forcing ids instead of just passing them as initial tokens was a bad design choice, and we should try to move away from it.
In addition, Whisper prompting is improved by:
- Not allowing `prompt_ids` to be passed as a numpy array
- Enable `prompt_ids` for long-form generation with two modes:
- a) prompt only the first segment
- b) prompt every segment
While a) is the only supported case in the original Whisper repo b) can be very useful as can be seen in the added slow test [here](https://github.com/huggingface/transformers/pull/28687/files#r1467376608).
This is the final code PR regarding Whisper for Transformers. In the next weeks focus will be put on writing nice docs, tutorials and blog posts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28687/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28687/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28687",
"html_url": "https://github.com/huggingface/transformers/pull/28687",
"diff_url": "https://github.com/huggingface/transformers/pull/28687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28687.patch",
"merged_at": "2024-01-31T12:02:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28686/comments | https://api.github.com/repos/huggingface/transformers/issues/28686/events | https://github.com/huggingface/transformers/pull/28686 | 2,099,095,224 | PR_kwDOCUB6oc5k_zd4 | 28,686 | Enable Gradient Checkpointing in Deformable DETR | {
"login": "FoamoftheSea",
"id": 50897218,
"node_id": "MDQ6VXNlcjUwODk3MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/50897218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FoamoftheSea",
"html_url": "https://github.com/FoamoftheSea",
"followers_url": "https://api.github.com/users/FoamoftheSea/followers",
"following_url": "https://api.github.com/users/FoamoftheSea/following{/other_user}",
"gists_url": "https://api.github.com/users/FoamoftheSea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FoamoftheSea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FoamoftheSea/subscriptions",
"organizations_url": "https://api.github.com/users/FoamoftheSea/orgs",
"repos_url": "https://api.github.com/users/FoamoftheSea/repos",
"events_url": "https://api.github.com/users/FoamoftheSea/events{/privacy}",
"received_events_url": "https://api.github.com/users/FoamoftheSea/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-24T21:00:46 | 2024-01-29T10:10:41 | 2024-01-29T10:10:41 | CONTRIBUTOR | null | # What does this PR do?
Gradient Checkpointing is not currently supported by Deformable DETR, but with slight modifications I was able to get it working in both the encoder and decoder stages, which both independently led to noticeable reductions in VRAM usage during training. This makes a default Deformable DETR configuration trainable on a 4GB GPU.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28686/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28686",
"html_url": "https://github.com/huggingface/transformers/pull/28686",
"diff_url": "https://github.com/huggingface/transformers/pull/28686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28686.patch",
"merged_at": "2024-01-29T10:10:41"
} |
https://api.github.com/repos/huggingface/transformers/issues/28685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28685/comments | https://api.github.com/repos/huggingface/transformers/issues/28685/events | https://github.com/huggingface/transformers/issues/28685 | 2,099,080,596 | I_kwDOCUB6oc59HW2U | 28,685 | torch.arange use should not use dtype=float for integer ranges, conflicts w/ DS `zero.Init()` | {
"login": "rwightman",
"id": 5702664,
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwightman",
"html_url": "https://github.com/rwightman",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"repos_url": "https://api.github.com/users/rwightman/repos",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 12 | 2024-01-24T20:50:17 | 2024-01-31T17:20:15 | null | NONE | null | ### System Info
Impacts many versions of transformers up to and including current.
### Who can help?
@ArthurZucker @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use any of a number of transformers models that use arange for integer enumerations in the calculation of position embeddings with DeepSpeed zero.Init() and a low precision dtype (float16, bfloat16), and the generated embeddings will differ significantly from what was intended.
Using Llama as an example
`t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)`
The inv_freq.dtype == float32. Single precision float can cover the required integer range for the enumeration (I believe it's in the 2k-8k range for Llama?).
However, when DeepSpeed zero.Init is used, the init function patching will override the float dtype passed in with a low precision float dtype, so float32 -> bfloat16 or float16. Thus the integer range that can be represented without significant loss drops down to 256 for bfloat16 or 2048 for float16. DeepSpeed's patching has an exception for integer dtypes: it will not cast arange to the low precision float dtype if the arange dtype is an int type.
https://github.com/microsoft/DeepSpeed/blob/0dd0c615f8e6c7947ba81a4b0993284da5ec3209/deepspeed/runtime/zero/partition_parameters.py#L245-L246
```
def zero_wrapper_for_fp_tensor_constructor(fn: Callable, target_fp_dtype: torch.dtype) -> Callable:
def wrapped_fn(*args, **kwargs) -> Tensor:
if kwargs.get("device", None) is None:
kwargs['device'] = torch.device(get_accelerator().device_name(os.environ["LOCAL_RANK"]))
tensor: Tensor = fn(*args, **kwargs)
if tensor.is_floating_point():
tensor.data = tensor.data.to(target_fp_dtype)
return tensor
return wrapped_fn
```
torch.arange defaults to an integer dtype if start/end/step are ints. In this case, though, it's best to be explicit to make the intent clear: we should explicitly set dtype=torch.long (or torch.int64, depending on your taste). Casting to float should be done after the arange. Additionally, in many position-embedding calculation scenarios, it's best to keep the calculations in float32 as long as possible, doing the final conversion to a low precision type at the very end (if that's the dtype of inference or training).
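The precision cliff described above is easy to demonstrate with just the standard library, by round-tripping values through IEEE 754 half precision (a minimal sketch of the float16 case; bfloat16 hits the same wall at 256):

```python
import struct

def to_f16(x: float) -> float:
    # round-trip a value through IEEE 754 half precision (struct's "e" format),
    # mimicking what a cast to a low-precision dtype does to arange output
    return struct.unpack("<e", struct.pack("<e", x))[0]

# integers survive the round-trip only up to 2048 (float16 has a 10-bit mantissa)
assert to_f16(2048.0) == 2048.0
assert to_f16(2049.0) == 2048.0  # nearest representable neighbours are 2048.0 and 2050.0

# above 2048, every odd index collapses onto an even neighbour -- exactly how a
# position-index enumeration gets corrupted
mismatches = sum(1 for i in range(4096) if to_f16(float(i)) != i)
print(mismatches)  # 1024 of the 4096 indices are no longer exactly representable
```

The suggested fix of an explicit integer dtype followed by a cast sidesteps this, because DeepSpeed's wrapper leaves non-floating-point tensors alone.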
### Expected behavior
Use of torch.arange should explicitly set dtype=torch.long (or int64).
Ex: for Llama,
`t = torch.arange(self.max_seq_len_cached, device=device).type_as(self.inv_freq)` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28685/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28685/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28684/comments | https://api.github.com/repos/huggingface/transformers/issues/28684/events | https://github.com/huggingface/transformers/pull/28684 | 2,098,841,913 | PR_kwDOCUB6oc5k-7oe | 28,684 | [docs] Fix doc format | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-24T18:23:14 | 2024-01-24T19:19:03 | 2024-01-24T19:19:00 | MEMBER | null | Closes a open `<hfoptions>` tag in the DeepSpeed docs :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28684/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28684",
"html_url": "https://github.com/huggingface/transformers/pull/28684",
"diff_url": "https://github.com/huggingface/transformers/pull/28684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28684.patch",
"merged_at": "2024-01-24T19:19:00"
} |
https://api.github.com/repos/huggingface/transformers/issues/28683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28683/comments | https://api.github.com/repos/huggingface/transformers/issues/28683/events | https://github.com/huggingface/transformers/issues/28683 | 2,098,818,468 | I_kwDOCUB6oc59GW2k | 28,683 | Add option to suppress progress bar in train log output | {
"login": "cohml",
"id": 62400541,
"node_id": "MDQ6VXNlcjYyNDAwNTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/62400541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cohml",
"html_url": "https://github.com/cohml",
"followers_url": "https://api.github.com/users/cohml/followers",
"following_url": "https://api.github.com/users/cohml/following{/other_user}",
"gists_url": "https://api.github.com/users/cohml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cohml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cohml/subscriptions",
"organizations_url": "https://api.github.com/users/cohml/orgs",
"repos_url": "https://api.github.com/users/cohml/repos",
"events_url": "https://api.github.com/users/cohml/events{/privacy}",
"received_events_url": "https://api.github.com/users/cohml/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
... | closed | false | null | [] | null | 2 | 2024-01-24T18:07:26 | 2024-01-24T22:34:30 | 2024-01-24T22:34:29 | NONE | null | ### Feature request
When training a transformer model with `transformers` - certainly when using the `Trainer.train` API, but probably with other methods as well - a `tqdm`-style progress bar is printed to the screen.
This is very useful when monitoring training in the terminal in real time. But it really messes up logging when this output is piped to a file.
This is because the progress bar appears to use the carriage return character to give the illusion of refreshing the bar, but that character wreaks havoc when trying to `cat`, `less`, or `grep` through a file.
Here's an example of what I mean:
```bash
❯ grep -c eval_mse train.log
85
❯ grep eval_mse train.log
100%|██████████| 850/850 [1:41:01<00:00, 4.69s/it]{'eval_loss': 0.6620487570762634, 'eval_mse': 0.8136637373073005, 'eval_qwk': 0.7408296916568654, 'eval_runtime': 15.4491, 'eval_samples_per_second': 43.239, 'eval_steps_per_second': 2.719, 'epoch': 49.41}
{'eval_loss': 0.6605485081672668, 'eval_mse': 0.812741345170533, 'eval_qwk': 0.7413115848125768, 'eval_runtime': 15.2233, 'eval_samples_per_second': 43.88, 'eval_steps_per_second': 2.759, 'epoch': 50.0}
```
This shows that my log file has 85 lines containing the substring `eval_mse`, but when I try to view the individual lines themselves, the carriage return eats almost all of the output.
Meanwhile, manually replacing those characters shows all the matches (only last 5 shown here for brevity):
```bash
❯ sed 's/\r/\n/g' train.log | grep eval_mse | nl | tail -5
81 96%|█████████▋| 820/850 [1:37:14<03:08, 6.27s/it]{'eval_loss': 0.6764867305755615, 'eval_mse': 0.8224881544548046, 'eval_qwk': 0.7378055733442015, 'eval_runtime': 14.7125, 'eval_samples_per_second': 45.404, 'eval_steps_per_second': 2.855, 'epoch': 47.65}
82 98%|█████████▊| 830/850 [1:38:24<02:07, 6.36s/it]{'eval_loss': 0.6555904746055603, 'eval_mse': 0.8096854324958218, 'eval_qwk': 0.7447225139461018, 'eval_runtime': 14.7862, 'eval_samples_per_second': 45.177, 'eval_steps_per_second': 2.84, 'epoch': 48.24}
83 99%|█████████▉| 840/850 [1:39:47<01:48, 10.82s/it]{'eval_loss': 0.6539692878723145, 'eval_mse': 0.8086836641571903, 'eval_qwk': 0.744001775888472, 'eval_runtime': 14.8626, 'eval_samples_per_second': 44.945, 'eval_steps_per_second': 2.826, 'epoch': 48.82}
84 100%|██████████| 850/850 [1:41:01<00:00, 4.69s/it]{'eval_loss': 0.6620487570762634, 'eval_mse': 0.8136637373073005, 'eval_qwk': 0.7408296916568654, 'eval_runtime': 15.4491, 'eval_samples_per_second': 43.239, 'eval_steps_per_second': 2.719, 'epoch': 49.41}
85 {'eval_loss': 0.6605485081672668, 'eval_mse': 0.812741345170533, 'eval_qwk': 0.7413115848125768, 'eval_runtime': 15.2233, 'eval_samples_per_second': 43.88, 'eval_steps_per_second': 2.759, 'epoch': 50.0}
```
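For logs that have already been captured, the same cleanup the `sed` call performs can also be scripted in Python (the `raw_log` string below is a made-up sample line, not real output):

```python
# a synthetic captured log line: the progress bar ends with a carriage return
# that the following metrics dict visually overwrites in a live terminal
raw_log = "100%|##########| 850/850 [1:41:01<00:00, 4.69s/it]\r{'eval_loss': 0.66, 'epoch': 50.0}"

# equivalent of `sed 's/\r/\n/g'`: turn every carriage return into a newline
# so tools like grep and less see the metrics on their own line
cleaned = raw_log.replace("\r", "\n")

for line in cleaned.splitlines():
    print(line)
```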
This kind of thing has inconvenienced me many times. So it would be very nice, and would make the logging more readable when captured via a pipe, if this progress bar could be optionally disabled.
### Motivation
The motivation for this feature comes from training scenarios where the logging output is captured in a file or other log stream that persists. The progress bar is helpful for watching training in real time. However, once an experiment is finished, if the logging output is stored for future consultation, the progress bar just becomes an obstacle to work around. So if users could opt out of it, that would be great.
### Your contribution
I would love to contribute a solution here, but the `transformers` code base is so vast that I have no idea where to begin. If a core dev could provide some pointers to help get me started, that would be much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28683/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28682/comments | https://api.github.com/repos/huggingface/transformers/issues/28682/events | https://github.com/huggingface/transformers/pull/28682 | 2,098,597,320 | PR_kwDOCUB6oc5k-GPD | 28,682 | Add artifact name in jobs' step to maintain jobs and artifacts correspondence | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-24T16:06:06 | 2024-01-31T14:58:19 | 2024-01-31T14:58:18 | COLLABORATOR | null | # What does this PR do?
When our (actual) CI workflow files are called via the `workflow_call` event by other workflow files, the job names will be concatenated, like `Nightly CI / Model Test (models/bert, single-gpu)`.
We currently have Nightly/Past/AMD CI using `workflow_call`. We will soon have to use it for daily CI too due to the 256 matrix jobs limit of GitHub Actions.
So the (model test) job names in daily CI will become something like `Part (0) / Model Test (models/bert, single-gpu)` and nightly CI will have `Nightly CI / Part (0) / Model Test (models/bert, single-gpu)`.
_This makes it more complex for `utils/notification_service.py` to handle the job names correctly and get the job links._
**This PR implements a new approach to maintain the correspondence between jobs, links and artifacts, so `utils/notification_service.py` can have the necessary information more easily.** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28682/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28682/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28682",
"html_url": "https://github.com/huggingface/transformers/pull/28682",
"diff_url": "https://github.com/huggingface/transformers/pull/28682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28682.patch",
"merged_at": "2024-01-31T14:58:17"
} |
https://api.github.com/repos/huggingface/transformers/issues/28681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28681/comments | https://api.github.com/repos/huggingface/transformers/issues/28681/events | https://github.com/huggingface/transformers/pull/28681 | 2,098,331,447 | PR_kwDOCUB6oc5k9L1D | 28,681 | Add back in generation types | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-24T13:59:56 | 2024-01-26T19:49:20 | 2024-01-24T14:37:31 | COLLABORATOR | null | # What does this PR do?
#28494 removed some custom types in the `generation.utils` module. This has caused downstream issues in other libraries, notably Coqui-TTS, cf. #28649
This PR adds them back in so they're still importable.
cc @gante for reference when you're back | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28681/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28681",
"html_url": "https://github.com/huggingface/transformers/pull/28681",
"diff_url": "https://github.com/huggingface/transformers/pull/28681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28681.patch",
"merged_at": "2024-01-24T14:37:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28680/comments | https://api.github.com/repos/huggingface/transformers/issues/28680/events | https://github.com/huggingface/transformers/pull/28680 | 2,098,278,568 | PR_kwDOCUB6oc5k9ATB | 28,680 | fix: readme | {
"login": "ThibaultLengagne",
"id": 11950126,
"node_id": "MDQ6VXNlcjExOTUwMTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThibaultLengagne",
"html_url": "https://github.com/ThibaultLengagne",
"followers_url": "https://api.github.com/users/ThibaultLengagne/followers",
"following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}",
"gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions",
"organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs",
"repos_url": "https://api.github.com/users/ThibaultLengagne/repos",
"events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-24T13:32:39 | 2024-01-24T13:40:03 | 2024-01-24T13:40:03 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28680/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28680",
"html_url": "https://github.com/huggingface/transformers/pull/28680",
"diff_url": "https://github.com/huggingface/transformers/pull/28680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28680.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28679/comments | https://api.github.com/repos/huggingface/transformers/issues/28679/events | https://github.com/huggingface/transformers/issues/28679 | 2,098,184,627 | I_kwDOCUB6oc59D8Gz | 28,679 | GPT2 after few finetune epochs starts to generate sequence of only EOS tokens | {
"login": "tempdeltavalue",
"id": 36921178,
"node_id": "MDQ6VXNlcjM2OTIxMTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/36921178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tempdeltavalue",
"html_url": "https://github.com/tempdeltavalue",
"followers_url": "https://api.github.com/users/tempdeltavalue/followers",
"following_url": "https://api.github.com/users/tempdeltavalue/following{/other_user}",
"gists_url": "https://api.github.com/users/tempdeltavalue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tempdeltavalue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tempdeltavalue/subscriptions",
"organizations_url": "https://api.github.com/users/tempdeltavalue/orgs",
"repos_url": "https://api.github.com/users/tempdeltavalue/repos",
"events_url": "https://api.github.com/users/tempdeltavalue/events{/privacy}",
"received_events_url": "https://api.github.com/users/tempdeltavalue/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-24T12:40:43 | 2024-01-27T00:56:19 | null | NONE | null | ### System Info
I get output like this:
<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Check this ipynb
https://github.com/tempdeltavalue/temp_l/blob/main/finetune_seq2seq.ipynb
### Expected behavior
I expect the model to return something other than EOS tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28679/timeline | null | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28678/comments | https://api.github.com/repos/huggingface/transformers/issues/28678/events | https://github.com/huggingface/transformers/pull/28678 | 2,098,138,688 | PR_kwDOCUB6oc5k8hsy | 28,678 | use scaled_dot_product_attention | {
"login": "lintangsutawika",
"id": 5774558,
"node_id": "MDQ6VXNlcjU3NzQ1NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5774558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lintangsutawika",
"html_url": "https://github.com/lintangsutawika",
"followers_url": "https://api.github.com/users/lintangsutawika/followers",
"following_url": "https://api.github.com/users/lintangsutawika/following{/other_user}",
"gists_url": "https://api.github.com/users/lintangsutawika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lintangsutawika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lintangsutawika/subscriptions",
"organizations_url": "https://api.github.com/users/lintangsutawika/orgs",
"repos_url": "https://api.github.com/users/lintangsutawika/repos",
"events_url": "https://api.github.com/users/lintangsutawika/events{/privacy}",
"received_events_url": "https://api.github.com/users/lintangsutawika/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-24T12:14:22 | 2024-01-24T15:04:29 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28678/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28678",
"html_url": "https://github.com/huggingface/transformers/pull/28678",
"diff_url": "https://github.com/huggingface/transformers/pull/28678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28678.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28677/comments | https://api.github.com/repos/huggingface/transformers/issues/28677/events | https://github.com/huggingface/transformers/issues/28677 | 2,097,482,345 | I_kwDOCUB6oc59BQpp | 28,677 | Cannot find checkpoint during Trainer._load_best_model when using deepspeed | {
"login": "nathan-az",
"id": 42650258,
"node_id": "MDQ6VXNlcjQyNjUwMjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/42650258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathan-az",
"html_url": "https://github.com/nathan-az",
"followers_url": "https://api.github.com/users/nathan-az/followers",
"following_url": "https://api.github.com/users/nathan-az/following{/other_user}",
"gists_url": "https://api.github.com/users/nathan-az/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathan-az/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathan-az/subscriptions",
"organizations_url": "https://api.github.com/users/nathan-az/orgs",
"repos_url": "https://api.github.com/users/nathan-az/repos",
"events_url": "https://api.github.com/users/nathan-az/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathan-az/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-24T05:56:47 | 2024-01-24T23:15:30 | 2024-01-24T23:15:30 | NONE | null | ### System Info
```
- `transformers` version: 4.37.0
- Platform: Linux-6.2.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
Note the above was run in a container on a different instance from the job compute, but with the same docker image.
### Who can help?
@pacman100 @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Came across this using the SFT script in the [alignment handbook](https://github.com/huggingface/alignment-handbook).
I can add more information but I think the relevant info is as follows:
In terms of trainer args:
```yaml
load_best_model_at_end: true
num_train_epochs: 40
output_dir: /local_disk0/hf/outputs
overwrite_output_dir: true
resume_from_checkpoint: false
save_on_each_node: true
save_only_model: true
save_steps: 1
save_strategy: "epoch"
save_total_limit: 5
```
Note that I am attempting to save only 5 checkpoints, while keeping track of the best model and loading it at the end for saving. I also set `save_only_model` to `true`, as I don't currently need to resume training from a checkpoint, and I suspect this is the problem.
Note that the output directory `/local_disk0/hf/outputs` is a directory path that _exists_ on each node, but is _not_ a shared filesystem/NFS (so each node contains its information in that path).
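As a workaround sketch in the meantime (a hypothetical helper, not a Trainer feature), one can track the best eval metric manually and keep only the model weights in memory, so nothing beyond the parameters ever needs to be written to or read from disk:

```python
import torch
import torch.nn as nn

class BestWeightsTracker:
    # Hypothetical helper: keep the state_dict of the best-scoring eval step,
    # so only the weights (no optimizer state) ever need to be saved.
    def __init__(self):
        self.best_metric = float("inf")
        self.best_state = None

    def update(self, model, eval_loss):
        if eval_loss < self.best_metric:
            self.best_metric = eval_loss
            self.best_state = {k: v.clone() for k, v in model.state_dict().items()}

tracker = BestWeightsTracker()
model = nn.Linear(2, 2)  # stand-in for the trained model
tracker.update(model, 0.5)
tracker.update(model, 0.9)  # worse metric; stored weights unchanged
print(tracker.best_metric)  # 0.5
```

This sidesteps `deepspeed_load_checkpoint` entirely, at the cost of holding one extra copy of the weights.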
My setup is distributed multi-gpu and multi-node via pdsh.
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_multinode_launcher: pdsh
deepspeed_hostfile: {TRAIN_DIR}/hostfile
deepspeed_config_file: {CONFIG_FILE}
zero3_init_flag: true
distributed_type: DEEPSPEED
```
I've tried to clean up the stacktrace, since I'm getting multiple (it appears to be one per rank)
```
File "/local_disk0/.ephemeral_nfs/training/alignment-handbook/scripts/run_sft.py", line 164, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/databricks/python3/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 315, in train
    output = super().train(*args, **kwargs)
File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 1972, in _inner_training_loop
    self._load_best_model()
File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 2167, in _load_best_model
    deepspeed_load_checkpoint(self.model_wrapped, self.state.best_model_checkpoint)
File "/databricks/python3/lib/python3.10/site-packages/transformers/integrations/deepspeed.py", line 408, in deepspeed_load_checkpoint
    raise ValueError(f"Can't find a valid checkpoint at {checkpoint_path}")
ValueError: Can't find a valid checkpoint at /local_disk0/hf/outputs/checkpoint-29
```
### Expected behavior
I expect the model to be loaded at the end, simply so that it can be saved (i.e. the motivation is to save the model with the best eval metric for use during inference).
The error message indicates the `checkpoint` could not be found. I suspect it is expecting a full checkpoint including parameters, optimiser states, etc. so that training can continue, but I am unsure. If this is the cause, this might be more of a feature request, since it would be good to have a way to keep track of and save just the parameters of the iteration with the best eval metric. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28677/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28676/comments | https://api.github.com/repos/huggingface/transformers/issues/28676/events | https://github.com/huggingface/transformers/pull/28676 | 2,097,473,712 | PR_kwDOCUB6oc5k6P-N | 28,676 | fix(tokenization): `encode` should remove leading batch axis for all types of single batch to keep consistent. | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-24T05:47:43 | 2024-01-24T09:13:43 | 2024-01-24T08:56:30 | CONTRIBUTOR | null | # What does this PR do?
`encode` should remove the leading batch axis for all input types, to stay consistent with the `decode` method.
Fixes #28635
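For illustration, a minimal sketch of the intended behavior (the helper name is hypothetical, not the actual tokenizer code):

```python
def strip_leading_batch_axis(encoded):
    # Hypothetical helper: if `encoded` is a single-element batch (a list
    # containing one list of token ids), return the inner list so that
    # single-input `encode` stays symmetric with `decode`.
    if len(encoded) == 1 and isinstance(encoded[0], list):
        return encoded[0]
    return encoded

print(strip_leading_batch_axis([[101, 2023, 102]]))  # [101, 2023, 102]
print(strip_leading_batch_axis([101, 2023, 102]))    # already flat: unchanged
```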
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28676/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28676",
"html_url": "https://github.com/huggingface/transformers/pull/28676",
"diff_url": "https://github.com/huggingface/transformers/pull/28676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28676.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28675/comments | https://api.github.com/repos/huggingface/transformers/issues/28675/events | https://github.com/huggingface/transformers/issues/28675 | 2,097,325,509 | I_kwDOCUB6oc59AqXF | 28,675 | Swinv2ForImageClassification often outputs NaN at initialization | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "htt... | null | 0 | 2024-01-24T02:56:39 | 2024-01-25T19:54:10 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import Swinv2Config, Swinv2ForImageClassification
import torch
with torch.inference_mode():
cfg = Swinv2Config(image_size=56)
model = Swinv2ForImageClassification(cfg)
out = model(torch.rand(1, 3, 56, 56))
out.logits
```
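One way to surface such failures early (a generic helper sketch, not part of the reproducer) is to check that outputs are finite before using them:

```python
import torch

def assert_finite(tensor, name="tensor"):
    # Fail fast instead of letting NaN/Inf propagate silently downstream.
    if not torch.isfinite(tensor).all():
        raise ValueError(f"non-finite values detected in {name}")

assert_finite(torch.zeros(2, 3), "logits")  # passes silently
try:
    assert_finite(torch.tensor([float("nan")]), "logits")
except ValueError as err:
    print(err)  # non-finite values detected in logits
```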
### Expected behavior
The model should not output NaN; instead, it should raise an error if the implementation doesn't currently support a certain combination of architectural hyperparameters | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28675/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28674/comments | https://api.github.com/repos/huggingface/transformers/issues/28674/events | https://github.com/huggingface/transformers/issues/28674 | 2,097,305,722 | I_kwDOCUB6oc59Alh6 | 28,674 | Can not execute example in idefics-9b-instruct | {
"login": "ppsmk388",
"id": 60417397,
"node_id": "MDQ6VXNlcjYwNDE3Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/60417397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppsmk388",
"html_url": "https://github.com/ppsmk388",
"followers_url": "https://api.github.com/users/ppsmk388/followers",
"following_url": "https://api.github.com/users/ppsmk388/following{/other_user}",
"gists_url": "https://api.github.com/users/ppsmk388/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppsmk388/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppsmk388/subscriptions",
"organizations_url": "https://api.github.com/users/ppsmk388/orgs",
"repos_url": "https://api.github.com/users/ppsmk388/repos",
"events_url": "https://api.github.com/users/ppsmk388/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppsmk388/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-24T02:31:09 | 2024-01-24T20:10:38 | null | NONE | null | ### System Info
tokenizers-0.15.1
transformers-4.37.0
python3.8.10
Linux
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run the example in https://huggingface.co/HuggingFaceM4/idefics-9b-instruct
```
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "HuggingFaceM4/idefics-9b"
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
processor = AutoProcessor.from_pretrained(checkpoint)
# We feed to the model an arbitrary sequence of text strings and images. Images can be either URLs or PIL Images.
prompts = [
[
"https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
"In this picture from Asterix and Obelix, we can see"
],
]
# --batched mode
inputs = processor(prompts, return_tensors="pt").to(device)
# --single sample mode
# inputs = processor(prompts[0], return_tensors="pt").to(device)
# Generation args
bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
generated_ids = model.generate(**inputs, bad_words_ids=bad_words_ids, max_length=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
for i, t in enumerate(generated_text):
print(f"{i}:\n{t}\n")
```
I got:
```
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like HuggingFaceM4/idefics-9b-instruct is not the path to a directory containing a file named processor_config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'
```
in
```
processor = AutoProcessor.from_pretrained(checkpoint)
```
### Expected behavior
Successful code execution | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28674/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28673/comments | https://api.github.com/repos/huggingface/transformers/issues/28673/events | https://github.com/huggingface/transformers/pull/28673 | 2,097,044,260 | PR_kwDOCUB6oc5k41uZ | 28,673 | Phi-2 requires a disabled autocast in attention layer | {
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-23T21:49:07 | 2024-01-29T23:03:00 | null | CONTRIBUTOR | null | # What does this PR do?
Phi-2 has an attention overflow issue, and since the model weights were released under an MIT license, there is no short-term solution of replacing them (i.e., re-training the model). Therefore, the only solution we could find that covers all corner cases of the overflow is to also disable autocast in the attention layer.
This update follows the current [model file](https://huggingface.co/microsoft/phi-2/blob/main/modeling_phi.py) we have in the `microsoft/phi-2` repository. Additionally, it follows the [previous solution](https://huggingface.co/microsoft/phi-2/blob/834565c23f9b28b96ccbeabe614dd906b6db551a/modeling_phi.py#L347) we had used before the Phi integration.
Please let me know if we can think of any different solutions, or if there is anything else we can do.
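For illustration only (a sketch of the technique, not the actual Phi-2 modeling code), the score computation can be kept in fp32 with autocast disabled:

```python
import torch

def attention_scores_fp32(query, key):
    # Sketch of the technique: compute the score matmul in fp32 with autocast
    # disabled, so bf16/fp16 autocast regions cannot overflow here.
    device_type = "cuda" if query.is_cuda else "cpu"
    with torch.autocast(device_type=device_type, enabled=False):
        return torch.matmul(query.float(), key.float().transpose(-1, -2))

q = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16)
k = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16)
scores = attention_scores_fp32(q, k)
print(scores.dtype)  # torch.float32
```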
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@susnato @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28673/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28673",
"html_url": "https://github.com/huggingface/transformers/pull/28673",
"diff_url": "https://github.com/huggingface/transformers/pull/28673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28673.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28672/comments | https://api.github.com/repos/huggingface/transformers/issues/28672/events | https://github.com/huggingface/transformers/issues/28672 | 2,096,918,999 | I_kwDOCUB6oc58_HHX | 28,672 | GPT2 cannot be used with device_map='auto'; Report "found at least two devices" | {
"login": "haobozhang",
"id": 56833210,
"node_id": "MDQ6VXNlcjU2ODMzMjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/56833210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haobozhang",
"html_url": "https://github.com/haobozhang",
"followers_url": "https://api.github.com/users/haobozhang/followers",
"following_url": "https://api.github.com/users/haobozhang/following{/other_user}",
"gists_url": "https://api.github.com/users/haobozhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haobozhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haobozhang/subscriptions",
"organizations_url": "https://api.github.com/users/haobozhang/orgs",
"repos_url": "https://api.github.com/users/haobozhang/repos",
"events_url": "https://api.github.com/users/haobozhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/haobozhang/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-23T20:23:23 | 2024-01-24T04:03:40 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
A simple reproducer here:
```python
import torch

from transformers import GPT2LMHeadModel
# create a sample input:
batch_ids = {
'input_ids': torch.tensor([[312, 134, 56, 712, 351, 89, 63, 550, 971, 2]]),
'attention_mask': torch.tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
}
gpt2_large = GPT2LMHeadModel.from_pretrained('gpt2-large', cache_dir='./cache_dir', device_map='auto')
gpt2 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir='./cache_dir', device_map='auto')
loss_gpt2_large = gpt2_large(**batch_ids, labels=batch_ids['input_ids']).loss
loss_gpt2 = gpt2(**batch_ids, labels=batch_ids['input_ids']).loss
```
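As a side note, a common workaround for this class of error is to move the inputs to the device of the model's first parameters; a minimal sketch with a stand-in module (not the HF model itself):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the dispatched model
batch_ids = {"inputs": torch.randn(1, 4)}

# Move every input tensor to the device that holds the model's first parameters.
device = next(model.parameters()).device
batch_ids = {k: v.to(device) for k, v in batch_ids.items()}

out = model(batch_ids["inputs"])
print(out.shape)  # torch.Size([1, 2])
```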
### Expected behavior
Generating `loss_gpt2_large` works fine, but an error is raised when generating `loss_gpt2`:
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)`
I am not sure why this behaves differently with the same model class. Could you please provide any comments on this? Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28672/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28671/comments | https://api.github.com/repos/huggingface/transformers/issues/28671/events | https://github.com/huggingface/transformers/issues/28671 | 2,096,875,487 | I_kwDOCUB6oc58-8ff | 28,671 | Issue with finetuning Mixtral w/ deepspeed after new release | {
"login": "sam-h-bean",
"id": 43734688,
"node_id": "MDQ6VXNlcjQzNzM0Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/43734688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-h-bean",
"html_url": "https://github.com/sam-h-bean",
"followers_url": "https://api.github.com/users/sam-h-bean/followers",
"following_url": "https://api.github.com/users/sam-h-bean/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-h-bean/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-h-bean/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-h-bean/subscriptions",
"organizations_url": "https://api.github.com/users/sam-h-bean/orgs",
"repos_url": "https://api.github.com/users/sam-h-bean/repos",
"events_url": "https://api.github.com/users/sam-h-bean/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-h-bean/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-23T19:55:04 | 2024-01-24T03:20:44 | 2024-01-24T03:20:44 | CONTRIBUTOR | null | ### System Info
transformers: latest
env: ray + deepspeed on k8s
There seems to be an issue with Mixtral on the latest transformers release that manifests like
```
RuntimeError: Detected mismatch between collectives on ranks. Rank 4 is running collective:
CollectiveFingerPrint(SequenceNumber=724883, OpType=_ALLGATHER_BASE, TensorShape=[4194305], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))),
but Rank 0 is running collective:
CollectiveFingerPrint(SequenceNumber=724883, OpType=_ALLGATHER_BASE, TensorShape=[699051], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))).Collectives differ in the following aspects:
Tensor Tensor shapes: 4194305vs 699051
```
When I pin transformers to 4.36.2 the issue goes away.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Finetune mixtral w/ deepspeed + accelerate my ds_config is like so
```json
{
"fp16": {
"enabled": false
},
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": false
},
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"gather_16bit_weights_on_model_save": true,
"round_robin_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 10,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false,
"comms_logger": {
"enabled": true,
"verbose": true,
"prof_all": true,
"debug": true
}
}
```
2.
### Expected behavior
The tensors are the correct shape | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28671/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28670/comments | https://api.github.com/repos/huggingface/transformers/issues/28670/events | https://github.com/huggingface/transformers/issues/28670 | 2,096,803,973 | I_kwDOCUB6oc58-rCF | 28,670 | OSError: Can't load tokenizer for fine-tuned model | {
"login": "ccruttjr",
"id": 146245010,
"node_id": "U_kgDOCLeFkg",
"avatar_url": "https://avatars.githubusercontent.com/u/146245010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccruttjr",
"html_url": "https://github.com/ccruttjr",
"followers_url": "https://api.github.com/users/ccruttjr/followers",
"following_url": "https://api.github.com/users/ccruttjr/following{/other_user}",
"gists_url": "https://api.github.com/users/ccruttjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccruttjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccruttjr/subscriptions",
"organizations_url": "https://api.github.com/users/ccruttjr/orgs",
"repos_url": "https://api.github.com/users/ccruttjr/repos",
"events_url": "https://api.github.com/users/ccruttjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccruttjr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-23T19:11:48 | 2024-01-24T18:41:11 | 2024-01-24T18:41:10 | NONE | null | ### System Info
**I promise you this issue isn't as long as it seems.** (It's long because I included a lot of context below just in case it was needed)
Hello! I fine-tuned the gpt2-xl model on some custom data and saved the model. The directory for the saved model is `./saved/` and the file contents are `config.json generation_config.json model-00001-of-00002.safetensors model-00002-of-00002.safetensors model.safetensors model.safetensors.index.json`. Let me know if there's more information I can provide other than what's below. When attempting to use the fine-tuned model for text generation, I ran into an error running this:
```python
model = AutoModelForCausalLM.from_pretrained(model_path)
```
getting
```
Traceback (most recent call last):
File "/home/username/NCAI/inference.py", line 40, in <module>
main()
File "/home/username/NCAI/inference.py", line 32, in main
model, tokenizer = load_model(model_path)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/NCAI/inference.py", line 9, in load_model
model = AutoModelForCausalLM.from_pretrained(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3371, in from_pretrained
with safe_open(resolved_archive_file, framework="pt") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
```
I changed how I loaded up the model via
```python
config = AutoConfig.from_pretrained(model_path)
model = AutoModelForCausalLM.from_config(config)
```
which the script got through! Which is why I didn't put that issue in the title. But... I ran into an error right after with this:
```python
tokenizer = AutoTokenizer.from_pretrained(model_path)
# as well as this
tokenizer = GPT2Tokenizer.from_pretrained(model_path)
```
giving
```
Traceback (most recent call last):
File "/home/username/NCAI/inference.py", line 40, in <module>
main()
File "/home/username/NCAI/inference.py", line 32, in main
model, tokenizer = load_model(model_path)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/NCAI/inference.py", line 12, in load_model
tokenizer = AutoTokenizer.from_pretrained(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 805, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/daimyollc/miniconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2012, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for './saved/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './saved/' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
```
For some extra info, here is my config.json and generation_config.json
```json
{
"_name_or_path": "gpt2-xl",
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 1600,
"n_head": 25,
"n_inner": null,
"n_layer": 48,
"n_positions": 1024,
"output_past": true,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"torch_dtype": "float32",
"transformers_version": "4.36.2",
"use_cache": true,
"vocab_size": 50257
}
```
```json
{
"_from_model_config": true,
"bos_token_id": 50256,
"eos_token_id": 50256,
"transformers_version": "4.36.2"
}
```
Here is the code that fine-tuned and saved the new model
```python
# 1. Have Transformers determine the best tokenizer for the given model
# 2. Convert XML to readable dataset. Have the first GPU run it first so multiple GPUs aren't trying to edit the XML at
# the same time
# 3. Set the max length and padding of each eConsult and how we want to tokenize the dataset
# 4. Split dataset into training dataset and eval 80/20
# 5. Distribute tokenized datasets across multiple GPUs so as not to run out of memory
# 6. Create/return dataloader with the given data for the trainer to use
def get_dataloaders(accelerator: Accelerator, batch_size, model_name, data_location):
# 1
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# 2
with accelerator.main_process_first():
dataset = Dataset.from_pandas(process_dataset(data_location))
# 3
def tokenize_function(examples):
return tokenizer(examples["conversation"], padding="max_length", truncation=True, max_length=256)
with accelerator.main_process_first():
tokenized_dataset = dataset.map(tokenize_function, batched=True)
tokenized_dataset.set_format(
"torch", columns=["input_ids", "attention_mask"])
# 4
split_datasets = tokenized_dataset.train_test_split(test_size=0.2)
tokenized_train_dataset = split_datasets["train"]
tokenized_eval_dataset = split_datasets["test"]
# 5
train_sampler = DistributedSampler(
tokenized_train_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=True
)
eval_sampler = DistributedSampler(
tokenized_eval_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=False
)
# 6
train_dataloader = DataLoader(
tokenized_train_dataset,
batch_size=batch_size,
drop_last=True,
sampler=train_sampler
)
eval_dataloader = DataLoader(
tokenized_eval_dataset,
batch_size=batch_size*2,
drop_last=(accelerator.mixed_precision == "fp8"),
sampler=eval_sampler
)
return train_dataloader, eval_dataloader
# 1. Initialize accelerator with mixed precision and define training parameters via arguments given in command line
# 2. Sets seed (if given as a command line argument) for reproducibility
# 3. Get dataloaders
# 4. Initialize more training parameters and "prepare"/optimize them via Accelerate
# 5. Train/fine-tune model with new data & set parameters using FSDP
# 6. Evaluate quality of trainer for that epoch
# 7. Have the first GPU save the newly fine-tuned model
def training_function(args):
# 1
accelerator = Accelerator(mixed_precision=args.mixed_precision)
lr = args.lr
num_epochs = args.num_epochs
batch_size = args.batch_size
num_warmup_steps = args.num_warmup_steps
# 2
if args.seed:
set_seed(args.seed)
# 3
train_dataloader, eval_dataloader = get_dataloaders(
accelerator, batch_size, args.model_name, args.data_location)
# 4
    # Instantiate the model (we build the model here so that the seed also controls new weights initialization)
model = AutoModelForCausalLM.from_pretrained(args.model_name)
model = accelerator.prepare(model)
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=(len(train_dataloader) *
num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Initialize logging variables
total_train_loss = 0
total_eval_loss = 0
# 5
# Now we train the model
for epoch in range(num_epochs):
model.train()
total_train_loss = 0
for batch in tqdm(train_dataloader, desc="Training"):
with accelerator.accumulate(model):
# Process the batch
inputs = {k: v.to(accelerator.device)
for k, v in batch.items()}
if "labels" not in inputs:
inputs["labels"] = inputs["input_ids"]
outputs = model(**inputs)
loss = outputs.loss
total_train_loss += loss.item()
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
accelerator.wait_for_everyone()
# 6
# Evaluation loop after each training epoch
model.eval()
total_eval_loss = 0
for batch in tqdm(eval_dataloader, "Evaluating"):
with torch.no_grad():
inputs = {k: v.to(accelerator.device)
for k, v in batch.items()}
if "labels" not in inputs:
inputs["labels"] = inputs["input_ids"]
outputs = model(**inputs)
loss = outputs.loss
total_eval_loss += loss.item()
# Log the average losses
avg_train_loss = total_train_loss / len(train_dataloader)
avg_eval_loss = total_eval_loss / len(eval_dataloader)
print(
f"Epoch: {epoch}, Average Training Loss: {avg_train_loss}, Average Evaluation Loss: {avg_eval_loss}")
accelerator.wait_for_everyone()
# 7
accelerator.wait_for_everyone()
accelerator.print("saving")
accelerator.unwrap_model(model).save_pretrained(
"./saved_1000",
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
def main():
args = parse_args()
training_function(args)
if __name__ == "__main__":
start = time()
main()
print(f"Total Execution Time: {time() - start} seconds")
```
```
$ transformers-cli env
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config:
- compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 6
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {
	'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP',
'fsdp_backward_prefetch': 'BACKWARD_PRE',
'fsdp_cpu_ram_efficient_loading': True,
'fsdp_forward_prefetch': False,
'fsdp_offload_params': False,
'fsdp_sharding_strategy': 'FULL_SHARD',
'fsdp_state_dict_type': 'SHARDED_STATE_DICT',
'fsdp_sync_module_states': True,
'fsdp_use_orig_params': True
}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```bash
$ # create XML file with data we wanna use
$ conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
$ conda install transformers accelerate datasets
$ pip install bs4 pandas tqdm
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ # https://developer.nvidia.com/cuda-zone
$ # ran fine tuning file
$ python inference.py
```
### Expected behavior
For this
```python
tokenizer = GPT2Tokenizer.from_pretrained(model_path)
```
to not fail! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28670/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28669/comments | https://api.github.com/repos/huggingface/transformers/issues/28669/events | https://github.com/huggingface/transformers/pull/28669 | 2,096,778,888 | PR_kwDOCUB6oc5k37a3 | 28,669 | Use save_safetensor to disable safe serialization for XLA | {
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-23T18:56:30 | 2024-01-24T21:51:46 | 2024-01-24T11:57:46 | CONTRIBUTOR | null | # What does this PR do?
Safetensor serialization is now the default but is not yet supported by XLA. This change uses the save_safetensor argument to disable safe serialization for XLA as a workaround until XLA catches up.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/28438
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28669/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28669",
"html_url": "https://github.com/huggingface/transformers/pull/28669",
"diff_url": "https://github.com/huggingface/transformers/pull/28669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28669.patch",
"merged_at": "2024-01-24T11:57:45"
} |
https://api.github.com/repos/huggingface/transformers/issues/28668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28668/comments | https://api.github.com/repos/huggingface/transformers/issues/28668/events | https://github.com/huggingface/transformers/pull/28668 | 2,096,654,889 | PR_kwDOCUB6oc5k3f6k | 28,668 | Add W2V2 example to CTC training readme | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-23T17:55:42 | 2024-01-23T17:55:42 | null | COLLABORATOR | null | # What does this PR do?
This PR adds a W2V2-Bert training example config to the CTC folder. This might be a bit light; I can add another training config example on TIMIT or Turkish CV tomorrow if needed.
cc @sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28668/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28668",
"html_url": "https://github.com/huggingface/transformers/pull/28668",
"diff_url": "https://github.com/huggingface/transformers/pull/28668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28668.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28667/comments | https://api.github.com/repos/huggingface/transformers/issues/28667/events | https://github.com/huggingface/transformers/pull/28667 | 2,096,522,393 | PR_kwDOCUB6oc5k3CxJ | 28,667 | ENH: added new output_logits option to generate function | {
"login": "mbaak",
"id": 11329693,
"node_id": "MDQ6VXNlcjExMzI5Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11329693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbaak",
"html_url": "https://github.com/mbaak",
"followers_url": "https://api.github.com/users/mbaak/followers",
"following_url": "https://api.github.com/users/mbaak/following{/other_user}",
"gists_url": "https://api.github.com/users/mbaak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbaak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbaak/subscriptions",
"organizations_url": "https://api.github.com/users/mbaak/orgs",
"repos_url": "https://api.github.com/users/mbaak/repos",
"events_url": "https://api.github.com/users/mbaak/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbaak/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2024-01-23T16:47:09 | 2024-01-29T18:26:42 | null | NONE | null | # What does this PR do?
The output_logits option behaves like output_scores, but returns the raw, unprocessed prediction logit scores, i.e. the values before they undergo logit processing and/or warping. The latter happens by default for the regular output scores.
It's useful to have the unprocessed logit scores in certain circumstances. For example, unprocessed logit scores are very useful with causal LM models when one wants to determine the probability of a certain answer, e.g. when asking a question with a yes/no answer. In that case, getting the next-token probabilities of both "yes" and "no" (and/or their relative ratio) is of interest for classification. The reason for getting these _before_ logit processing and/or warping is that a) processing can change the probabilities, or b) it can reject the tokens of interest / reduce the number of tokens to just 1.
In practice this can be used to generate confidence / classification scores when using e.g. causal LM models for question-answering tasks. Query your language model with: "Is the {statement} correct? Answer yes or no:", take the raw logit scores and softmax them, and calculate the score: prob(yes) / (prob(yes) + prob(no)) to get a useful classification score.
For an example use case, see the paper TabLLM: Few-shot Classification of Tabular Data with Large Language Models by Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. https://arxiv.org/abs/2210.10723
In addition:
- added dedicated unit test: tests/generation/test_utils/test_return_unprocessed_logit_scores, which tests the return of logits with output_logits=True in generation.
- set output_logits=True in all other generation unit tests that also have output_scores=True.
Fixes # (issue)
NA
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
(Yes, I've seen it discussed but now cannot find the link again.)
- [ X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ X ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- generate: @gante
- text models: @ArthurZucker and @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28667/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28667/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28667",
"html_url": "https://github.com/huggingface/transformers/pull/28667",
"diff_url": "https://github.com/huggingface/transformers/pull/28667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28667.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28666/comments | https://api.github.com/repos/huggingface/transformers/issues/28666/events | https://github.com/huggingface/transformers/pull/28666 | 2,096,483,980 | PR_kwDOCUB6oc5k26Q0 | 28,666 | Improve Backbone API docs | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-23T16:31:12 | 2024-01-25T11:51:59 | 2024-01-25T11:51:59 | CONTRIBUTOR | null | I improved the wording of the Backbone API docs and added a new illustration. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28666/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28666",
"html_url": "https://github.com/huggingface/transformers/pull/28666",
"diff_url": "https://github.com/huggingface/transformers/pull/28666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28666.patch",
"merged_at": "2024-01-25T11:51:59"
} |
https://api.github.com/repos/huggingface/transformers/issues/28665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28665/comments | https://api.github.com/repos/huggingface/transformers/issues/28665/events | https://github.com/huggingface/transformers/pull/28665 | 2,096,464,018 | PR_kwDOCUB6oc5k21yg | 28,665 | Remove deprecated eager_serving fn | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-23T16:23:12 | 2024-01-23T16:53:08 | 2024-01-23T16:53:07 | MEMBER | null | The `eager_serving` method on our TF models was deprecated some time ago, and can now be removed - it was never part of the public API anyway!
EDIT: Throwing in a quick fix to the nearby `input_signature` docstring while I'm here | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28665/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28665",
"html_url": "https://github.com/huggingface/transformers/pull/28665",
"diff_url": "https://github.com/huggingface/transformers/pull/28665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28665.patch",
"merged_at": "2024-01-23T16:53:07"
} |
https://api.github.com/repos/huggingface/transformers/issues/28664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28664/comments | https://api.github.com/repos/huggingface/transformers/issues/28664/events | https://github.com/huggingface/transformers/pull/28664 | 2,096,319,075 | PR_kwDOCUB6oc5k2WrA | 28,664 | Introduce AcceleratorConfig dataclass | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 1 | 2024-01-23T15:19:14 | 2024-01-25T21:08:51 | null | CONTRIBUTOR | null | # What does this PR do?
This PR centralizes all arguments for the `Accelerator` not covered by `fsdp_config` and `deepspeed_config` into a singular dataclass that users can pass in as a json file or through raw CLI param args.
I *think* I have the CLI args configured right? But I'm not 100% sure. Advice on how to check that would be appreciated!
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28664/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28664/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28664",
"html_url": "https://github.com/huggingface/transformers/pull/28664",
"diff_url": "https://github.com/huggingface/transformers/pull/28664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28664.patch",
"merged_at": null
} |
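The PR above centralizes `Accelerator` arguments into a single dataclass that can be loaded from a JSON file or raw CLI params. A minimal sketch of what such a dataclass might look like; the field names (`split_batches`, `dispatch_batches`, `even_batches`) are illustrative guesses taken from `Accelerator`'s signature, not the PR's actual schema:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class AcceleratorConfig:
    # Field names mirror common `Accelerator` arguments; treat them as
    # illustrative placeholders, not the PR's exact definition.
    split_batches: bool = False
    dispatch_batches: Optional[bool] = None
    even_batches: bool = True

    @classmethod
    def from_json_file(cls, json_file: str) -> "AcceleratorConfig":
        # Load the config from a user-supplied JSON file.
        with open(json_file, "r", encoding="utf-8") as f:
            return cls(**json.load(f))

    def to_dict(self) -> dict:
        return asdict(self)
```

A `TrainingArguments` field could then accept either a path to such a JSON file or an instance of the dataclass directly.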
https://api.github.com/repos/huggingface/transformers/issues/28663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28663/comments | https://api.github.com/repos/huggingface/transformers/issues/28663/events | https://github.com/huggingface/transformers/issues/28663 | 2,096,313,865 | I_kwDOCUB6oc588zYJ | 28,663 | How to set stopping criteria in model.generate() when a certain word appear | {
"login": "pradeepdev-1995",
"id": 41164884,
"node_id": "MDQ6VXNlcjQxMTY0ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pradeepdev-1995",
"html_url": "https://github.com/pradeepdev-1995",
"followers_url": "https://api.github.com/users/pradeepdev-1995/followers",
"following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}",
"gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions",
"organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs",
"repos_url": "https://api.github.com/users/pradeepdev-1995/repos",
"events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-23T15:16:38 | 2024-01-23T15:20:24 | null | NONE | null | ### Feature request
How do I set a stopping criterion in model.generate() so that generation stops when a certain word appears?
The word at which I need to stop generation is: [/SENTENCE]
But the model doesn't generate the word as a single token; instead, it generates the subwords `[`, `/`, `SEN`, `TE`, `NC`, `E]`.
The corresponding ids from the tokenizer are
(id => subword):
28792 => [
28748 => /
28759 => SEN
2654 => TE
1197 => NC
28793 => E]
So how can I write a condition for **StoppingCriteriaList** that stops generation once [/SENTENCE] has been produced?
### Motivation
(Same as the feature request above.)
### Your contribution
(Same as the feature request above.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28663/timeline | null | null | null | null |
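One way to answer the question in the issue above is to subclass `StoppingCriteria` and compare the tail of the generated ids against the subword id sequence. This is a sketch against the public `transformers` generation API; the ids are the ones listed in the issue, and the exact return-type conventions of `__call__` vary slightly across `transformers` versions:

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList


class StopOnTokenSequence(StoppingCriteria):
    """Stop generation once every sequence in the batch ends with `stop_ids`."""

    def __init__(self, stop_ids):
        self.stop_ids = torch.tensor(stop_ids)

    def __call__(self, input_ids, scores, **kwargs):
        n = self.stop_ids.shape[0]
        if input_ids.shape[1] < n:
            return False
        tail = input_ids[:, -n:]
        # Returning a plain bool keeps this compatible with older
        # StoppingCriteriaList implementations.
        return bool((tail == self.stop_ids.to(input_ids.device)).all())


# Subword ids of "[/SENTENCE]" as reported in the issue:
stop_ids = [28792, 28748, 28759, 2654, 1197, 28793]
stopping_criteria = StoppingCriteriaList([StopOnTokenSequence(stop_ids)])
# model.generate(**inputs, stopping_criteria=stopping_criteria)
```

Note that with batch sizes larger than 1 this stops only when all rows end with the sequence; per-row early stopping needs newer `transformers` versions that accept a per-batch bool tensor from `__call__`.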
https://api.github.com/repos/huggingface/transformers/issues/28662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28662/comments | https://api.github.com/repos/huggingface/transformers/issues/28662/events | https://github.com/huggingface/transformers/issues/28662 | 2,096,100,605 | I_kwDOCUB6oc587_T9 | 28,662 | Training of GPT2 hang during Checkpoint stage | {
"login": "jchauhan",
"id": 74857,
"node_id": "MDQ6VXNlcjc0ODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/74857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jchauhan",
"html_url": "https://github.com/jchauhan",
"followers_url": "https://api.github.com/users/jchauhan/followers",
"following_url": "https://api.github.com/users/jchauhan/following{/other_user}",
"gists_url": "https://api.github.com/users/jchauhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jchauhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jchauhan/subscriptions",
"organizations_url": "https://api.github.com/users/jchauhan/orgs",
"repos_url": "https://api.github.com/users/jchauhan/repos",
"events_url": "https://api.github.com/users/jchauhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jchauhan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-23T13:30:47 | 2024-01-24T08:46:04 | null | NONE | null | ### System Info
**Env**
```
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?: xla_spwn script
GCP TPU v2.8 Architecture
```
**Libraries installed**
```
absl-py 2.1.0
accelerate 0.26.1
aiohttp 3.9.1
aiosignal 1.3.1
annotated-types 0.6.0
asttokens 2.4.1
async-timeout 4.0.3
attrs 23.2.0
bitsandbytes 0.42.0
cachetools 5.3.2
certifi 2023.11.17
charset-normalizer 3.3.2
cloud-tpu-client 0.10
datasets 2.16.1
decorator 5.1.1
deepspeed 0.13.0
dill 0.3.7
evaluate 0.4.1
exceptiongroup 1.2.0
executing 2.0.1
filelock 3.13.1
frozenlist 1.4.1
fsspec 2023.10.0
google-api-core 1.34.0
google-api-python-client 1.8.0
google-auth 2.26.2
google-auth-httplib2 0.2.0
googleapis-common-protos 1.62.0
hjson 3.1.0
httplib2 0.22.0
huggingface-hub 0.20.3
idna 3.6
install 1.3.5
ipython 8.20.0
jedi 0.19.1
Jinja2 3.1.3
joblib 1.3.2
libtpu-nightly 0.1.dev20230825+default
loralib 0.1.2
MarkupSafe 2.1.4
matplotlib-inline 0.1.6
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
ninja 1.11.1.1
numpy 1.26.3
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
oauth2client 4.1.3
packaging 23.2
pandas 2.2.0
parso 0.8.3
peft 0.7.2.dev0
pexpect 4.9.0
pillow 10.2.0
pip 21.2.3
prompt-toolkit 3.0.43
protobuf 3.20.3
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 15.0.0
pyarrow-hotfix 0.6
pyasn1 0.5.1
pyasn1-modules 0.3.0
pydantic 2.5.3
pydantic_core 2.14.6
Pygments 2.17.2
pynvml 11.5.0
pyparsing 3.1.1
python-dateutil 2.8.2
pytz 2023.3.post1
PyYAML 6.0.1
regex 2023.12.25
requests 2.31.0
responses 0.18.0
rsa 4.9
safetensors 0.4.2
scikit-learn 1.4.0
scipy 1.12.0
setuptools 57.4.0
six 1.16.0
sklearn 0.0
stack-data 0.6.3
sympy 1.12
threadpoolctl 3.2.0
tokenizers 0.15.1
torch 2.1.2
torch-xla 2.1.0
torchvision 0.16.2
tqdm 4.66.1
traitlets 5.14.1
transformers 4.38.0.dev0
triton 2.1.0
typing_extensions 4.9.0
tzdata 2023.4
uritemplate 3.0.1
urllib3 2.1.0
wcwidth 0.2.13
xxhash 3.4.1
yarl 1.9.4
```
### Who can help?
text models: @ArthurZucker and @younesbelkada
trainer: @muellerzr and @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Procure a GCP TPU v2.8 VM
2. Setup Transformer in a virtual env
3. run the training command similar to below
```
export PJRT_DEVICE=TPU
python ./transformers/examples/pytorch/xla_spawn.py --num_cores 8 ./transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path "gpt2" \
--train_file data.txt \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--do_train \
--output_dir my-gpt \
--overwrite_output_dir \
--log_level debug \
--save_steps 1000 \
--cache_dir ./cache/ \
--num_train_epochs 40
```
### Expected behavior
The trained model and checkpoint should be completed within a reasonable time (about 15 mins). Training finishes in 5 mins; however, checkpointing and saving the model never completes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28662/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28661/comments | https://api.github.com/repos/huggingface/transformers/issues/28661/events | https://github.com/huggingface/transformers/pull/28661 | 2,095,853,854 | PR_kwDOCUB6oc5k0xC- | 28,661 | [`Backbone`] Use `load_backbone` instead of `AutoBackbone.from_config` | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-23T11:21:33 | 2024-01-30T16:54:13 | 2024-01-30T16:54:09 | COLLABORATOR | null | # What does this PR do?
Uses `load_backbone` in place of `AutoBackbone.from_config` in the modeling files. This is the first part of a series of changes to enable loading timm or transformers models with the same call i.e. removing the if/else structure [we see in models like DETR](https://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/detr/modeling_detr.py#L345).
This forms part of the work to be able to load pretrained backbones from timm or transformers interchangeably into a new model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28661/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28661",
"html_url": "https://github.com/huggingface/transformers/pull/28661",
"diff_url": "https://github.com/huggingface/transformers/pull/28661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28661.patch",
"merged_at": "2024-01-30T16:54:09"
} |
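The per-model if/else that the PR above works toward removing, versus the unified `load_backbone` call, can be sketched schematically. The stub "backbones" below are placeholders standing in for `timm.create_model(...)` and `AutoBackbone.from_config(...)`, not the real loaders:

```python
from dataclasses import dataclass


@dataclass
class DemoBackboneConfig:
    use_timm_backbone: bool = False
    backbone: str = "resnet50"


def _create_timm_backbone(config):
    # Stand-in for the timm branch of the old if/else.
    return f"timm::{config.backbone}"


def _create_transformers_backbone(config):
    # Stand-in for AutoBackbone.from_config(config.backbone_config).
    return f"transformers::{config.backbone}"


def load_backbone(config):
    """Single entry point hiding the timm-vs-transformers dispatch."""
    if config.use_timm_backbone:
        return _create_timm_backbone(config)
    return _create_transformers_backbone(config)
```

With this helper, modeling files call `load_backbone(config)` once instead of repeating the branch in every model, which is what makes timm and transformers backbones interchangeable at load time.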
https://api.github.com/repos/huggingface/transformers/issues/28660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28660/comments | https://api.github.com/repos/huggingface/transformers/issues/28660/events | https://github.com/huggingface/transformers/pull/28660 | 2,095,691,451 | PR_kwDOCUB6oc5k0Nln | 28,660 | `tensor_size` - fix copy/paste error msg typo | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-23T09:56:00 | 2024-01-23T11:37:19 | 2024-01-23T11:22:02 | CONTRIBUTOR | null | ## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28660/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28660",
"html_url": "https://github.com/huggingface/transformers/pull/28660",
"diff_url": "https://github.com/huggingface/transformers/pull/28660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28660.patch",
"merged_at": "2024-01-23T11:22:02"
} |
https://api.github.com/repos/huggingface/transformers/issues/28659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28659/comments | https://api.github.com/repos/huggingface/transformers/issues/28659/events | https://github.com/huggingface/transformers/issues/28659 | 2,095,647,125 | I_kwDOCUB6oc586QmV | 28,659 | The newer tokenizer can not tokenize pad_token to pad_token_id | {
"login": "Magicalyz",
"id": 56778660,
"node_id": "MDQ6VXNlcjU2Nzc4NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/56778660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Magicalyz",
"html_url": "https://github.com/Magicalyz",
"followers_url": "https://api.github.com/users/Magicalyz/followers",
"following_url": "https://api.github.com/users/Magicalyz/following{/other_user}",
"gists_url": "https://api.github.com/users/Magicalyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Magicalyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Magicalyz/subscriptions",
"organizations_url": "https://api.github.com/users/Magicalyz/orgs",
"repos_url": "https://api.github.com/users/Magicalyz/repos",
"events_url": "https://api.github.com/users/Magicalyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Magicalyz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-23T09:34:20 | 2024-01-24T08:42:25 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu122 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Recently I upgraded transformers from 4.31.0 to 4.36.0, and found that the newer tokenizer can no longer tokenize the pad_token to pad_token_id, due to this code: https://github.com/huggingface/transformers/blob/039866094cb1c72f224049d4d006154ad0d6eda7/src/transformers/tokenization_utils.py#L600
```python
if tok_extended.single_word and left and left[-1] != " ":
tokens[i - 1] += token
tokens[i] = ""
elif tok_extended.single_word and right and right[0] != " ":
tokens[i + 1] = token + tokens[i + 1]
tokens[i] = ""
```
Here is my test code:
```python
tokenizer = AutoTokenizer.from_pretrained(
"baichuan-inc/Baichuan-13B-Chat", trust_remote_code=True
)
print(tokenizer("<s>This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1557, 31114, 31219, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 1170, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1]}
print(tokenizer("<s> This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
I'm wondering why there must be blank spaces around special tokens. Are there any rules to follow when adding the pad_token to text?
Thanks for your help!
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = AutoTokenizer.from_pretrained(
"baichuan-inc/Baichuan-13B-Chat", trust_remote_code=True
)
print(tokenizer("<s>This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1557, 31114, 31219, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 1170, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1]}
print(tokenizer("<s> This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
### Expected behavior
tokenize pad_token to pad_token_id | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28659/timeline | null | null | null | null |
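The behavior the issue above reports can be reproduced with a small stand-alone toy of the quoted snippet (a simplified re-implementation for illustration, not the library's actual code path): once the text has been split on the special token, a token flagged `single_word=True` is glued back onto a neighbour that touches it without a space, so `<s>This` never surfaces as the bos token while `<s> This` does.

```python
def respect_single_word(segments, special_token):
    """Toy version of the quoted logic for a single_word special token."""
    out = list(segments)
    for i, seg in enumerate(out):
        if seg != special_token:
            continue
        left = out[i - 1] if i > 0 else ""
        right = out[i + 1] if i + 1 < len(out) else ""
        if left and not left.endswith(" "):
            # Glue the special token back onto the preceding text.
            out[i - 1] += seg
            out[i] = ""
        elif right and not right.startswith(" "):
            # Glue it onto the following text instead.
            out[i + 1] = seg + out[i + 1]
            out[i] = ""
    return [s for s in out if s]


# "<s>" stuck to the following word is merged back into plain text, so it is
# later tokenized as ordinary characters rather than as the bos token:
print(respect_single_word(["", "<s>", "This is a test"], "<s>"))
# With a space it survives as a standalone special token:
print(respect_single_word(["", "<s>", " This is a test"], "<s>"))
```

If the old behavior is wanted, re-adding the special token as a `tokenizers.AddedToken` with `single_word=False` is one thing to try, though the exact behavior may differ across `transformers` versions.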
https://api.github.com/repos/huggingface/transformers/issues/28658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28658/comments | https://api.github.com/repos/huggingface/transformers/issues/28658/events | https://github.com/huggingface/transformers/issues/28658 | 2,095,474,798 | I_kwDOCUB6oc585mhu | 28,658 | OSError: ../../../../models/Yi-VL-34B/ does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/../../../../models/Yi-VL-34B//main' for available files. | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2024-01-23T08:02:19 | 2024-01-25T15:04:10 | null | NONE | null | OSError: ../../../../models/Yi-VL-34B/ does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/../../../../models/Yi-VL-34B//main' for available files. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28658/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28657/comments | https://api.github.com/repos/huggingface/transformers/issues/28657/events | https://github.com/huggingface/transformers/pull/28657 | 2,095,315,464 | PR_kwDOCUB6oc5ky8YW | 28,657 | Add token_type_ids to Esm tokenizer | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2024-01-23T06:14:35 | 2024-01-25T16:43:34 | null | NONE | null | # What does this PR do?
Enables `EsmTokenizer` to correctly return `token_type_ids`; previously, special tokens were ignored. `create_token_type_ids_from_sequences` was adapted from `BertTokenizer`, using `eos` instead of `sep`, per ESM's special tokens.
This
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 14
```
To this
`len(tokens.input_ids), len(tokens.token_type_ids) # 17, 17 `
Fixes [#28656](https://github.com/huggingface/transformers/issues/28656)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28657/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28657",
"html_url": "https://github.com/huggingface/transformers/pull/28657",
"diff_url": "https://github.com/huggingface/transformers/pull/28657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28657.patch",
"merged_at": null
} |
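The BERT-style scheme the PR above describes can be sketched as a stand-alone function (a simplified version for illustration; ESM's real cls/eos ids come from its vocabulary, and the ids used below are placeholders):

```python
def create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None,
                                         cls_token_id=0, eos_token_id=2):
    """Segment ids for <cls> A <eos> (B <eos>): BERT-style, but with eos
    in place of sep, matching ESM's special-token layout."""
    cls = [cls_token_id]
    eos = [eos_token_id]
    if token_ids_1 is None:
        return [0] * len(cls + token_ids_0 + eos)
    return [0] * len(cls + token_ids_0 + eos) + [1] * len(token_ids_1 + eos)


# Two 7-residue sequences -> 1 + 7 + 1 + 7 + 1 = 17 ids, matching input_ids:
print(len(create_token_type_ids_from_sequences(list(range(7)), list(range(7)))))
# -> 17
```

With the special tokens counted in, `len(token_type_ids)` matches `len(input_ids)` for both single sequences and pairs, which is the fix the linked issue asks for.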
https://api.github.com/repos/huggingface/transformers/issues/28656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28656/comments | https://api.github.com/repos/huggingface/transformers/issues/28656/events | https://github.com/huggingface/transformers/issues/28656 | 2,095,309,779 | I_kwDOCUB6oc584-PT | 28,656 | EsmTokenizer does not return correct length token type ids | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-23T06:09:10 | 2024-01-23T11:45:48 | null | NONE | null | ### System Info
transformers 4.37
python 3.10.11
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
EsmTokenizer ignores special tokens when generating token_type_ids:
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 14
```
The behavior is the same for return_special_tokens=True or False.
I understand ESM does not natively use token_type_ids, but some personal versions and upcoming contributions to the field do, and they could benefit from EsmTokenizer returning correct token_type_ids.
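For reference, a minimal pure-Python sketch of the expected lengths (the `<cls> seq_a <eos> seq_b <eos>` pair layout is an assumption for illustration, not taken from the tokenizer source):

```python
def esm_pair_token_type_ids(len_a: int, len_b: int) -> list:
    """Token type ids for a sequence pair, counting special tokens.

    Assumed layout: <cls> seq_a <eos> seq_b <eos>
    Segment 0 covers <cls> + seq_a + <eos>; segment 1 covers seq_b + <eos>.
    """
    segment_a = [0] * (1 + len_a + 1)
    segment_b = [1] * (len_b + 1)
    return segment_a + segment_b

ids = esm_pair_token_type_ids(7, 7)
print(len(ids))  # 17, matching len(tokens.input_ids) for two 7-residue sequences
```

Under this assumed layout, the token_type_ids length matches the input_ids length (17), rather than the 14 currently returned.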
### Expected behavior
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 17
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28656/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28655/comments | https://api.github.com/repos/huggingface/transformers/issues/28655/events | https://github.com/huggingface/transformers/pull/28655 | 2,094,838,316 | PR_kwDOCUB6oc5kxVh7 | 28,655 | Bump pillow from 10.0.1 to 10.2.0 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410... | closed | false | null | [] | null | 3 | 2024-01-22T22:02:08 | 2024-01-23T11:41:12 | 2024-01-23T11:40:56 | CONTRIBUTOR | null | Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.1 to 10.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/releases">pillow's releases</a>.</em></p>
<blockquote>
<h2>10.2.0</h2>
<p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html">https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html</a></p>
<h2>Changes</h2>
<ul>
<li>Add <code>keep_rgb</code> option when saving JPEG to prevent conversion of RGB colorspace <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a> [<a href="https://github.com/bgilbert"><code>@bgilbert</code></a>]</li>
<li>Trim negative glyph offsets in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Removed unnecessary "pragma: no cover" <a href="https://redirect.github.com/python-pillow/Pillow/issues/7668">#7668</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Trim glyph size in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7669">#7669</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fix loading IPTC images and update test <a href="https://redirect.github.com/python-pillow/Pillow/issues/7667">#7667</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Allow uncompressed TIFF images to be saved in chunks <a href="https://redirect.github.com/python-pillow/Pillow/issues/7650">#7650</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Concatenate multiple JPEG EXIF markers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7496">#7496</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Changed IPTC tile tuple to match other plugins <a href="https://redirect.github.com/python-pillow/Pillow/issues/7661">#7661</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Do not assign new fp attribute when exiting context manager <a href="https://redirect.github.com/python-pillow/Pillow/issues/7566">#7566</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support arbitrary masks for uncompressed RGB DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7589">#7589</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support setting ROWSPERSTRIP tag <a href="https://redirect.github.com/python-pillow/Pillow/issues/7654">#7654</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Apply ImageFont.MAX_STRING_LENGTH to ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7662">#7662</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Optimise <code>ImageColor</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7657">#7657</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Restricted environment keys for ImageMath.eval() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7655">#7655</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Optimise <code>ImageMode.getmode</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7641">#7641</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Added trusted PyPI publishing <a href="https://redirect.github.com/python-pillow/Pillow/issues/7616">#7616</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Compile FriBiDi for Windows ARM64 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7629">#7629</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Fix incorrect color blending for overlapping glyphs <a href="https://redirect.github.com/python-pillow/Pillow/issues/7497">#7497</a> [<a href="https://github.com/ZachNagengast"><code>@ZachNagengast</code></a>]</li>
<li>Add .git-blame-ignore-revs file <a href="https://redirect.github.com/python-pillow/Pillow/issues/7528">#7528</a> [<a href="https://github.com/akx"><code>@akx</code></a>]</li>
<li>Attempt memory mapping when tile args is a string <a href="https://redirect.github.com/python-pillow/Pillow/issues/7565">#7565</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fill identical pixels with transparency in subsequent frames when saving GIF <a href="https://redirect.github.com/python-pillow/Pillow/issues/7568">#7568</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Removed unnecessary string length check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7560">#7560</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Determine mask mode in Python instead of C <a href="https://redirect.github.com/python-pillow/Pillow/issues/7548">#7548</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Corrected duration when combining multiple GIF frames into single frame <a href="https://redirect.github.com/python-pillow/Pillow/issues/7521">#7521</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Handle disposing GIF background from outside palette <a href="https://redirect.github.com/python-pillow/Pillow/issues/7515">#7515</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Seek past the data when skipping a PSD layer <a href="https://redirect.github.com/python-pillow/Pillow/issues/7483">#7483</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>ImageMath: Inline <code>isinstance</code> check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7623">#7623</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Update actions/upload-artifact action to v4 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7619">#7619</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Import plugins relative to the module <a href="https://redirect.github.com/python-pillow/Pillow/issues/7576">#7576</a> [<a href="https://github.com/deliangyang"><code>@deliangyang</code></a>]</li>
<li>Translate encoder error codes to strings; deprecate <code>ImageFile.raise_oserror()</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7609">#7609</a> [<a href="https://github.com/bgilbert"><code>@bgilbert</code></a>]</li>
<li>Updated readthedocs to latest version of Python <a href="https://redirect.github.com/python-pillow/Pillow/issues/7611">#7611</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support reading BC4U and DX10 BC1 images <a href="https://redirect.github.com/python-pillow/Pillow/issues/6486">#6486</a> [<a href="https://github.com/REDxEYE"><code>@REDxEYE</code></a>]</li>
<li>Optimize ImageStat.Stat.extrema <a href="https://redirect.github.com/python-pillow/Pillow/issues/7593">#7593</a> [<a href="https://github.com/florath"><code>@florath</code></a>]</li>
<li>Handle pathlib.Path in FreeTypeFont <a href="https://redirect.github.com/python-pillow/Pillow/issues/7578">#7578</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Use list comprehensions to create transformed lists <a href="https://redirect.github.com/python-pillow/Pillow/issues/7597">#7597</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Added support for reading DX10 BC4 DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7603">#7603</a> [<a href="https://github.com/sambvfx"><code>@sambvfx</code></a>]</li>
<li>Optimized ImageStat.Stat.count <a href="https://redirect.github.com/python-pillow/Pillow/issues/7599">#7599</a> [<a href="https://github.com/florath"><code>@florath</code></a>]</li>
<li>Moved error from truetype() to FreeTypeFont <a href="https://redirect.github.com/python-pillow/Pillow/issues/7587">#7587</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Correct PDF palette size when saving <a href="https://redirect.github.com/python-pillow/Pillow/issues/7555">#7555</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fixed closing file pointer with olefile 0.47 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7594">#7594</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>ruff: Minor optimizations of list comprehensions, x in set, etc. <a href="https://redirect.github.com/python-pillow/Pillow/issues/7524">#7524</a> [<a href="https://github.com/cclauss"><code>@cclauss</code></a>]</li>
<li>Build Windows wheels using cibuildwheel <a href="https://redirect.github.com/python-pillow/Pillow/issues/7580">#7580</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Raise ValueError when TrueType font size is zero or less <a href="https://redirect.github.com/python-pillow/Pillow/issues/7584">#7584</a> [<a href="https://github.com/akx"><code>@akx</code></a>]</li>
<li>Install cibuildwheel from requirements file <a href="https://redirect.github.com/python-pillow/Pillow/issues/7581">#7581</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst">pillow's changelog</a>.</em></p>
<blockquote>
<h2>10.2.0 (2024-01-02)</h2>
<ul>
<li>
<p>Add <code>keep_rgb</code> option when saving JPEG to prevent conversion of RGB colorspace <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a>
[bgilbert, radarhere]</p>
</li>
<li>
<p>Trim glyph size in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7669">#7669</a>, <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a>
[radarhere, nulano]</p>
</li>
<li>
<p>Deprecate IptcImagePlugin helpers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7664">#7664</a>
[nulano, hugovk, radarhere]</p>
</li>
<li>
<p>Allow uncompressed TIFF images to be saved in chunks <a href="https://redirect.github.com/python-pillow/Pillow/issues/7650">#7650</a>
[radarhere]</p>
</li>
<li>
<p>Concatenate multiple JPEG EXIF markers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7496">#7496</a>
[radarhere]</p>
</li>
<li>
<p>Changed IPTC tile tuple to match other plugins <a href="https://redirect.github.com/python-pillow/Pillow/issues/7661">#7661</a>
[radarhere]</p>
</li>
<li>
<p>Do not assign new fp attribute when exiting context manager <a href="https://redirect.github.com/python-pillow/Pillow/issues/7566">#7566</a>
[radarhere]</p>
</li>
<li>
<p>Support arbitrary masks for uncompressed RGB DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7589">#7589</a>
[radarhere, akx]</p>
</li>
<li>
<p>Support setting ROWSPERSTRIP tag <a href="https://redirect.github.com/python-pillow/Pillow/issues/7654">#7654</a>
[radarhere]</p>
</li>
<li>
<p>Apply ImageFont.MAX_STRING_LENGTH to ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7662">#7662</a>
[radarhere]</p>
</li>
<li>
<p>Optimise <code>ImageColor</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7657">#7657</a>
[hugovk]</p>
</li>
<li>
<p>Restricted environment keys for ImageMath.eval() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7655">#7655</a>
[wiredfool, radarhere]</p>
</li>
<li>
<p>Optimise <code>ImageMode.getmode</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7641">#7641</a>
[hugovk, radarhere]</p>
</li>
<li>
<p>Fix incorrect color blending for overlapping glyphs <a href="https://redirect.github.com/python-pillow/Pillow/issues/7497">#7497</a>
[ZachNagengast, nulano, radarhere]</p>
</li>
<li>
<p>Attempt memory mapping when tile args is a string <a href="https://redirect.github.com/python-pillow/Pillow/issues/7565">#7565</a>
[radarhere]</p>
</li>
<li>
<p>Fill identical pixels with transparency in subsequent frames when saving GIF <a href="https://redirect.github.com/python-pillow/Pillow/issues/7568">#7568</a>
[radarhere]</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/python-pillow/Pillow/commit/6956d0b2853f5c7ec5f6ec4c60725c5a7ee73aeb"><code>6956d0b</code></a> 10.2.0 version bump</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/31c8dacdc727673e9099f1ac86019714cdccec67"><code>31c8dac</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7675">#7675</a> from python-pillow/pre-commit-ci-update-config</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/40a3f91af2c78870676a13629b5902bab4ab4cf0"><code>40a3f91</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7674">#7674</a> from nulano/url-example</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/cb41b0cc78eeefbd9ed2ce8c10f8d6d4c405a706"><code>cb41b0c</code></a> [pre-commit.ci] pre-commit autoupdate</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/de62b25ed318f1604aa4ccd6f942a04c6b2c8b59"><code>de62b25</code></a> fix image url in "Reading from URL" example</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/7c526a6c6bdc7cb947f0aee1d1ee17c266ff6c61"><code>7c526a6</code></a> Update CHANGES.rst [ci skip]</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/d93a5ad70bf94dbb63bdbfb19491a02976574d6d"><code>d93a5ad</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a> from bgilbert/jpeg-rgb</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/aed764fe8404926472499208a39e5bf90d861b2a"><code>aed764f</code></a> Update CHANGES.rst [ci skip]</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/f8df5303fa9daf40cf8bfe232403cb40389d8f8f"><code>f8df530</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a> from nulano/imagefont-negative-crop</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/24e9485e6bb733a1a816f228dc75fd0086a93e19"><code>24e9485</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7671">#7671</a> from radarhere/imagetransform</li>
<li>Additional commits viewable in <a href="https://github.com/python-pillow/Pillow/compare/10.0.1...10.2.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28655/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28655",
"html_url": "https://github.com/huggingface/transformers/pull/28655",
"diff_url": "https://github.com/huggingface/transformers/pull/28655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28655.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28654/comments | https://api.github.com/repos/huggingface/transformers/issues/28654/events | https://github.com/huggingface/transformers/pull/28654 | 2,094,764,082 | PR_kwDOCUB6oc5kxFIV | 28,654 | Add Depth Anything | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-22T21:14:38 | 2024-01-25T08:34:51 | 2024-01-25T08:34:50 | CONTRIBUTOR | null | # What does this PR do?
This PR adds an alternative design to #28643, which adds a standalone separate model.
Pros:
- [x] does not clutter the existing modeling_dpt.py
- [x] is more in line with the [philosophy](https://huggingface.co/blog/transformers-design-philosophy)
Cons:
- [x] actually, not a lot :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28654/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28654",
"html_url": "https://github.com/huggingface/transformers/pull/28654",
"diff_url": "https://github.com/huggingface/transformers/pull/28654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28654.patch",
"merged_at": "2024-01-25T08:34:50"
} |
https://api.github.com/repos/huggingface/transformers/issues/28653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28653/comments | https://api.github.com/repos/huggingface/transformers/issues/28653/events | https://github.com/huggingface/transformers/pull/28653 | 2,094,695,897 | PR_kwDOCUB6oc5kw2Si | 28,653 | integrations: fix DVCLiveCallback model logging | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T20:29:11 | 2024-01-23T09:11:10 | 2024-01-23T09:11:10 | CONTRIBUTOR | null | # What does this PR do?
Fixes issues with `HF_DVCLIVE_LOG_MODEL` environment variable not always being respected.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr Could you please take a look when you have a chance? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28653/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28653",
"html_url": "https://github.com/huggingface/transformers/pull/28653",
"diff_url": "https://github.com/huggingface/transformers/pull/28653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28653.patch",
"merged_at": "2024-01-23T09:11:10"
} |
https://api.github.com/repos/huggingface/transformers/issues/28652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28652/comments | https://api.github.com/repos/huggingface/transformers/issues/28652/events | https://github.com/huggingface/transformers/pull/28652 | 2,094,694,394 | PR_kwDOCUB6oc5kw19J | 28,652 | [WIP] VMamba implementation | {
"login": "dmus",
"id": 464378,
"node_id": "MDQ6VXNlcjQ2NDM3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/464378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmus",
"html_url": "https://github.com/dmus",
"followers_url": "https://api.github.com/users/dmus/followers",
"following_url": "https://api.github.com/users/dmus/following{/other_user}",
"gists_url": "https://api.github.com/users/dmus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmus/subscriptions",
"organizations_url": "https://api.github.com/users/dmus/orgs",
"repos_url": "https://api.github.com/users/dmus/repos",
"events_url": "https://api.github.com/users/dmus/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 12 | 2024-01-22T20:28:11 | 2024-01-31T20:49:37 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28606
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28652/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28652/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28652",
"html_url": "https://github.com/huggingface/transformers/pull/28652",
"diff_url": "https://github.com/huggingface/transformers/pull/28652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28652.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28651/comments | https://api.github.com/repos/huggingface/transformers/issues/28651/events | https://github.com/huggingface/transformers/issues/28651 | 2,094,681,853 | I_kwDOCUB6oc582k79 | 28,651 | Memory consumption for inference with Llama2-7B is weird | {
"login": "c3ianwu",
"id": 92783433,
"node_id": "U_kgDOBYfDSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92783433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c3ianwu",
"html_url": "https://github.com/c3ianwu",
"followers_url": "https://api.github.com/users/c3ianwu/followers",
"following_url": "https://api.github.com/users/c3ianwu/following{/other_user}",
"gists_url": "https://api.github.com/users/c3ianwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c3ianwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c3ianwu/subscriptions",
"organizations_url": "https://api.github.com/users/c3ianwu/orgs",
"repos_url": "https://api.github.com/users/c3ianwu/repos",
"events_url": "https://api.github.com/users/c3ianwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/c3ianwu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-22T20:20:16 | 2024-01-30T04:58:42 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada @Gan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to track GPU memory consumption when doing inference with Llama2-7B. This is my set-up:
```
import json
import tqdm
import warnings
warnings.filterwarnings('ignore')
import time
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import datasets
import matplotlib.pyplot as plt
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.bfloat16)
model.to(device=0)
prompt_data = datasets.load_from_disk("/data/metamath_100k_2048/train") # this is just some supervised training text data
prompts = prompt_data["inputs"] # this is a list of strings
class LocalModel:
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def generate(self, prompts, do_sample=False, temperature=0, top_k=0, top_p=0, repetition_penalty=1.0, max_new_tokens=128):
        self.tokenizer.pad_token = self.tokenizer.eos_token
        tokenized_inputs = self.tokenizer(prompts, return_tensors="pt", padding=True).to(self.model.device)
        inputs = tokenized_inputs["input_ids"]
        attention_mask = tokenized_inputs["attention_mask"]
        tic = time.time()
        logits = self.model.generate(input_ids=inputs,
                                     attention_mask=attention_mask,
                                     do_sample=do_sample,
                                     temperature=temperature,
                                     top_k=top_k,
                                     top_p=top_p,
                                     repetition_penalty=repetition_penalty,
                                     max_new_tokens=max_new_tokens)
        max_alloc = torch.cuda.max_memory_allocated(0) / 1e9
        print("Peak GPU Memory Consumption: {}".format(torch.cuda.max_memory_allocated(0) / 1e9))
        torch.cuda.reset_peak_memory_stats(0)
        toc = time.time()
        print("Time for generation: {}".format(toc - tic))
        return max_alloc
```
I ran
```
local_model = LocalModel(model, tokenizer)
alloc = []
x = [0, 2, 4, 6, 8, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]
for i in x:
    alloc.append(local_model.generate(prompts[:64], max_new_tokens=i))
plt.scatter(x, alloc)
plt.xlabel("Max New Tokens")
plt.ylabel("Peak Mem Usage / GB")
plt.show()
```
This is the plot:
<img width="580" alt="Screenshot 2024-01-22 at 20 00 36" src="https://github.com/huggingface/transformers/assets/92783433/22036995-a80f-46bf-9c44-e6f6329486b0">
### Expected behavior
I tried to compute theoretical numbers. I estimated the number of input tokens:
```
def calculate_prompt_tokens(tokenizer, prompts, batch_size):
    tokenizer.pad_token = tokenizer.eos_token
    tokens = tokenizer(prompts[:batch_size], return_tensors="pt", padding=True)
    return tokens["input_ids"].shape[0] * tokens["input_ids"].shape[1]
calculate_prompt_tokens(tokenizer, prompts, batch_size=64)
```
which returns 12992. Taking the model to be 7B params ~ 14GB in bf16, and assuming that the kv cache consumes `4*num_layers*d_model = 4*32*4096 = 524,288 bytes/token`, we get an estimated `14 + (12992*524288)*1e-9 = 20.8GB` before anything is generated, which looks about right from the graph.
Using the same logic, we know that each additional generation step should cost (via the kv cache) `524,288*64 = 0.0034GB / step` of memory. Looking at the gradient of the linear portion of the plot, we get ~0.0067GB / step instead, which is around double the amount.
1. Why is the memory consumed for generation greater than expected?
2. What's going on in the early portion of the plot? Why is there a big jump at the start?
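For reference, the back-of-envelope estimate above can be reproduced with a minimal self-contained sketch (assuming Llama2-7B defaults — 32 layers, hidden size 4096 — and 2 bytes per cached value in bf16; `estimated_peak_gb` is an illustrative helper, not part of the scripts above):

```python
def kv_cache_bytes_per_token(num_layers=32, d_model=4096, bytes_per_value=2):
    # K and V each store d_model values per layer per token: 2 * 32 * 4096 * 2
    return 2 * num_layers * d_model * bytes_per_value


def estimated_peak_gb(total_tokens, weights_gb=14.0):
    # bf16 weights (~14 GB for 7B params) plus the KV cache for all cached tokens
    return weights_gb + total_tokens * kv_cache_bytes_per_token() / 1e9


print(kv_cache_bytes_per_token())          # 524288 bytes/token
print(round(estimated_peak_gb(12992), 1))  # ~20.8 GB before any generation
```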
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28651/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28650/comments | https://api.github.com/repos/huggingface/transformers/issues/28650/events | https://github.com/huggingface/transformers/pull/28650 | 2,094,606,858 | PR_kwDOCUB6oc5kwiw7 | 28,650 | [DO NOT MERGE] Testing safetensors 0.4.2 | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T19:29:25 | 2024-01-23T10:19:30 | 2024-01-23T10:19:30 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28650/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28650",
"html_url": "https://github.com/huggingface/transformers/pull/28650",
"diff_url": "https://github.com/huggingface/transformers/pull/28650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28650.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28649/comments | https://api.github.com/repos/huggingface/transformers/issues/28649/events | https://github.com/huggingface/transformers/issues/28649 | 2,094,580,013 | I_kwDOCUB6oc582MEt | 28,649 | 4.37 ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils' | {
"login": "erew123",
"id": 35898566,
"node_id": "MDQ6VXNlcjM1ODk4NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/35898566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erew123",
"html_url": "https://github.com/erew123",
"followers_url": "https://api.github.com/users/erew123/followers",
"following_url": "https://api.github.com/users/erew123/following{/other_user}",
"gists_url": "https://api.github.com/users/erew123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erew123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erew123/subscriptions",
"organizations_url": "https://api.github.com/users/erew123/orgs",
"repos_url": "https://api.github.com/users/erew123/repos",
"events_url": "https://api.github.com/users/erew123/events{/privacy}",
"received_events_url": "https://api.github.com/users/erew123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 15 | 2024-01-22T19:14:50 | 2024-01-24T16:48:49 | 2024-01-24T16:27:40 | NONE | null | ### System Info
I am a developer of AllTalk https://github.com/erew123/alltalk_tts/ which uses the Coqui TTS engine https://github.com/coqui-ai/TTS
As of the 4.37 update, I have users reporting this error:
```
Traceback (most recent call last):
  File "/home/ai/alltalk_tts/tts_server.py", line 7, in <module>
    from TTS.tts.configs.xtts_config import XttsConfig
  File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/configs/xtts_config.py", line 5, in <module>
    from TTS.tts.models.xtts import XttsArgs, XttsAudioConfig
  File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 12, in <module>
    from TTS.tts.layers.xtts.stream_generator import init_stream_support
  File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/layers/xtts/stream_generator.py", line 24, in <module>
    from transformers.generation.utils import GenerateOutput, SampleOutput, logger
ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils' (/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/transformers/generation/utils.py)
```
The issue is mainly this:
`ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils'`
Downgrading to 4.36.2 of Transformers makes things work fine again.
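For projects caught between the two versions, a common defensive pattern is to make such imports optional — a sketch only, not a suggestion that this is the right fix for Coqui TTS (`safe_import` is a hypothetical helper):

```python
def safe_import(module_name, attr_name):
    # return the attribute if it can be imported, otherwise None
    try:
        module = __import__(module_name, fromlist=[attr_name])
        return getattr(module, attr_name)
    except (ImportError, AttributeError):
        return None


# present on transformers <= 4.36.2, absent on 4.37; callers must handle None
SampleOutput = safe_import("transformers.generation.utils", "SampleOutput")
```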
I looked to see if this could be related to **Remove support for torch 1.10**, but could find no references to **SampleOutput** being part of that change.
Would you be able to confirm whether this is something that was dropped in 4.37, or an omission that will be resolved in a future update?
Thanks
### Who can help?
@sanchit-gandhi (Im guessing you may be the correct person as this is Speech, apologies if not).
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Use transformers 4.37 with the Coqui TTS engine and try to import their XTTS model.
### Expected behavior
Of course, for this model to import correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28649/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28648/comments | https://api.github.com/repos/huggingface/transformers/issues/28648/events | https://github.com/huggingface/transformers/pull/28648 | 2,094,285,376 | PR_kwDOCUB6oc5kvcR4 | 28,648 | [`TokenizationUtils`] add support for `split_special_tokens` | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-01-22T16:26:31 | 2024-01-22T16:26:55 | null | COLLABORATOR | null | # What does this PR do?
Adds support for `split_special_tokens` for fast models as well
- [ ] deprecate `split_special_tokens` for `encode_special_tokens` for API consistency
- [ ] make sure this is saved and used not only as kwargs but also the attribute
- [ ] add some tests
- [ ] add some docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28648/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28648",
"html_url": "https://github.com/huggingface/transformers/pull/28648",
"diff_url": "https://github.com/huggingface/transformers/pull/28648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28648.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28647/comments | https://api.github.com/repos/huggingface/transformers/issues/28647/events | https://github.com/huggingface/transformers/issues/28647 | 2,094,271,576 | I_kwDOCUB6oc581AxY | 28,647 | Why tokens / second is more on Float32 than Float16 | {
"login": "Anindyadeep",
"id": 58508471,
"node_id": "MDQ6VXNlcjU4NTA4NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58508471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anindyadeep",
"html_url": "https://github.com/Anindyadeep",
"followers_url": "https://api.github.com/users/Anindyadeep/followers",
"following_url": "https://api.github.com/users/Anindyadeep/following{/other_user}",
"gists_url": "https://api.github.com/users/Anindyadeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anindyadeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anindyadeep/subscriptions",
"organizations_url": "https://api.github.com/users/Anindyadeep/orgs",
"repos_url": "https://api.github.com/users/Anindyadeep/repos",
"events_url": "https://api.github.com/users/Anindyadeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anindyadeep/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-22T16:19:01 | 2024-01-26T07:40:45 | null | CONTRIBUTOR | null | ### System Info
```
- `transformers` version: 4.34.1
- Platform: Linux-5.4.0-169-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import time
import torch
import numpy as np
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
@torch.inference_mode()
def benchmark_gptq(dtype, prompt = "Hello, what do you know about transformers", repetitions = 10):
    model_path = "./models/llama-2-7b-autogptq"
    quantization_config = GPTQConfig(
        bits=4,
        group_size=128,
        desc_act=False, use_exllama=False,
        use_cuda_fp16=True if dtype == torch.float16 else False
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        quantization_config=quantization_config,
        torch_dtype=dtype,
        device_map='cuda:0'
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    tokenized_input = tokenizer.encode(prompt, return_tensors='pt').to('cuda:0')
    results = []
    print("STARTING TO BENCHMARK FOR: ", "fp16" if dtype == torch.float16 else "fp32", "\n")
    for i in range(repetitions):
        print("....")
        start = time.time()
        output = model.generate(input_ids = tokenized_input, max_new_tokens = 100).detach().cpu().numpy()
        delta = time.time() - start
        results.append(
            len(output[0]) / delta
        )
    return np.mean(results)


if __name__ == '__main__':
    print("FP-16: ", benchmark_gptq(dtype=torch.float16))
    print("FP-32: ", benchmark_gptq(dtype=torch.float32))
```
### Expected behavior
This was the output:
```
FP-16: 39.50591397818114
FP-32: 49.23083222100881
```
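One caveat when reading these numbers: `model.generate` returns the prompt plus the newly generated tokens, so `len(output[0]) / delta` counts prompt tokens toward throughput as well. A minimal sketch of isolating decode-only throughput (`decode_tokens_per_second` is a hypothetical helper, not from the script above):

```python
def decode_tokens_per_second(total_len, prompt_len, seconds):
    # only the newly generated tokens count toward decode throughput
    return (total_len - prompt_len) / seconds


# e.g. 100 new tokens on top of an 8-token prompt in 2 seconds -> 50.0 tok/s
print(decode_tokens_per_second(108, 8, 2.0))
```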
Please note: The metric which is used here is: `tokens/sec`
But shouldn't the expected result be the reverse? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28647/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28646/comments | https://api.github.com/repos/huggingface/transformers/issues/28646/events | https://github.com/huggingface/transformers/pull/28646 | 2,094,253,001 | PR_kwDOCUB6oc5kvVOB | 28,646 | improve efficient training on CPU documentation | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-22T16:09:01 | 2024-01-24T17:07:14 | 2024-01-24T17:07:14 | CONTRIBUTOR | null | ## What does this PR do?
This PR improves the CPU efficient training documentation to make it more clear, accurate and up-to-date. Concrete improvements are
- add full names of the CPU instruction sets (e.g. Intel® Advanced Vector Extensions 512 instead of AVX-512)
- add one sentence to explain "mixed precision"
- add OOB mixed precision training using BF16 and further improvement with IPEX | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28646/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28646",
"html_url": "https://github.com/huggingface/transformers/pull/28646",
"diff_url": "https://github.com/huggingface/transformers/pull/28646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28646.patch",
"merged_at": "2024-01-24T17:07:14"
} |
https://api.github.com/repos/huggingface/transformers/issues/28645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28645/comments | https://api.github.com/repos/huggingface/transformers/issues/28645/events | https://github.com/huggingface/transformers/pull/28645 | 2,093,983,567 | PR_kwDOCUB6oc5kua1I | 28,645 | [`GPTNeoX`] Fix GPTNeoX + Flash Attention 2 issue | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T13:58:51 | 2024-01-22T14:50:02 | 2024-01-22T14:50:01 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/28613
Indeed, probably due to copy-pasta, the `target_dtype` was being inferred from the wrong attribute
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28645/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28645",
"html_url": "https://github.com/huggingface/transformers/pull/28645",
"diff_url": "https://github.com/huggingface/transformers/pull/28645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28645.patch",
"merged_at": "2024-01-22T14:50:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/28644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28644/comments | https://api.github.com/repos/huggingface/transformers/issues/28644/events | https://github.com/huggingface/transformers/pull/28644 | 2,093,953,365 | PR_kwDOCUB6oc5kuULr | 28,644 | small doc update for CamemBERT | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T13:43:58 | 2024-01-29T14:46:34 | 2024-01-29T14:46:33 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28644/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28644",
"html_url": "https://github.com/huggingface/transformers/pull/28644",
"diff_url": "https://github.com/huggingface/transformers/pull/28644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28644.patch",
"merged_at": "2024-01-29T14:46:33"
} |
https://api.github.com/repos/huggingface/transformers/issues/28643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28643/comments | https://api.github.com/repos/huggingface/transformers/issues/28643/events | https://github.com/huggingface/transformers/pull/28643 | 2,093,924,305 | PR_kwDOCUB6oc5kuNzt | 28,643 | Convert Depth Anything | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-22T13:28:57 | 2024-01-23T21:02:34 | 2024-01-23T21:02:34 | CONTRIBUTOR | null | # What does this PR do?
[Depth Anything](https://twitter.com/_akhaliq/status/1749284669936275463) came out, and it is compatible with our implementation of DPT. It leverages DINOv2 as backbone.
It does use a small tweak in the decoder, where it sets `size` instead of `scale_factor` when interpolating.
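For context on that tweak, a minimal sketch of how the two arguments resolve the output resolution (pure Python, mirroring `torch.nn.functional.interpolate`'s default `floor(input * scale_factor)` behaviour; `out_size` is an illustrative helper, not the actual decoder code):

```python
import math


def out_size(in_size, size=None, scale_factor=None):
    # `size` pins an exact output resolution; `scale_factor` derives it from input
    if size is not None:
        return size
    return math.floor(in_size * scale_factor)


print(out_size(37, scale_factor=2))  # 74
print(out_size(37, size=64))         # 64 -- an exact target, regardless of input
```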
Demo notebook: https://colab.research.google.com/drive/1tHrdu4TY6f_oTXJbqPUn2DVoSKxbNa3O?usp=sharing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28643/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28643",
"html_url": "https://github.com/huggingface/transformers/pull/28643",
"diff_url": "https://github.com/huggingface/transformers/pull/28643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28643.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28642/comments | https://api.github.com/repos/huggingface/transformers/issues/28642/events | https://github.com/huggingface/transformers/pull/28642 | 2,093,923,569 | PR_kwDOCUB6oc5kuNpW | 28,642 | Set correct dtypes for ONNX quantization | {
"login": "severinsimmler",
"id": 16133277,
"node_id": "MDQ6VXNlcjE2MTMzMjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16133277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severinsimmler",
"html_url": "https://github.com/severinsimmler",
"followers_url": "https://api.github.com/users/severinsimmler/followers",
"following_url": "https://api.github.com/users/severinsimmler/following{/other_user}",
"gists_url": "https://api.github.com/users/severinsimmler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severinsimmler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severinsimmler/subscriptions",
"organizations_url": "https://api.github.com/users/severinsimmler/orgs",
"repos_url": "https://api.github.com/users/severinsimmler/repos",
"events_url": "https://api.github.com/users/severinsimmler/events{/privacy}",
"received_events_url": "https://api.github.com/users/severinsimmler/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-22T13:28:33 | 2024-01-25T14:21:48 | null | CONTRIBUTOR | null | # What does this PR do?
It's currently not possible to quantize an ONNX model with `transformers.convert_graph_to_onnx`.
Running the following snippet on `main`:
```python
from pathlib import Path
from transformers import pipeline
from transformers.convert_graph_to_onnx import convert_pytorch, quantize
# load an NER model
nlp = pipeline(task="ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
# name of the onnx file to be exported
output = Path("model.onnx")
# first transform pytorch to onnx model
convert_pytorch(nlp, output=output, opset=11, use_external_format=False)
# onnx model can now be quantized
quantize(output)
```
will result in:
```
Traceback (most recent call last):
File "/home/severin/git/transformers/test-quantization.py", line 12, in <module>
quantized_model = quantize(output)
^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/src/transformers/convert_graph_to_onnx.py", line 472, in quantize
quantizer.quantize_model()
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 310, in quantize_model
op_quantizer.quantize()
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/operators/gather.py", line 29, in quantize
) = self.quantizer.quantize_activation(node, [0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 825, in quantize_activation
return self.__quantize_inputs(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 915, in __quantize_inputs
q_weight_name, zp_name, scale_name = self.quantize_initializer(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 995, in quantize_initializer
_, _, zero_point, scale, q_weight_data = quantize_data(
^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/quant_utils.py", line 277, in quantize_data
raise ValueError(f"Unexpected value for qType={qType}.")
ValueError: Unexpected value for qType=False.
```
The values are updated in this PR to be consistent with the default values in `optimum` (see [this](https://github.com/huggingface/optimum/blob/bb7b71a2c1f9c9220845b258afd88a5c1a24c013/optimum/onnxruntime/quantization.py#L367-L368) and also [this](https://github.com/huggingface/optimum/blob/bb7b71a2c1f9c9220845b258afd88a5c1a24c013/optimum/onnxruntime/configuration.py#L275-L277)).
Running the snippet from above in my branch outputs as expected:
```
Quantized model has been written at model-quantized.onnx: ✔
```
Tested with `onnxruntime` 1.16.3 (on Python 3.11.6) and 1.12.1 (on Python 3.10.13).
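For context, the `ValueError` above is easy to reproduce in isolation: onnxruntime validates the quantization type against its `QuantType` enum, and a bare boolean (the old default value) falls through to the error branch. The enum and function below are simplified stand-ins for illustration, not the real onnxruntime internals:

```python
from enum import Enum


class QuantType(Enum):
    """Simplified stand-in for onnxruntime.quantization.QuantType."""
    QInt8 = 0
    QUInt8 = 1


def quantize_data(qtype):
    # mirrors the validation step sketched in the traceback above:
    # anything that is not a QuantType member falls through to this error
    if not isinstance(qtype, QuantType):
        raise ValueError(f"Unexpected value for qType={qtype}.")
    return "quantized"


print(quantize_data(QuantType.QUInt8))  # quantized
try:
    quantize_data(False)  # the old default: a boolean where a QuantType belongs
except ValueError as err:
    print(err)  # Unexpected value for qType=False.
```

Passing a proper `QuantType` member, as the updated defaults do, avoids the error entirely.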
## Who can review?
@SunMarc and @younesbelkada (neither `bitsandbytes` nor `autogpt`, but quantization nonetheless)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28642/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28642",
"html_url": "https://github.com/huggingface/transformers/pull/28642",
"diff_url": "https://github.com/huggingface/transformers/pull/28642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28642.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28641/comments | https://api.github.com/repos/huggingface/transformers/issues/28641/events | https://github.com/huggingface/transformers/issues/28641 | 2,093,895,845 | I_kwDOCUB6oc58zlCl | 28,641 | Qwen2 weights are not there/deleted? | {
"login": "aliencaocao",
"id": 20109683,
"node_id": "MDQ6VXNlcjIwMTA5Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/20109683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliencaocao",
"html_url": "https://github.com/aliencaocao",
"followers_url": "https://api.github.com/users/aliencaocao/followers",
"following_url": "https://api.github.com/users/aliencaocao/following{/other_user}",
"gists_url": "https://api.github.com/users/aliencaocao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliencaocao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliencaocao/subscriptions",
"organizations_url": "https://api.github.com/users/aliencaocao/orgs",
"repos_url": "https://api.github.com/users/aliencaocao/repos",
"events_url": "https://api.github.com/users/aliencaocao/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliencaocao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-22T13:14:13 | 2024-01-23T15:37:50 | null | CONTRIBUTOR | null | ### System Info
https://huggingface.co/Qwen2/Qwen2-7B-Chat-beta gives a 404, despite the tutorial in https://huggingface.co/docs/transformers/main/model_doc/qwen2 quoting it.
https://huggingface.co/Qwen only has Qwen-1 models.
@ArthurZucker @younesbelkada @stevhliu
By the way, the docs are inconsistent in the model path. Most pages use https://huggingface.co/Qwen2/Qwen2-7B-beta, but the docs for the [config class](https://huggingface.co/docs/transformers/main/model_doc/qwen2#transformers.Qwen2Config) use https://huggingface.co/Qwen/Qwen2-7B-beta, which also does not exist.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Visit https://huggingface.co/Qwen2/Qwen2-7B-Chat-beta
### Expected behavior
Model is there and accessible | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28641/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28640/comments | https://api.github.com/repos/huggingface/transformers/issues/28640/events | https://github.com/huggingface/transformers/pull/28640 | 2,093,854,665 | PR_kwDOCUB6oc5kt-ed | 28,640 | Add missing key to TFLayoutLM signature | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T12:52:41 | 2024-01-22T13:16:30 | 2024-01-22T13:16:29 | MEMBER | null | LayoutLM is missing the `bbox` key in its signature, which affects exporting to TFLite/TF Serving. LayoutLMv3 already has the correct signature and doesn't need to be fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28640/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28640",
"html_url": "https://github.com/huggingface/transformers/pull/28640",
"diff_url": "https://github.com/huggingface/transformers/pull/28640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28640.patch",
"merged_at": "2024-01-22T13:16:29"
} |
https://api.github.com/repos/huggingface/transformers/issues/28639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28639/comments | https://api.github.com/repos/huggingface/transformers/issues/28639/events | https://github.com/huggingface/transformers/pull/28639 | 2,093,795,566 | PR_kwDOCUB6oc5ktxXz | 28,639 | compatibility to original owlv2 model | {
"login": "talshaharabany",
"id": 50660642,
"node_id": "MDQ6VXNlcjUwNjYwNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/50660642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talshaharabany",
"html_url": "https://github.com/talshaharabany",
"followers_url": "https://api.github.com/users/talshaharabany/followers",
"following_url": "https://api.github.com/users/talshaharabany/following{/other_user}",
"gists_url": "https://api.github.com/users/talshaharabany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talshaharabany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talshaharabany/subscriptions",
"organizations_url": "https://api.github.com/users/talshaharabany/orgs",
"repos_url": "https://api.github.com/users/talshaharabany/repos",
"events_url": "https://api.github.com/users/talshaharabany/events{/privacy}",
"received_events_url": "https://api.github.com/users/talshaharabany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T12:21:51 | 2024-01-22T12:23:32 | 2024-01-22T12:23:32 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28639/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28639",
"html_url": "https://github.com/huggingface/transformers/pull/28639",
"diff_url": "https://github.com/huggingface/transformers/pull/28639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28639.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28638/comments | https://api.github.com/repos/huggingface/transformers/issues/28638/events | https://github.com/huggingface/transformers/pull/28638 | 2,093,791,766 | PR_kwDOCUB6oc5ktwiL | 28,638 | Avoid root logger's level being changed | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T12:19:45 | 2024-01-22T13:45:31 | 2024-01-22T13:45:30 | COLLABORATOR | null | # What does this PR do?
A complement to #28575 -> root cause is found and fixed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28638/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28638",
"html_url": "https://github.com/huggingface/transformers/pull/28638",
"diff_url": "https://github.com/huggingface/transformers/pull/28638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28638.patch",
"merged_at": "2024-01-22T13:45:30"
} |
https://api.github.com/repos/huggingface/transformers/issues/28637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28637/comments | https://api.github.com/repos/huggingface/transformers/issues/28637/events | https://github.com/huggingface/transformers/pull/28637 | 2,093,712,354 | PR_kwDOCUB6oc5ktfCr | 28,637 | Fix windows err with checkpoint race conditions | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2024-01-22T11:35:07 | 2024-01-29T13:42:29 | 2024-01-23T13:30:36 | CONTRIBUTOR | null | # What does this PR do?
Windows doesn't like Python trying to open files, so this makes the additional race-condition check run only on non-Windows platforms.
Fixes # (issue)
https://github.com/huggingface/transformers/pull/28364
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28637/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28637",
"html_url": "https://github.com/huggingface/transformers/pull/28637",
"diff_url": "https://github.com/huggingface/transformers/pull/28637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28637.patch",
"merged_at": "2024-01-23T13:30:36"
} |
https://api.github.com/repos/huggingface/transformers/issues/28636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28636/comments | https://api.github.com/repos/huggingface/transformers/issues/28636/events | https://github.com/huggingface/transformers/pull/28636 | 2,093,599,501 | PR_kwDOCUB6oc5ktGNe | 28,636 | [`SigLIP`] Only import tokenizer if sentencepiece available | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-22T10:33:49 | 2024-01-22T15:20:20 | 2024-01-22T15:20:17 | COLLABORATOR | null | # What does this PR do?
Protects the import of siglip's tokenizer such that users can safely run `from transformers import *` if they don't have sentencepiece installed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28636/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28636",
"html_url": "https://github.com/huggingface/transformers/pull/28636",
"diff_url": "https://github.com/huggingface/transformers/pull/28636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28636.patch",
"merged_at": "2024-01-22T15:20:17"
} |
https://api.github.com/repos/huggingface/transformers/issues/28635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28635/comments | https://api.github.com/repos/huggingface/transformers/issues/28635/events | https://github.com/huggingface/transformers/issues/28635 | 2,093,563,935 | I_kwDOCUB6oc58yUAf | 28,635 | Tokenizer `encode/decode` methods are inconsistent, TypeError: argument 'ids': 'list' object cannot be interpreted as an integer | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-22T10:15:05 | 2024-01-30T10:46:15 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
from transformers import AutoTokenizer
text = "test"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
encoded = tokenizer.encode(text, return_tensors='pt')
result_text = tokenizer.decode(encoded, skip_special_tokens=True)
print(result_text)
```
Will raise exception:
```
Traceback (most recent call last):
File "main.py", line 8, in <module>
tokenizer.decode(encoded, skip_special_tokens=True)
File "/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3748, in decode
return self._decode(
^^^^^^^^^^^^^
File "/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 625, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument 'ids': 'list' object cannot be interpreted as an integer
```
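Until `encode`/`decode` are made symmetric, a workaround sketch (this only sidesteps the reported error, it does not fix the underlying inconsistency): `return_tensors="pt"` makes `encode` return a batched `(1, seq_len)` tensor, while the fast tokenizer's `decode` expects a flat sequence of ids, so index the first batch row before decoding.

```python
from transformers import AutoTokenizer

text = "test"
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# encode(..., return_tensors="pt") returns a batched tensor of shape (1, seq_len)
encoded = tokenizer.encode(text, return_tensors="pt")

# decode() wants a flat sequence of ids, so pass the first batch row
result_text = tokenizer.decode(encoded[0], skip_special_tokens=True)
print(result_text)
```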
### Expected behavior
Should be able to print the original text `"test"` rather than raise an exception (`TypeError`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28635/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28634/comments | https://api.github.com/repos/huggingface/transformers/issues/28634/events | https://github.com/huggingface/transformers/pull/28634 | 2,093,449,358 | PR_kwDOCUB6oc5kslMZ | 28,634 | Exllama kernels support for AWQ models | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-22T09:20:19 | 2024-02-01T01:22:25 | null | MEMBER | null | # What does this PR do?
Following https://github.com/casper-hansen/AutoAWQ/pull/313
ExllamaV2 offers up to a 2x speedup compared to GEMM, while also being compatible with AMD ROCm.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SunMarc and @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28634/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28634/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28634",
"html_url": "https://github.com/huggingface/transformers/pull/28634",
"diff_url": "https://github.com/huggingface/transformers/pull/28634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28634.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28633/comments | https://api.github.com/repos/huggingface/transformers/issues/28633/events | https://github.com/huggingface/transformers/pull/28633 | 2,093,350,117 | PR_kwDOCUB6oc5ksPt9 | 28,633 | [`Vilt`] align input and model dtype in the ViltPatchEmbeddings forward pass | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-22T08:26:04 | 2024-01-25T15:03:21 | 2024-01-25T15:03:21 | CONTRIBUTOR | null | ## What does this PR do?
Just like [BlipVisionEmbeddings](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/modeling_blip.py#L249), ViltPatchEmbeddings should align the input dtype with the model dtype as well. Otherwise, I get `RuntimeError: Input type (float) and bias type (c10::Half) should be the same` when my "dandelin/vilt-b32-finetuned-vqa" model is loaded in a half-precision data type.
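For reference, a minimal sketch of the kind of alignment I have in mind (mirroring what BlipVisionEmbeddings does; the layer shapes here are illustrative, and I use bfloat16 so the snippet runs on CPU):

```python
import torch

# Illustrative stand-in for the patch-embedding projection, loaded in half precision
projection = torch.nn.Conv2d(3, 8, kernel_size=4, stride=4).to(torch.bfloat16)
pixel_values = torch.rand(1, 3, 8, 8)  # the processor emits float32 pixel values

# Cast the input to the projection weight's dtype before the conv,
# instead of letting the dtype mismatch raise a RuntimeError
target_dtype = projection.weight.dtype
patch_embeds = projection(pixel_values.to(target_dtype))
```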
## Reproduction
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
import torch
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", torch_dtype=torch.float16).to("cuda")
# prepare inputs
encoding = processor(image, text, return_tensors="pt").to("cuda")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
@ArthurZucker and @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28633/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28633",
"html_url": "https://github.com/huggingface/transformers/pull/28633",
"diff_url": "https://github.com/huggingface/transformers/pull/28633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28633.patch",
"merged_at": "2024-01-25T15:03:21"
} |
https://api.github.com/repos/huggingface/transformers/issues/28632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28632/comments | https://api.github.com/repos/huggingface/transformers/issues/28632/events | https://github.com/huggingface/transformers/issues/28632 | 2,093,162,870 | I_kwDOCUB6oc58wyF2 | 28,632 | Can't quantize gptq model on CPU runtime? | {
"login": "gesanqiu",
"id": 37237570,
"node_id": "MDQ6VXNlcjM3MjM3NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/37237570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gesanqiu",
"html_url": "https://github.com/gesanqiu",
"followers_url": "https://api.github.com/users/gesanqiu/followers",
"following_url": "https://api.github.com/users/gesanqiu/following{/other_user}",
"gists_url": "https://api.github.com/users/gesanqiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gesanqiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gesanqiu/subscriptions",
"organizations_url": "https://api.github.com/users/gesanqiu/orgs",
"repos_url": "https://api.github.com/users/gesanqiu/repos",
"events_url": "https://api.github.com/users/gesanqiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gesanqiu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-22T06:11:04 | 2024-01-23T15:24:57 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GPTQConfig
import torch
model_path = r'/data1/ls/hf_models/multi_lan-mango-dev/'
save_path = r'/data1/ls/hf_models/multi_lan-mango-dev-gptq'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
gptq_config = GPTQConfig(bits=4, dataset="wikitext2", tokenizer=tokenizer, group_size=32, use_exllama=False)
quantized_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map='cpu', use_safetensors=True, quantization_config=gptq_config)
# quantized_model.to("cpu")
quantized_model.save_pretrained(save_path)
```
I have 4*A40 (48G) GPUs on my machine, and I tried to quantize a 30B model with `device_map='auto'`, but GPU memory utilization isn't balanced across the GPUs while quantizing the model.layers blocks, and an OOM occurred. So I want to quantize the model on the CPU runtime instead. The logs are shown below:
```shell
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:07<00:00, 2.10it/s]
Traceback (most recent call last):
File "/home/dell/workSpace/test/gptq_hf.py", line 9, in <module>
quantized_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map='cpu', use_safetensors=True, quantization_config=gptq_config)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3780, in from_pretrained
quantizer.quantize_model(model, quantization_config.tokenizer)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/optimum/gptq/quantizer.py", line 431, in quantize_model
model(**data)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward
outputs = self.model(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1025, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
I think the issue is that the model is on the CPU while the `input_ids` encoded by the tokenizer end up on the GPU?
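If that is the cause, the fix would presumably be to move the calibration batch to the model's device before the forward pass. A minimal sketch of that alignment (a generic `torch.nn.Embedding` stands in for the model here; this is not the actual quantizer code):

```python
import torch

model = torch.nn.Embedding(10, 4)      # stands in for the CPU-resident model
input_ids = torch.tensor([[1, 2, 3]])  # calibration batch, possibly on another device

# Move the batch to wherever the model's weights live before calling forward
device = next(model.parameters()).device
out = model(input_ids.to(device))
```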
### Expected behavior
Quantizing the model succeeds. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28632/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28631/comments | https://api.github.com/repos/huggingface/transformers/issues/28631/events | https://github.com/huggingface/transformers/pull/28631 | 2,093,098,420 | PR_kwDOCUB6oc5krZGk | 28,631 | rm input dtype change in CPU | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-22T05:14:32 | 2024-01-30T07:25:19 | null | CONTRIBUTOR | null | Hi @amyeroberts . Refer to [28199](https://github.com/huggingface/transformers/pull/28199). Since `Autocast` cannot integrate into the pipeline, I propose that keep the inputs dtype in the pipeline. Otherwise, it will block the low-precision usage in both ASR and text-to-audio.
BTW, we will be ready for review once we confirm that it works on different CPUs; please keep this PR open until then. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28631/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28631",
"html_url": "https://github.com/huggingface/transformers/pull/28631",
"diff_url": "https://github.com/huggingface/transformers/pull/28631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28631.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28630/comments | https://api.github.com/repos/huggingface/transformers/issues/28630/events | https://github.com/huggingface/transformers/issues/28630 | 2,093,093,853 | I_kwDOCUB6oc58whPd | 28,630 | Disable removing shared tensors by default | {
"login": "imoneoi",
"id": 26354659,
"node_id": "MDQ6VXNlcjI2MzU0NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/26354659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imoneoi",
"html_url": "https://github.com/imoneoi",
"followers_url": "https://api.github.com/users/imoneoi/followers",
"following_url": "https://api.github.com/users/imoneoi/following{/other_user}",
"gists_url": "https://api.github.com/users/imoneoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imoneoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imoneoi/subscriptions",
"organizations_url": "https://api.github.com/users/imoneoi/orgs",
"repos_url": "https://api.github.com/users/imoneoi/repos",
"events_url": "https://api.github.com/users/imoneoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/imoneoi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 7 | 2024-01-22T05:09:44 | 2024-01-25T14:03:14 | null | NONE | null | ### System Info
```
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, torchrun
```
### Who can help?
@younesbelkada @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
A minimal reproduction on DeepSpeed can be found at https://github.com/huggingface/transformers/issues/27293, where disabling `safe_serialization` solves this issue.
Related (DeepSpeed): https://github.com/huggingface/transformers/issues/27293
### Expected behavior
Consider disabling the removal of shared tensors by default in https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L2409-L2452. This piece of code detects shared tensors through storage locations, but there are many cases in which tensors are views of a larger tensor and therefore share the same location.
One example is when `q_proj`, `k_proj`, and `v_proj` are views of `qkv_proj`; another is DeepSpeed ZeRO, where all parameters are views of one large flat tensor. We've observed failures in both cases.
Besides, not removing shared tensors usually does not cause large storage overhead, since common shared tensors (such as tied embeddings) make up only a small fraction of the total parameters.
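A small illustration of why location-based detection misfires on views (the `qkv`-style names are hypothetical):

```python
import torch

qkv_weight = torch.zeros(6, 4)  # one flat buffer, as with fused QKV or ZeRO's flat params
q_w, k_w, v_w = qkv_weight[:2], qkv_weight[2:4], qkv_weight[4:]

# All three views live in qkv_weight's storage, so keying tensors by storage
# location flags them as "shared" and drops all but one, even though each
# view holds distinct, non-redundant parameters
same_storage = (
    q_w.untyped_storage().data_ptr() == v_w.untyped_storage().data_ptr()
)
```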
```
Removed shared tensor {'model.layers.27.self_attn.k_proj.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.layers.23.self_attn.v_proj.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.14.self_attn.v_proj.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.18.self_attn.k_proj.weight', 'model.layers.24.self_attn.k_proj.weight', 'model.layers.21.self_attn.k_proj.weight', 'model.layers.16.self_attn.v_proj.weight', 'model.layers.28.self_attn.k_proj.weight', 'model.layers.29.self_attn.k_proj.weight', 'model.layers.30.self_attn.k_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.6.self_attn.v_proj.weight', 'model.layers.28.self_attn.v_proj.weight', 'model.layers.30.self_attn.v_proj.weight', 'model.layers.18.self_attn.v_proj.weight', 'model.layers.17.self_attn.k_proj.weight', 'model.layers.29.self_attn.v_proj.weight', 'model.layers.19.self_attn.k_proj.weight', 'model.layers.12.self_attn.k_proj.weight', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.15.self_attn.k_proj.weight', 'model.layers.21.self_attn.v_proj.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.13.self_attn.k_proj.weight', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.31.self_attn.k_proj.weight', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.25.self_attn.v_proj.weight', 'model.layers.22.self_attn.k_proj.weight', 'model.layers.9.self_attn.v_proj.weight', 'model.layers.23.self_attn.k_proj.weight', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.15.self_attn.v_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.3.self_attn.v_proj.weight', 'model.layers.24.self_attn.v_proj.weight', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.20.self_attn.v_proj.weight', 'model.layers.31.self_attn.v_proj.weight', 'model.layers.20.self_attn.k_proj.weight', 'model.layers.26.self_attn.v_proj.weight', 
'model.layers.14.self_attn.k_proj.weight', 'model.layers.22.self_attn.v_proj.weight', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.17.self_attn.v_proj.weight', 'model.layers.27.self_attn.v_proj.weight', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.16.self_attn.k_proj.weight', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.13.self_attn.v_proj.weight', 'model.layers.26.self_attn.k_proj.weight', 'model.layers.12.self_attn.v_proj.weight', 'model.layers.19.self_attn.v_proj.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.25.self_attn.k_proj.weight', 'model.layers.11.self_attn.k_proj.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28630/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28629/comments | https://api.github.com/repos/huggingface/transformers/issues/28629/events | https://github.com/huggingface/transformers/issues/28629 | 2,093,006,116 | I_kwDOCUB6oc58wL0k | 28,629 | Fast tokenizer's time complexity is not linear | {
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-22T03:37:35 | 2024-01-23T15:14:27 | null | NONE | null | ### System Info
torch 2.1.2
transformers 4.36.2
### Who can help?
@ArthurZucker
I'm using a streaming dataloader to train on a large dataset. However, I find that some processes frequently get stuck, which eventually leads to an NCCL timeout. After checking carefully, I found that the problem may come from the tokenizer.
When a batch (1000 examples by default, I think) contains too many tokens (e.g., it includes a very long document such as an entire book), the tokenization process becomes extremely slow. I tested the tokenization efficiency for sequences of different lengths and found that the time cost is not linear in the sequence length but looks quadratic.
Tokenizing a 50k-word sequence costs 0.5s, but tokenizing a 500k-word sequence costs 70s (about 140x slower).
I don't know whether this is a bug. If it is by design, how can I prevent the tokenizer from getting stuck on batches with too many tokens? I think one way is to reduce the batch size (default 1000). Is there any other way?
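As another possible workaround (besides lowering the batch size), I could pre-chunk over-long documents before tokenization so no single tokenizer call sees hundreds of thousands of words; a rough sketch (`max_words` is an arbitrary threshold):

```python
def chunk_document(text: str, max_words: int = 50_000) -> list[str]:
    """Split an over-long document into word-bounded chunks before tokenizing."""
    words = text.split(" ")
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```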
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
import time
def test_time(test_str):
a = time.time()
tokens = tokenizer(test_str)
b = time.time()
return b-a
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
test_str = "I love NLP. " * 100000 # we can change the number to try different lengths
time_cost = test_time(test_str)
print(time_cost)
```
### Expected behavior
Time cost should increase linearly with the sequence length. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28629/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28628/comments | https://api.github.com/repos/huggingface/transformers/issues/28628/events | https://github.com/huggingface/transformers/pull/28628 | 2,092,857,414 | PR_kwDOCUB6oc5kqlf4 | 28,628 | Support single token decode for `CodeGenTokenizer` | {
"login": "cmathw",
"id": 108584265,
"node_id": "U_kgDOBnjdSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108584265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmathw",
"html_url": "https://github.com/cmathw",
"followers_url": "https://api.github.com/users/cmathw/followers",
"following_url": "https://api.github.com/users/cmathw/following{/other_user}",
"gists_url": "https://api.github.com/users/cmathw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmathw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmathw/subscriptions",
"organizations_url": "https://api.github.com/users/cmathw/orgs",
"repos_url": "https://api.github.com/users/cmathw/repos",
"events_url": "https://api.github.com/users/cmathw/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmathw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T00:28:24 | 2024-01-23T17:14:52 | 2024-01-23T15:27:25 | CONTRIBUTOR | null | # What does this PR do?
This PR should fix #28627 by first converting `token_ids` to a list in the `decode` method of the `CodeGenTokenizer` class. No new tests were added, but I'm happy to write some if need be.
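The normalization is roughly the following (a standalone sketch of the conversion at the top of `decode`, not the exact patch):

```python
import torch

def normalize_token_ids(token_ids):
    # A 0-d tensor can't be iterated, so convert tensors to plain Python
    # objects first; .tolist() turns a 0-d tensor into a bare int
    if isinstance(token_ids, torch.Tensor):
        token_ids = token_ids.tolist()
    if isinstance(token_ids, int):
        token_ids = [token_ids]
    return token_ids
```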
## Example
```python3
from transformers.models.auto.tokenization_auto import AutoTokenizer
phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
a = "The cat sat on the mat"
gpt2_tokens = gpt2_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
gpt2_str_tokens = gpt2_tokenizer.batch_decode(gpt2_tokens) # Essentially: [gpt2_tokenizer.decode(seq) for seq in gpt2_tokens]
print(gpt2_str_tokens) # <-- This is fine and will output: ['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens[0])]
print(gpt2_single_decode) # <-- Decoding a 0-D tensor, this is fine and will output: ['<|endoftext|>']
phi_tokens = phi_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens) # Essentially: [phi_tokenizer.decode(seq) for seq in phi_tokens]
print(phi_str_tokens) # <-- Cannot do this due to below...
phi_single_decode = [phi_tokenizer.decode(phi_tokens[0])]
print(phi_single_decode) # <-- Cannot decode a 0-D Tensor, hence cannot do above either
single_tok = phi_tokens[0].detach().cpu().tolist()
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens)]
phi_single_decode = [phi_tokenizer.decode(phi_tokens)]
```
## Output before fix:
```bash
TypeError: iteration over a 0-d tensor
```
## Output after fix:
```bash
['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
['<|endoftext|>']
['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
['<|endoftext|>']
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @rooa | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28628/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28628",
"html_url": "https://github.com/huggingface/transformers/pull/28628",
"diff_url": "https://github.com/huggingface/transformers/pull/28628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28628.patch",
"merged_at": "2024-01-23T15:27:25"
} |
https://api.github.com/repos/huggingface/transformers/issues/28627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28627/comments | https://api.github.com/repos/huggingface/transformers/issues/28627/events | https://github.com/huggingface/transformers/issues/28627 | 2,092,841,862 | I_kwDOCUB6oc58vjuG | 28,627 | Support decoding single tokens with `CodeGenTokenizer` | {
"login": "cmathw",
"id": 108584265,
"node_id": "U_kgDOBnjdSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108584265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmathw",
"html_url": "https://github.com/cmathw",
"followers_url": "https://api.github.com/users/cmathw/followers",
"following_url": "https://api.github.com/users/cmathw/following{/other_user}",
"gists_url": "https://api.github.com/users/cmathw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmathw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmathw/subscriptions",
"organizations_url": "https://api.github.com/users/cmathw/orgs",
"repos_url": "https://api.github.com/users/cmathw/repos",
"events_url": "https://api.github.com/users/cmathw/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmathw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-22T00:01:03 | 2024-01-23T15:27:42 | 2024-01-23T15:27:26 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help?
@ArthurZucker @rooa
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
``` python3
from transformers.models.auto.tokenization_auto import AutoTokenizer
phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
a = "The cat sat on the mat"
gpt2_tokens = gpt2_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
gpt2_str_tokens = gpt2_tokenizer.batch_decode(gpt2_tokens) # Essentially: [gpt2_tokenizer.decode(seq) for seq in gpt2_tokens]
print(gpt2_str_tokens) # <-- This is fine and will output: ['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens[0])]
print(gpt2_single_decode) # <-- Decoding a 0-D tensor, this is fine and will output: ['<|endoftext|>']
phi_tokens = phi_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens) # Essentially: [phi_tokenizer.decode(seq) for seq in phi_tokens]
print(phi_str_tokens) # <-- Cannot do this due to below...
phi_single_decode = [phi_tokenizer.decode(phi_tokens[0])]
print(phi_single_decode) # <-- Cannot decode a 0-D Tensor, hence cannot do above either
```
Returns:
TypeError: iteration over a 0-d tensor
### Expected behavior
In the above example,
```python3
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens)
```
Should return `['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']`, as the GPT-2 tokenizer does.
```python3
phi_single_decode = [phi_tokenizer.decode(phi_tokens)]
```
Should return `['<|endoftext|>']`, as the GPT-2 tokenizer does. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28627/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28626/comments | https://api.github.com/repos/huggingface/transformers/issues/28626/events | https://github.com/huggingface/transformers/issues/28626 | 2,092,734,450 | I_kwDOCUB6oc58vJfy | 28,626 | No, you cannot set a `token` to an `id`. It is the same as `tokenzier.pad_token_id = 0` if `tokenizer.eos_token_id` is `0` | {
"login": "rafa852",
"id": 59406764,
"node_id": "MDQ6VXNlcjU5NDA2NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/59406764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafa852",
"html_url": "https://github.com/rafa852",
"followers_url": "https://api.github.com/users/rafa852/followers",
"following_url": "https://api.github.com/users/rafa852/following{/other_user}",
"gists_url": "https://api.github.com/users/rafa852/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafa852/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafa852/subscriptions",
"organizations_url": "https://api.github.com/users/rafa852/orgs",
"repos_url": "https://api.github.com/users/rafa852/repos",
"events_url": "https://api.github.com/users/rafa852/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafa852/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-21T19:35:42 | 2024-01-22T10:58:50 | null | NONE | null | No, you cannot set a `token` to an `id`. It is the same as `tokenizer.pad_token_id = 0` if `tokenizer.eos_token_id` is `0`
_Originally posted by @ArthurZucker in https://github.com/huggingface/transformers/issues/26072#issuecomment-1859852130_
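A toy illustration of the distinction (the class below is purely hypothetical, not the real `transformers` tokenizer API): assign the pad *token* (a string), and the pad *id* then follows from the vocabulary automatically.

```python
# Hypothetical minimal tokenizer stand-in; it only illustrates the
# token-vs-id distinction, not the real transformers implementation.
class ToyTokenizer:
    def __init__(self):
        self.vocab = {"</s>": 0, "hello": 1}
        self.eos_token = "</s>"
        self.pad_token = None

    def _to_id(self, token):
        return None if token is None else self.vocab[token]

    @property
    def eos_token_id(self):
        return self._to_id(self.eos_token)

    @property
    def pad_token_id(self):
        return self._to_id(self.pad_token)


tok = ToyTokenizer()
tok.pad_token = tok.eos_token  # assign the token string, not the id
print(tok.pad_token_id)        # 0, i.e. equal to tok.eos_token_id
```

With the real library, the analogous pattern is `tokenizer.pad_token = tokenizer.eos_token`.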
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28626/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28626/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28625/comments | https://api.github.com/repos/huggingface/transformers/issues/28625/events | https://github.com/huggingface/transformers/issues/28625 | 2,092,731,446 | I_kwDOCUB6oc58vIw2 | 28,625 | ESM Rotary Embedding implementation is not TorchScript safe | {
"login": "ChenchaoZhao",
"id": 35147961,
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenchaoZhao",
"html_url": "https://github.com/ChenchaoZhao",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url"... | null | 4 | 2024-01-21T19:27:41 | 2024-01-29T13:28:33 | null | NONE | null | ### System Info
This issue is independent of the environment. It's purely about the PyTorch implementation of the ESM position embedding.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load an ESM model with `from_pretrained`
2. Move it to CUDA
3. Trace it with `torch.jit.trace`
### Expected behavior
It may crash when you trace the model.
Even if it doesn't crash at trace time, saving the model, moving it to a different device, and running inference will crash with a device error.
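This can be reduced to a toy module (illustrative names, not the actual ESM code); a dtype conversion stands in for the device move so the sketch runs without a GPU:

```python
import torch
import torch.nn as nn


class CachedRotary(nn.Module):
    """Toy module caching a tensor two different ways (hypothetical names)."""

    def __init__(self):
        super().__init__()
        self._cos_cached = torch.ones(4)                   # plain attribute
        self.register_buffer("cos_buffer", torch.ones(4))  # registered buffer


m = CachedRotary().to(torch.float64)
print(m._cos_cached.dtype)  # torch.float32 -- Module.to() ignored it
print(m.cos_buffer.dtype)   # torch.float64 -- buffers are converted
```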
The reason is that the cached sin and cos tensors are NOT registered as buffers, which means PyTorch does not know to move these plain Python attributes when the `to` method is used. I would suggest copying the Llama RoPE implementation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28625/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28624/comments | https://api.github.com/repos/huggingface/transformers/issues/28624/events | https://github.com/huggingface/transformers/issues/28624 | 2,092,675,763 | I_kwDOCUB6oc58u7Kz | 28,624 | WhisperForAudioClassification throws errors while using use_weighted_layer_sum | {
"login": "chercheurkg",
"id": 128296694,
"node_id": "U_kgDOB6Wm9g",
"avatar_url": "https://avatars.githubusercontent.com/u/128296694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chercheurkg",
"html_url": "https://github.com/chercheurkg",
"followers_url": "https://api.github.com/users/chercheurkg/followers",
"following_url": "https://api.github.com/users/chercheurkg/following{/other_user}",
"gists_url": "https://api.github.com/users/chercheurkg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chercheurkg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chercheurkg/subscriptions",
"organizations_url": "https://api.github.com/users/chercheurkg/orgs",
"repos_url": "https://api.github.com/users/chercheurkg/repos",
"events_url": "https://api.github.com/users/chercheurkg/events{/privacy}",
"received_events_url": "https://api.github.com/users/chercheurkg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-21T17:10:17 | 2024-01-25T00:55:00 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.9
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For a classification task, I tried to fine-tune the **whisper-small** pretrained model using `WhisperForAudioClassification` with `use_weighted_layer_sum` set to `True`. It threw the following error.
```
File "some_path\site-packages\torch\amp\autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "some_path\site-packages\transformers\models\whisper\modeling_whisper.py", line 2418, in forward
hidden_states = torch.stack(encoder_outputs, dim=1)
TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not BaseModelOutput
0%| | 0/2085 [00:52<?, ?it/s]
```
1. Load the **whisper-small** pretrained model and set `use_weighted_layer_sum` to `True`:
```
config = AutoConfig.from_pretrained(
'openai/whisper-small',
..........
)
config.use_weighted_layer_sum = True
```
2. Start training it on a labeled dataset.
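For illustration, here is a sketch of why the `torch.stack` call fails and one possible workaround — the `BaseModelOutput` below is a minimal stand-in for the real `transformers` class, and this is not necessarily the official fix:

```python
import torch
from dataclasses import dataclass


@dataclass
class BaseModelOutput:  # minimal stand-in, not the real transformers class
    last_hidden_state: torch.Tensor
    hidden_states: tuple


layer_outputs = tuple(torch.randn(2, 5, 8) for _ in range(4))
encoder_outputs = BaseModelOutput(
    last_hidden_state=layer_outputs[-1], hidden_states=layer_outputs
)

# torch.stack(encoder_outputs, dim=1) raises TypeError: a model-output
# object is not a tuple of tensors. Stacking the tuple of per-layer
# hidden states it carries works:
hidden_states = torch.stack(encoder_outputs.hidden_states, dim=1)
print(hidden_states.shape)  # torch.Size([2, 4, 5, 8])
```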
### Expected behavior
It should not throw the above error, as it should work for both `use_weighted_layer_sum = True` and `use_weighted_layer_sum = False`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28624/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28623/comments | https://api.github.com/repos/huggingface/transformers/issues/28623/events | https://github.com/huggingface/transformers/issues/28623 | 2,092,648,273 | I_kwDOCUB6oc58u0dR | 28,623 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn while model parameters requires_grad is True | {
"login": "zhongshsh",
"id": 62104945,
"node_id": "MDQ6VXNlcjYyMTA0OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62104945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongshsh",
"html_url": "https://github.com/zhongshsh",
"followers_url": "https://api.github.com/users/zhongshsh/followers",
"following_url": "https://api.github.com/users/zhongshsh/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongshsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongshsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongshsh/subscriptions",
"organizations_url": "https://api.github.com/users/zhongshsh/orgs",
"repos_url": "https://api.github.com/users/zhongshsh/repos",
"events_url": "https://api.github.com/users/zhongshsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongshsh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-21T16:01:59 | 2024-01-30T08:10:24 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.27.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I load `Mixtral` with `device_map` set to `auto` (I have to use `auto`, otherwise OOM), then I set `requires_grad` to `True` for some of the model's parameters (e.g., `model.model.layers[0]`), and use `Trainer` to fine-tune it.
I launch the code with `python xx.py`, as explained in https://github.com/huggingface/accelerate/issues/1840#issuecomment-1683105994.
The error message I got is as follows:
```
Traceback (most recent call last):
File "xx.py", line 461, in <module>
trainer.train()
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 1864, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2763, in training_step
self.accelerator.backward(loss)
File "miniconda3/lib/python3.11/site-packages/accelerate/accelerator.py", line 1964, in backward
loss.backward(**kwargs)
File "miniconda3/lib/python3.11/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "miniconda3/lib/python3.11/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
If I instead set the grad of another module such as `lm_head` to `True`, the code runs successfully. I wonder whether the above error is caused by setting `device_map` to `auto`?
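For reference, the selective-unfreeze pattern described above can be sketched on a toy model (Mixtral itself is far too large to load here; the module indices are illustrative):

```python
import torch.nn as nn

# Toy stand-in for a large model: freeze everything, then re-enable
# gradients only for the first "layer".
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 2))

for p in model.parameters():
    p.requires_grad_(False)
for p in model[0].parameters():
    p.requires_grad_(True)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['0.weight', '0.bias']
```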
### Expected behavior
Make `requires_grad` `True` for some of `Mixtral`'s parameters and use `Trainer` to fine-tune it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28623/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28623/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28622/comments | https://api.github.com/repos/huggingface/transformers/issues/28622/events | https://github.com/huggingface/transformers/issues/28622 | 2,092,639,091 | I_kwDOCUB6oc58uyNz | 28,622 | Can `LlamaTokenizerFast` support the argument `add_prefix_space = False` | {
"login": "hnyls2002",
"id": 95566987,
"node_id": "U_kgDOBbI8iw",
"avatar_url": "https://avatars.githubusercontent.com/u/95566987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hnyls2002",
"html_url": "https://github.com/hnyls2002",
"followers_url": "https://api.github.com/users/hnyls2002/followers",
"following_url": "https://api.github.com/users/hnyls2002/following{/other_user}",
"gists_url": "https://api.github.com/users/hnyls2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hnyls2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hnyls2002/subscriptions",
"organizations_url": "https://api.github.com/users/hnyls2002/orgs",
"repos_url": "https://api.github.com/users/hnyls2002/repos",
"events_url": "https://api.github.com/users/hnyls2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/hnyls2002/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-21T15:39:18 | 2024-01-24T05:37:18 | null | NONE | null | ### System Info
With `transformers==4.36.2`
It seems the argument `add_prefix_space` has no effect here.
### Who can help?
@ArthurZucker
### Reproduction
```
>>> from transformers import LlamaTokenizerFast
>>> tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer", add_prefix_space = False)
>>> tokenizer.tokenize("hello")
['▁hello']
>>> tokenizer.decode(tokenizer.encode("hello"))
'<s> hello'
```
### Expected behavior
Is there a bug, or is it my wrong usage? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28622/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28621/comments | https://api.github.com/repos/huggingface/transformers/issues/28621/events | https://github.com/huggingface/transformers/pull/28621 | 2,092,497,369 | PR_kwDOCUB6oc5kpdul | 28,621 | Raise `Exception` when trying to generate 0 tokens | {
"login": "danielkorat",
"id": 32893314,
"node_id": "MDQ6VXNlcjMyODkzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielkorat",
"html_url": "https://github.com/danielkorat",
"followers_url": "https://api.github.com/users/danielkorat/followers",
"following_url": "https://api.github.com/users/danielkorat/following{/other_user}",
"gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions",
"organizations_url": "https://api.github.com/users/danielkorat/orgs",
"repos_url": "https://api.github.com/users/danielkorat/repos",
"events_url": "https://api.github.com/users/danielkorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielkorat/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-01-21T09:20:13 | 2024-01-28T11:39:27 | null | NONE | null | # What does this PR do?
Currently, setting `max_new_tokens=0` generates 1 token instead of 0, and only a warning is emitted.
To prevent unexpected patterns of generation, this warning should be changed to an `Exception`.
### Example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/tiny_starcoder_py"
tokenizer = AutoTokenizer.from_pretrained("bigcode/tiny_starcoder_py")
model = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py")
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=0)
print(f"Input length: {len(inputs['input_ids'][0])}")
print(f"Output length: {len(outputs[0])}")
```
### Output before fix:
```bash
/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py:1136: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Input length: 7
Output length: 8
```
### Output after fix:
```bash
Traceback (most recent call last):
File "/home/sdp/fix-zero-max-new-tokens/test.py", line 8, in <module>
outputs = model.generate(**inputs,
File "/storage/sdp/anaconda3/envs/fix-zero-max-new-tokens/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py", line 1396, in generate
self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
File "/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py", line 1136, in _validate_generated_length
raise ValueError(
ValueError: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
```
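For illustration, the check this PR proposes can be sketched as follows (the function name and message mirror the traceback above; this is not the actual diff):

```python
def validate_generated_length(input_ids_length: int, max_length: int) -> None:
    """Raise instead of warn when no new tokens can be generated (sketch)."""
    if max_length <= input_ids_length:
        raise ValueError(
            f"Input length of input_ids is {input_ids_length}, but `max_length` "
            f"is set to {max_length}. This can lead to unexpected behavior. "
            f"You should consider increasing `max_new_tokens`."
        )


validate_generated_length(6, 10)  # fine: room for 4 new tokens
try:
    validate_generated_length(7, 7)  # the max_new_tokens=0 case above
except ValueError as err:
    print(type(err).__name__)  # ValueError
```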
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28621/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28621",
"html_url": "https://github.com/huggingface/transformers/pull/28621",
"diff_url": "https://github.com/huggingface/transformers/pull/28621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28621.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28620/comments | https://api.github.com/repos/huggingface/transformers/issues/28620/events | https://github.com/huggingface/transformers/pull/28620 | 2,092,471,767 | PR_kwDOCUB6oc5kpZAG | 28,620 | Unused "embedding_size" in bert attention | {
"login": "amar-jay",
"id": 64834413,
"node_id": "MDQ6VXNlcjY0ODM0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/64834413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amar-jay",
"html_url": "https://github.com/amar-jay",
"followers_url": "https://api.github.com/users/amar-jay/followers",
"following_url": "https://api.github.com/users/amar-jay/following{/other_user}",
"gists_url": "https://api.github.com/users/amar-jay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amar-jay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amar-jay/subscriptions",
"organizations_url": "https://api.github.com/users/amar-jay/orgs",
"repos_url": "https://api.github.com/users/amar-jay/repos",
"events_url": "https://api.github.com/users/amar-jay/events{/privacy}",
"received_events_url": "https://api.github.com/users/amar-jay/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-21T08:01:36 | 2024-01-24T14:31:41 | null | NONE | null | The embedding_size is not used.
# What does this PR do?
Fixes unused code in BERT attention.
Fixes a minor typo in the code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28620/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28620",
"html_url": "https://github.com/huggingface/transformers/pull/28620",
"diff_url": "https://github.com/huggingface/transformers/pull/28620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28620.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28619/comments | https://api.github.com/repos/huggingface/transformers/issues/28619/events | https://github.com/huggingface/transformers/issues/28619 | 2,092,457,798 | I_kwDOCUB6oc58uF9G | 28,619 | KOSMOS-2, finding probability distribution of the text sequence | {
"login": "snpushpi",
"id": 55248448,
"node_id": "MDQ6VXNlcjU1MjQ4NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/55248448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snpushpi",
"html_url": "https://github.com/snpushpi",
"followers_url": "https://api.github.com/users/snpushpi/followers",
"following_url": "https://api.github.com/users/snpushpi/following{/other_user}",
"gists_url": "https://api.github.com/users/snpushpi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snpushpi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snpushpi/subscriptions",
"organizations_url": "https://api.github.com/users/snpushpi/orgs",
"repos_url": "https://api.github.com/users/snpushpi/repos",
"events_url": "https://api.github.com/users/snpushpi/events{/privacy}",
"received_events_url": "https://api.github.com/users/snpushpi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-21T07:07:54 | 2024-01-22T20:44:00 | 2024-01-22T20:43:59 | NONE | null | ### System Info
Google Colab with GPU environment enabled and related libraries installed
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
I am trying to extract the probability distribution of the tokens in a sequence from the model. Here is what I am running:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224").to(device)
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
prompt1 = "An old man sitting on a bench in a public park alone, reading a book."
url1 = "http://images.cocodataset.org/val2017/000000264535.jpg"
image1 = Image.open(requests.get(url1, stream=True).raw)
inputs1 = processor(text=prompt1, images=image1, return_tensors="pt").to(device)
model_output = model(
    pixel_values=inputs1["pixel_values"],
    input_ids=inputs1["input_ids"],
    image_embeds=None,
    image_embeds_position_mask=inputs1["image_embeds_position_mask"],
    use_cache=True,
)
input_ids = processor.tokenizer(prompt1, return_tensors="pt").input_ids.to(device)
input_ids.shape
# torch.Size([1, 14])
model_output.logits.shape
# torch.Size([1, 79, 65037])
token_logits = torch.cat([model_output.logits[:, :2, :], model_output.logits[:, 67:, :]], dim=1)  # Is this correct?
```
So I am trying to extract the logits of the text tokens from the model output. The goal is to eventually calculate the probability of a certain word given the previous words and the image. But I am not sure of the last line where I extracted the text token logits from the model output. I did what I did since that's what the inputs1["image_embeds_position_mask"] mask looked like, but I am not sure if that is the right thing to do. Can someone confirm that?
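Independent of the KOSMOS-2 specifics, the shift-by-one bookkeeping for turning logits into per-token log-probabilities can be sketched on toy data (plain Python below; on the real tensors the same pattern is `torch.nn.functional.log_softmax(logits, dim=-1)` followed by a `gather` on the shifted input ids). The logits, vocabulary size, and ids here are invented for illustration:

```python
import math

def token_log_probs(logits, input_ids):
    """Log-probability of each token given the preceding ones.

    logits[t] are the scores produced *after* reading token t, so the
    score for input_ids[t + 1] lives at position t (shift by one).
    """
    out = []
    for t in range(len(input_ids) - 1):
        row = logits[t]
        m = max(row)
        # Numerically stable log-partition (log-sum-exp).
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        out.append(row[input_ids[t + 1]] - log_z)
    return out

# Toy example: 3 positions, vocabulary of size 4.
logits = [
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
ids = [0, 1, 2, 3]  # log P(id=1 | id=0) uses logits[0], etc.
lp = token_log_probs(logits, ids)
print([round(x, 3) for x in lp])  # → [-2.341, -3.139, -1.744]
```

Summing the returned values gives the log-probability of the whole continuation, which is the quantity the question is after.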
### Expected behavior
explained above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28619/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28618/comments | https://api.github.com/repos/huggingface/transformers/issues/28618/events | https://github.com/huggingface/transformers/pull/28618 | 2,092,301,494 | PR_kwDOCUB6oc5koz_I | 28,618 | Fix utf-8 yaml load in marian conversion to pytorch | {
"login": "SystemPanic",
"id": 25750030,
"node_id": "MDQ6VXNlcjI1NzUwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/25750030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SystemPanic",
"html_url": "https://github.com/SystemPanic",
"followers_url": "https://api.github.com/users/SystemPanic/followers",
"following_url": "https://api.github.com/users/SystemPanic/following{/other_user}",
"gists_url": "https://api.github.com/users/SystemPanic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SystemPanic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SystemPanic/subscriptions",
"organizations_url": "https://api.github.com/users/SystemPanic/orgs",
"repos_url": "https://api.github.com/users/SystemPanic/repos",
"events_url": "https://api.github.com/users/SystemPanic/events{/privacy}",
"received_events_url": "https://api.github.com/users/SystemPanic/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-01-21T01:27:52 | 2024-01-31T15:29:46 | null | NONE | null | # What does this PR do?
Fix loading of UTF-8 encoded YAML files in `convert_marian_to_pytorch.py`
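For reference, the failure mode and the fix can be reproduced without Marian at all: it is the classic platform-default-encoding trap in `open()`. A minimal sketch (the file name and YAML content below are invented for the demo):

```python
import os
import tempfile

def load_yaml_text(path):
    # The fix: read model-card YAML explicitly as UTF-8 instead of the
    # platform default (e.g. cp1252 on Windows), which can raise
    # UnicodeDecodeError on accented characters.
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

# Hypothetical Marian-style metadata containing non-ASCII text.
fd, path = tempfile.mkstemp(suffix=".yml")
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    f.write("modelname: opus-mt-fr-en\nnotes: entraîné sur OPUS\n")

text = load_yaml_text(path)
print("entraîné" in text)  # → True
os.remove(path)
```

The returned string can then be handed to the YAML parser as before; only the file-reading step changes.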
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28618/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28618",
"html_url": "https://github.com/huggingface/transformers/pull/28618",
"diff_url": "https://github.com/huggingface/transformers/pull/28618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28618.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28617/comments | https://api.github.com/repos/huggingface/transformers/issues/28617/events | https://github.com/huggingface/transformers/pull/28617 | 2,092,276,191 | PR_kwDOCUB6oc5kovFV | 28,617 | [`Llava`] Update convert_llava_weights_to_hf.py script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-20T23:36:39 | 2024-01-22T14:28:19 | 2024-01-22T14:28:18 | CONTRIBUTOR | null | Based on discussion in the issue https://github.com/huggingface/transformers/issues/28597
Fixes: https://github.com/huggingface/transformers/issues/28597
* Remove the config update that adds padding to `vocab_size` and `text_config.vocab_size`, which causes a `ValueError` exception.
* Remove keys that end with `inv_freq` from the state dict.
* Add examples and instructions for creating `model_state_dict.bin` that can be used by the script.
```console
$ python src/transformers/models/llava/convert_llava_weights_to_hf.py -h
usage: convert_llava_weights_to_hf.py [-h] [--text_model_id TEXT_MODEL_ID] [--vision_model_id VISION_MODEL_ID] [--output_hub_path OUTPUT_HUB_PATH]
[--old_state_dict_id OLD_STATE_DICT_ID]
optional arguments:
-h, --help show this help message and exit
--text_model_id TEXT_MODEL_ID
Hub location of the text model
--vision_model_id VISION_MODEL_ID
Hub location of the vision model
--output_hub_path OUTPUT_HUB_PATH
Location on the hub of the converted model
--old_state_dict_id OLD_STATE_DICT_ID
Location on the hub of the raw state dict of the original model. The filename needs to be `model_state_dict.bin`
Example:
python transformers/src/transformers/models/llava/convert_llava_weights_to_hf.py --text_model_id lmsys/vicuna-7b-v1.5 --vision_model_id openai/clip-vit-large-patch14-336 --output_hub_path org/llava-v1.5-7b-conv --old_state_dict_id liuhaotian/llava-v1.5-7b
Example for creating the old state dict file with Python:
import torch
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
# load model
kwargs = {"device_map": "auto", "torch_dtype": torch.float16}
model = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", low_cpu_mem_usage=True, **kwargs)
# load vision tower
model.get_vision_tower().load_model()
# Save state dict
torch.save(model.state_dict(), "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada if you can please review
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28617/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28617",
"html_url": "https://github.com/huggingface/transformers/pull/28617",
"diff_url": "https://github.com/huggingface/transformers/pull/28617.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28617.patch",
"merged_at": "2024-01-22T14:28:18"
} |
https://api.github.com/repos/huggingface/transformers/issues/28616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28616/comments | https://api.github.com/repos/huggingface/transformers/issues/28616/events | https://github.com/huggingface/transformers/pull/28616 | 2,092,273,218 | PR_kwDOCUB6oc5kougi | 28,616 | Token healing | {
"login": "Ayenem",
"id": 50707385,
"node_id": "MDQ6VXNlcjUwNzA3Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/50707385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ayenem",
"html_url": "https://github.com/Ayenem",
"followers_url": "https://api.github.com/users/Ayenem/followers",
"following_url": "https://api.github.com/users/Ayenem/following{/other_user}",
"gists_url": "https://api.github.com/users/Ayenem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ayenem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ayenem/subscriptions",
"organizations_url": "https://api.github.com/users/Ayenem/orgs",
"repos_url": "https://api.github.com/users/Ayenem/repos",
"events_url": "https://api.github.com/users/Ayenem/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ayenem/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-20T23:21:43 | 2024-01-31T01:50:05 | null | NONE | null | # What does this PR do?
Token healing rectifies the token boundary bias in greedy tokenization. It does this by trimming and regrowing the prompt to better align with the model's tokenizer, thus enhancing generation quality. The improvement is clearest with completion models.
Token boundary bias is a silent performance killer that doesn't seem very well known. It has clear impact on completion quality, though I'm not sure where it would fit as a transformers feature.
A more thorough explanation of the problem: [The Art of Prompt Design: Prompt Boundaries and Token Healing | by Scott Lundberg](https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38).
### Motivation
Given a completion prompt with a partial url ending with `:`, the model might have seen the expected completion `://` as a _single_ token in training. However, the prompt's tail token `:` tells it that the next token is not `//`, and so it generates a wrong completion. Such errors compound in auto-regressive language models.
Fixes [#28346](https://github.com/huggingface/transformers/issues/28346)
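The boundary effect described above is easy to reproduce with a toy longest-match tokenizer standing in for greedy BPE (the vocabulary below is invented): cutting the prompt at `:` strands a token that the fully completed string would never produce, and token healing removes the bias by trimming that tail token and constraining the first generated token to extend it.

```python
def greedy_tokenize(text, vocab):
    """Toy longest-match tokenizer standing in for greedy BPE."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:  # unknown character: emit it as its own token
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"http", "://", ":", "/", "example"}

# The completed string uses "://" as ONE token ...
print(greedy_tokenize("http://example", vocab))  # → ['http', '://', 'example']

# ... but a prompt cut after ":" ends in the bare ":" token, a
# boundary the model rarely saw followed by "//" during training.
print(greedy_tokenize("http:", vocab))  # → ['http', ':']

# Token healing: drop the tail token, then only allow first tokens
# that start with the trimmed text.
healed_prefix = greedy_tokenize("http", vocab)
allowed_first = {t for t in vocab if t.startswith(":")}
print(healed_prefix, sorted(allowed_first))  # → ['http'] [':', '://']
```

With the healed prefix, the model is free to pick `://` as a single token, which is the alignment the completed string would have had in training.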
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
- @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28616/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28616",
"html_url": "https://github.com/huggingface/transformers/pull/28616",
"diff_url": "https://github.com/huggingface/transformers/pull/28616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28616.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28615/comments | https://api.github.com/repos/huggingface/transformers/issues/28615/events | https://github.com/huggingface/transformers/pull/28615 | 2,092,050,819 | PR_kwDOCUB6oc5koCAa | 28,615 | enable graident checkpointing in DetaObjectDetection and add tests in Swin/Donut_Swin | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2024-01-20T13:16:59 | 2024-02-01T00:13:24 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28615/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28615",
"html_url": "https://github.com/huggingface/transformers/pull/28615",
"diff_url": "https://github.com/huggingface/transformers/pull/28615.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28615.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28614/comments | https://api.github.com/repos/huggingface/transformers/issues/28614/events | https://github.com/huggingface/transformers/issues/28614 | 2,091,954,715 | I_kwDOCUB6oc58sLIb | 28,614 | Training with FSDP slows down the convergence speed. | {
"login": "yuangpeng",
"id": 57125678,
"node_id": "MDQ6VXNlcjU3MTI1Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57125678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuangpeng",
"html_url": "https://github.com/yuangpeng",
"followers_url": "https://api.github.com/users/yuangpeng/followers",
"following_url": "https://api.github.com/users/yuangpeng/following{/other_user}",
"gists_url": "https://api.github.com/users/yuangpeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuangpeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuangpeng/subscriptions",
"organizations_url": "https://api.github.com/users/yuangpeng/orgs",
"repos_url": "https://api.github.com/users/yuangpeng/repos",
"events_url": "https://api.github.com/users/yuangpeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuangpeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2024-01-20T07:57:33 | 2024-01-24T07:45:39 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.143-2-velinux1-amd64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.23.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'NO_PREFETCH', 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 2, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wrap': '', 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: A800
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pac
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Two pairs of identical experiments are shown: one pair in purple and green, the other in yellow and blue. Within each pair all other parameters are the same; one run uses FSDP and the other uses DeepSpeed.
FSDP and DeepSpeed show similar losses in the early steps, with differences appearing after about 20 steps. It is not that FSDP fails to converge; it does converge, just more slowly.
fsdp config:
```
TrainingArguments.fsdp="shard_grad_op auto_wrap"
TrainingArguments.fsdp_config=dict(fsdp_transformer_layer_cls_to_wrap=["LlamaDecoderLayer"])
```
deepspeed config:
```
{
"bf16": {
"enabled": true
},
"train_micro_batch_size_per_gpu": "auto",
"zero_optimization": {
"stage": 2,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto"
}
}
```
<img width="1442" alt="image" src="https://github.com/huggingface/transformers/assets/57125678/1894247d-7e5f-423f-83e8-72aa14cfb4e6">
### Expected behavior
I expect similar loss curves when using FSDP and DeepSpeed.
"url": "https://api.github.com/repos/huggingface/transformers/issues/28614/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/28614/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28613/comments | https://api.github.com/repos/huggingface/transformers/issues/28613/events | https://github.com/huggingface/transformers/issues/28613 | 2,091,937,176 | I_kwDOCUB6oc58sG2Y | 28,613 | Bug in GPT NeoX Implementation | {
"login": "andersonbcdefg",
"id": 17210823,
"node_id": "MDQ6VXNlcjE3MjEwODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17210823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonbcdefg",
"html_url": "https://github.com/andersonbcdefg",
"followers_url": "https://api.github.com/users/andersonbcdefg/followers",
"following_url": "https://api.github.com/users/andersonbcdefg/following{/other_user}",
"gists_url": "https://api.github.com/users/andersonbcdefg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersonbcdefg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersonbcdefg/subscriptions",
"organizations_url": "https://api.github.com/users/andersonbcdefg/orgs",
"repos_url": "https://api.github.com/users/andersonbcdefg/repos",
"events_url": "https://api.github.com/users/andersonbcdefg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersonbcdefg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-01-20T06:55:53 | 2024-01-22T14:50:02 | 2024-01-22T14:50:02 | NONE | null | ### System Info
GPT-Neo-X does not have a "q_proj" module, so the following lines that check for the dtype raise an error.
```
input_dtype = query.dtype
if input_dtype == torch.float32:
if torch.is_autocast_enabled():
target_dtype = torch.get_autocast_gpu_dtype()
# Handle the case where the model is quantized
elif hasattr(self.config, "_pre_quantization_dtype"):
target_dtype = self.config._pre_quantization_dtype
else:
target_dtype = self.q_proj.weight.dtype
```
This is in `transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py`
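For what it's worth, a minimal fix in line with how the other flash-attention classes handle this would be to read the dtype from the module GPT-NeoX actually has. This is a hedged sketch: it assumes the fused projection is named `query_key_value`, which is the attribute `GPTNeoXAttention` defines in place of separate `q_proj`/`k_proj`/`v_proj` linears.

```diff
         else:
-            target_dtype = self.q_proj.weight.dtype
+            # GPT-NeoX has no separate q_proj; Q/K/V share one fused linear
+            target_dtype = self.query_key_value.weight.dtype
```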
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Initialize a GPT-NeoX model with flash attention: `model = AutoModel.from_pretrained(model_name_or_path, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16)`
2. Try to run a forward pass. It fails with the following error: `AttributeError: 'GPTNeoXFlashAttention2' object has no attribute 'q_proj'`
### Expected behavior
The forward pass should work, or at least fail for a reason other than a reference to a module that the given model does not actually have.
"url": "https://api.github.com/repos/huggingface/transformers/issues/28613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28613/timeline | null | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28612/comments | https://api.github.com/repos/huggingface/transformers/issues/28612/events | https://github.com/huggingface/transformers/pull/28612 | 2,091,894,574 | PR_kwDOCUB6oc5knigy | 28,612 | Update README_es.md | {
"login": "vladydev3",
"id": 82735444,
"node_id": "MDQ6VXNlcjgyNzM1NDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/82735444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vladydev3",
"html_url": "https://github.com/vladydev3",
"followers_url": "https://api.github.com/users/vladydev3/followers",
"following_url": "https://api.github.com/users/vladydev3/following{/other_user}",
"gists_url": "https://api.github.com/users/vladydev3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vladydev3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vladydev3/subscriptions",
"organizations_url": "https://api.github.com/users/vladydev3/orgs",
"repos_url": "https://api.github.com/users/vladydev3/repos",
"events_url": "https://api.github.com/users/vladydev3/events{/privacy}",
"received_events_url": "https://api.github.com/users/vladydev3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-20T04:57:12 | 2024-01-23T21:09:02 | 2024-01-23T21:09:01 | CONTRIBUTOR | null | Fixing grammatical errors in the text | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28612/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28612",
"html_url": "https://github.com/huggingface/transformers/pull/28612",
"diff_url": "https://github.com/huggingface/transformers/pull/28612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28612.patch",
"merged_at": "2024-01-23T21:09:01"
} |
https://api.github.com/repos/huggingface/transformers/issues/28611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28611/comments | https://api.github.com/repos/huggingface/transformers/issues/28611/events | https://github.com/huggingface/transformers/issues/28611 | 2,091,330,691 | I_kwDOCUB6oc58pyyD | 28,611 | PatchTST and PatchTSMixer categorical features and exogenous variables | {
"login": "chrisconst2",
"id": 101289285,
"node_id": "U_kgDOBgmNRQ",
"avatar_url": "https://avatars.githubusercontent.com/u/101289285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisconst2",
"html_url": "https://github.com/chrisconst2",
"followers_url": "https://api.github.com/users/chrisconst2/followers",
"following_url": "https://api.github.com/users/chrisconst2/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisconst2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisconst2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisconst2/subscriptions",
"organizations_url": "https://api.github.com/users/chrisconst2/orgs",
"repos_url": "https://api.github.com/users/chrisconst2/repos",
"events_url": "https://api.github.com/users/chrisconst2/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisconst2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 6462336551,
... | open | false | null | [] | null | 2 | 2024-01-19T20:18:51 | 2024-01-22T16:08:51 | null | NONE | null | ### Feature request
Include categorical features and exogenous variables as inputs to the PatchTST and PatchTSMixer time series foundation models.
### Motivation
Categorical features and exogenous variables are key components of time series modelling.
### Your contribution
- | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28611/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28611/timeline | null | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28610/comments | https://api.github.com/repos/huggingface/transformers/issues/28610/events | https://github.com/huggingface/transformers/issues/28610 | 2,091,211,821 | I_kwDOCUB6oc58pVwt | 28,610 | ONNX export failure for models invoking SDPA attention | {
"login": "BowenBao",
"id": 9376104,
"node_id": "MDQ6VXNlcjkzNzYxMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9376104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BowenBao",
"html_url": "https://github.com/BowenBao",
"followers_url": "https://api.github.com/users/BowenBao/followers",
"following_url": "https://api.github.com/users/BowenBao/following{/other_user}",
"gists_url": "https://api.github.com/users/BowenBao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BowenBao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BowenBao/subscriptions",
"organizations_url": "https://api.github.com/users/BowenBao/orgs",
"repos_url": "https://api.github.com/users/BowenBao/repos",
"events_url": "https://api.github.com/users/BowenBao/events{/privacy}",
"received_events_url": "https://api.github.com/users/BowenBao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-19T19:25:25 | 2024-01-22T15:02:28 | null | CONTRIBUTOR | null | > ValueError: Attention using SDPA can not be traced with torch.jit.trace when no attention_mask is provided. To solve this issue, please either load your model with the argument `attn_implementation="eager"` or pass an attention_mask input when tracing the model.
There has been some discussion about its possible resolutions in the ONNX exporter team. I'd like to post an issue here as well to seek advice and preferences.
1. Check `torch.jit.is_tracing()` and fallback to eager attn implementation if needed.
2. Create `attention_mask` before passing to SDPA if it is None.
3. Support SDPA tracing w/o attention_mask (not sure how feasible this is). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28610/timeline | null | null | null | null |
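Resolutions 1 and 2 listed in the issue above could be sketched as follows (a pure-Python stand-in; a real implementation would live inside the model's attention forward and call `torch.jit.is_tracing()` and `torch.ones`, which are only simulated here):

```python
# Hypothetical sketch of two possible resolutions when attention_mask is
# None under torch.jit.trace:
#   strategy="fallback"  -> resolution 1: route to eager attention, which
#                           is always traceable.
#   strategy="make_mask" -> resolution 2: synthesize an explicit all-ones
#                           (fully visible) mask so SDPA never sees None.
def choose_attention(attention_mask, batch_size, seq_len, is_tracing,
                     strategy="make_mask"):
    if attention_mask is None and is_tracing:
        if strategy == "fallback":
            return "eager", None
        # All positions visible: behaviorally equivalent to mask=None.
        attention_mask = [[1] * seq_len for _ in range(batch_size)]
    return "sdpa", attention_mask
```

Outside tracing, `mask=None` is left untouched, since SDPA handles that case natively; the choice between the two strategies is a performance/complexity trade-off rather than a correctness one.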
https://api.github.com/repos/huggingface/transformers/issues/28609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28609/comments | https://api.github.com/repos/huggingface/transformers/issues/28609/events | https://github.com/huggingface/transformers/issues/28609 | 2,090,845,175 | I_kwDOCUB6oc58n8P3 | 28,609 | Code crashes without errors when importing Trainer in TPU context | {
"login": "samuele-bortolato",
"id": 81489249,
"node_id": "MDQ6VXNlcjgxNDg5MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/81489249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuele-bortolato",
"html_url": "https://github.com/samuele-bortolato",
"followers_url": "https://api.github.com/users/samuele-bortolato/followers",
"following_url": "https://api.github.com/users/samuele-bortolato/following{/other_user}",
"gists_url": "https://api.github.com/users/samuele-bortolato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuele-bortolato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuele-bortolato/subscriptions",
"organizations_url": "https://api.github.com/users/samuele-bortolato/orgs",
"repos_url": "https://api.github.com/users/samuele-bortolato/repos",
"events_url": "https://api.github.com/users/samuele-bortolato/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuele-bortolato/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-01-19T16:05:15 | 2024-01-24T04:22:19 | null | NONE | null | ### System Info
I'm working on Kaggle with TPU enabled (TPU VM v3-8), running !transformers-cli env returns
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:642] File already exists in database: tsl/profiler/protobuf/trace_events.proto
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1986] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
https://symbolize.stripped_domain/r/?trace=7a80dd07fd3c,7a80dd030fcf,5ab82e3a7b8f&map=
*** SIGABRT received by PID 367 (TID 367) on cpu 95 from PID 367; stack trace: ***
PC: @ 0x7a80dd07fd3c (unknown) (unknown)
@ 0x7a7f654bba19 928 (unknown)
@ 0x7a80dd030fd0 (unknown) (unknown)
@ 0x5ab82e3a7b90 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7a80dd07fd3c,7a7f654bba18,7a80dd030fcf,5ab82e3a7b8f&map=310b7ae7682f84c5c576a0b0030121f2:7a7f56a00000-7a7f656d11c0
E0119 15:49:22.169993 367 coredump_hook.cc:447] RAW: Remote crash data gathering hook invoked.
E0119 15:49:22.170011 367 client.cc:272] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0119 15:49:22.170016 367 coredump_hook.cc:542] RAW: Sending fingerprint to remote end.
E0119 15:49:22.170041 367 coredump_hook.cc:551] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] stat failed on crash reporting socket /var/google/services/logmanagerd/remote_coredump.socket (Is the listener running?): No such file or directory
E0119 15:49:22.170050 367 coredump_hook.cc:603] RAW: Dumping core locally.
E0119 15:50:17.482782 367 process_state.cc:808] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
Importing and printing manually
```
import torch_xla
print(torch_xla.__version__)
```
2.1.0+libtpu
```
import torch
print(torch.__version__)
```
2.1.0+cu121
```
import transformers
print(transformers.__version__)
```
4.36.2
### Who can help?
@muellerzr @stevhliu
I have been trying to port my code to TPU, but cannot manage to import the libraries.
In my code (written in PyTorch) I use the transformers library to load some pretrained LLMs, and I subclassed the Trainer class to train some custom models with RL.
The code works perfectly fine on GPU, but I can't get it to work on TPU: it keeps crashing without returning any error. Documentation on how to use TPUs with a torch backend in the transformers library is still missing (two years after the page was created at https://huggingface.co/docs/transformers/v4.21.3/en/perf_train_tpu), so I have no idea whether I skipped any necessary step.
While the code imports the transformers library without problems, the whole session crashes when I try to import the Trainer class.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch_xla
print(torch_xla.__version__)
import torch
print(torch.__version__)
import transformers
print(transformers.__version__)
from transformers import Trainer
```
output:
->2.1.0+libtpu
->2.1.0+cu121
->4.36.2
->(crash session without outputs)
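One way to debug a hard crash like this (a hypothetical diagnostic aid, not a fix for the underlying protobuf clash) is to run the failing import in a fresh subprocess, so the parent session survives and the exit code and stderr tail can be inspected; a negative exit code means the child died from a signal, e.g. -6 for the SIGABRT shown in the log above.

```python
import subprocess
import sys

def try_import(statement):
    """Run `statement` in a fresh interpreter; return (exit_code, stderr_tail)."""
    proc = subprocess.run(
        [sys.executable, "-c", statement],
        capture_output=True, text=True,
    )
    # A negative returncode means the child was killed by a signal
    # (e.g. -6 for SIGABRT, matching the "Aborted (core dumped)" above).
    return proc.returncode, proc.stderr[-2000:]

code, err = try_import("import torch_xla; from transformers import Trainer")
print("exit code:", code)
print(err)
```

This keeps the notebook kernel alive even when the import aborts the child process, which at least surfaces the protobuf error text instead of silently killing the session.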
### Expected behavior
It should either import the library or throw an error, not crash the whole session without a hint. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28609/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28609/timeline | null | null | null | null |