"The attention mask and the pad token id were not set" Warning

#12 · opened by brd1436

Hello,
I'm running the model like this:

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
print("ready")

text = "a cute alpaca"


# generate() returns token ids; decode them back into text
output_ids = model.generate(tokenizer.encode(text, return_tensors="pt"))
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)

It works, but I get this red warning:

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
/usr/local/lib/python3.9/site-packages/transformers/generation/utils.py:1258: UserWarning: Using the model-agnostic default max_length (=20) to control the generation length. We recommend setting max_new_tokens to control the maximum length of the generation.

Is there a way to get rid of it? Am I running the model the right way?
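
From reading the warning text, my guess is that generate() wants the attention_mask from the tokenizer, an explicit pad_token_id, and max_new_tokens instead of the default max_length. Something like this, maybe (untested sketch; max_new_tokens=50 is just a number I picked):

# tokenizer(...) returns a dict with both input_ids and attention_mask
inputs = tokenizer(text, return_tensors="pt")

output_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # should address the attention-mask warning
    pad_token_id=tokenizer.eos_token_id,      # model has no pad token, so reuse EOS (50256), as the warning does
    max_new_tokens=50,                        # arbitrary value; addresses the max_length warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Is that the right fix, or am I missing something?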
