Why does the output only contain "\n"?

#20 by miaoyl - opened

This is my code:

```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image", "image": image},
        ],
    },
]
chat_template = '<|im_start|>user\n\n<|im_end|><|im_start|>assistant\n"'
processed_prompt = processor.apply_chat_template(
    conversation, add_generation_prompt=True, chat_template=chat_template
)
print(processed_prompt)
inputs = processor(prompt, return_tensors="pt", truncation=True).to("cuda")
# autoregressively complete the prompt
output = processor.decode(
    model.generate(**inputs, max_new_tokens=max_new)[0], skip_special_tokens=True
)
```

And the output contains only "\n".

Llava Hugging Face org

Hey!

The chat template didn't format correctly in your post, so I can't see it properly. But as a general rule, we recommend using the model's own chat template instead of writing your own. Just update transformers to at least v4.43 and call apply_chat_template as in the demo code.
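For reference, here is a minimal sketch of that flow, modeled on the demo code; the checkpoint id, image path, and prompt below are illustrative placeholders, not values from this thread:

```python
# Minimal sketch assuming a llava-hf checkpoint and transformers >= 4.43;
# model_id, image path, and prompt are placeholders.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")        # placeholder image
prompt = "What is shown in this image?"  # placeholder prompt

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image"},
        ],
    },
]

# No chat_template argument: the processor uses the template
# shipped with the checkpoint, which places the image tokens correctly.
processed_prompt = processor.apply_chat_template(
    conversation, add_generation_prompt=True
)

# Pass BOTH the formatted text and the image to the processor.
inputs = processor(images=image, text=processed_prompt, return_tensors="pt").to(
    "cuda", torch.float16
)

output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Note that the processor call receives the processed prompt together with the image; in the code above, the raw prompt was passed and the image was dropped, which alone can produce empty generations.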

The cause here is probably the incorrect format of the custom template. Also, please check the padding side with processor.tokenizer.padding_side; it should be "left".
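A quick way to check and fix that:

```python
# Decoder-only generation expects left padding; "right" padding
# can corrupt the generated continuation.
print(processor.tokenizer.padding_side)

# If it prints "right", switch it before building batched inputs:
processor.tokenizer.padding_side = "left"
```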

