Wrong result when calling apply_chat_template with add_generation_prompt=False
#8 by Annorita - opened
According to the Hugging Face tutorial, add_generation_prompt should behave as described here:
https://huggingface.co/docs/transformers/main/chat_templating#what-are-generation-prompts
But it does not seem to take effect with the deepseek chat_template:
from transformers import AutoTokenizer

# Path to a local copy of deepseek-ai/deepseek-coder-6.7b-instruct
model_path = 'local weight of deepseek-ai/deepseek-coder-6.7b-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_path)
conversation = [
    {"role": "user", "content": "write a quick sort algorithm in python."},
    {"role": "assistant", "content": "Sure. I'll do that."},
]
res1 = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=False)
print(res1)
#"<|begin▁of▁sentence|>You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\n### Instruction:\nwrite a quick sort algorithm in python.\n### Response:\nSure. I'll do that.\n<|EOT|>\n### Response:\n"
res2 = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
print(res2)
#"<|begin▁of▁sentence|>You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\n### Instruction:\nwrite a quick sort algorithm in python.\n### Response:\nSure. I'll do that.\n<|EOT|>\n### Response:\n"
Expected result: res1 and res2 above are identical, but when calling with add_generation_prompt=False, the output should not contain the trailing ### Response:.
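For reference, this is how a chat template normally gates the generation prompt. The toy template below is a simplified sketch, not deepseek's actual template; it only illustrates the standard Jinja pattern that transformers documents:

# Toy template (NOT the real deepseek template) illustrating the standard
# Jinja pattern: the trailing "### Response:" is emitted only when
# add_generation_prompt is true.
toy_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "### Instruction:\n{{ message['content'] }}\n"
    "{% else %}"
    "### Response:\n{{ message['content'] }}\n<|EOT|>\n"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}### Response:\n{% endif %}"
)
tokenizer.chat_template = toy_template
print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=False))
# ends after <|EOT|>, with no trailing ### Response:
print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True))
# ends with ### Response: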
My transformers version:
import transformers
transformers.__version__
# '4.35.2'
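Until the template is fixed upstream, one possible workaround (a sketch that assumes the spurious suffix is exactly "### Response:\n", as in the output above) is to strip it manually:

# Workaround sketch: drop the generation prompt when it was not requested.
suffix = "### Response:\n"
res1 = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=False)
if res1.endswith(suffix):
    res1 = res1[: -len(suffix)]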
We have fixed it.
zqh11 changed discussion status to closed