Add chat_template to tokenizer_config.json (#23)
Mosaic ML, Inc. org · edited Jan 19
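
This PR adds a chat_template entry to tokenizer_config.json so that tokenizer.apply_chat_template renders conversations in ChatML format, inserting a default system prompt when none is supplied. For reference, a Jinja template along the following lines reproduces the output shown below; this is an illustrative sketch, not necessarily the exact string merged here:

# Illustrative sketch only: a ChatML-style template consistent with the test
# output below; the exact string merged in this PR may differ.
chat_template = (
    "{% if messages[0]['role'] != 'system' %}"
    "{{ '<|im_start|>system\nA conversation between a user and an LLM-based AI "
    "assistant. The assistant gives helpful and honest answers.\n' }}"
    "{% endif %}"
    "{% for message in messages %}"
    "{% if message['role'] == 'system' %}"
    "{{ '<|im_start|>system\n' + message['content'] + '\n' }}"
    "{% else %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endif %}"
    "{% endfor %}"
)
tokenizer.chat_template = chat_template  # written to tokenizer_config.json by save_pretrained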

Manually tested with:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b-chat', revision='refs/pr/23')

chat = [
    {"role": "system", "content": "This is a system prompt!"},
   {"role": "user", "content": "Hello, how are you?"},
   {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
   {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

print(tokenizer.apply_chat_template(chat, tokenize=False))

# Drop the system message so the template falls back to its default system prompt
chat = chat[1:]

print("\nUsing default system prompt!\n")

print(tokenizer.apply_chat_template(chat, tokenize=False))

Output:

<|im_start|>system
This is a system prompt!
<|im_start|>user
Hello, how are you?<|im_end|>
<|im_start|>assistant
I'm doing great. How can I help you today?<|im_end|>
<|im_start|>user
I'd like to show off how chat templating works!<|im_end|>

Using default system prompt!

<|im_start|>system
A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.
<|im_start|>user
Hello, how are you?<|im_end|>
<|im_start|>assistant
I'm doing great. How can I help you today?<|im_end|>
<|im_start|>user
I'd like to show off how chat templating works!<|im_end|>
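
Once the template is in place it can also drive generation end to end. A minimal sketch (not part of this PR's test), assuming the merged template defines an add_generation_prompt branch that appends the assistant header:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b-chat', revision='refs/pr/23')
model = AutoModelForCausalLM.from_pretrained('mosaicml/mpt-30b-chat', trust_remote_code=True)

chat = [{"role": "user", "content": "Hello, how are you?"}]

# tokenize=True (the default) plus return_tensors='pt' yields input ids directly;
# add_generation_prompt=True asks the template to append '<|im_start|>assistant\n'
# so the model continues as the assistant (assuming the template defines that branch).
input_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids, max_new_tokens=64)

# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))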
Mosaic ML, Inc. org

LGTM!

sam-mosaic changed pull request status to merged
