Xenova (HF staff) committed on
Commit 8a993fd
1 Parent(s): f40c8a6

Add default chat template to tokenizer_config.json


[Automated] This PR adds the default chat template to the tokenizer config, allowing the model to be used with the new conversational widget (see [PR](https://github.com/huggingface/huggingface.js/pull/457)).

If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
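For example, a custom template can be assigned in Python and saved back to the repository. This is a minimal sketch, not part of this commit; the repository id and local path are placeholders.

```python
# Minimal sketch (not part of this commit) of overriding the default template.
# "your-org/your-model" and "./your-model" are placeholder names.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# Any Jinja template over `messages` works; this toy example just joins turns.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] + ': ' + message['content'] + '\n' }}"
    "{% endfor %}"
)

tokenizer.save_pretrained("./your-model")  # or tokenizer.push_to_hub("your-org/your-model")
```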

Files changed (1)
  1. tokenizer_config.json +3 -2
tokenizer_config.json CHANGED

@@ -5,5 +5,6 @@
   "model_max_length": 2048,
   "pad_token": "<|endoftext|>",
   "tokenizer_class": "PreTrainedTokenizerFast",
- "unk_token": "<|endoftext|>"
- }
+ "unk_token": "<|endoftext|>",
+ "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
+ }
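As a usage note (not part of the diff), the added template follows the ChatML format. A sketch of how it renders, assuming a placeholder repository id:

```python
# Sketch of rendering the added ChatML-style template; "your-org/your-model"
# is a placeholder for the repository this commit belongs to.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
# Hi, how can I help?<|im_end|>
# <|im_start|>assistant
```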