Add chat template

#2
by Rocketknight1 - opened
Files changed (2)
  1. README.md +15 -0
  2. tokenizer_config.json +1 -0
README.md CHANGED
@@ -205,6 +205,21 @@ Hello, who are you?<|im_end|>
  Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by a man named Teknium, who designed me to assist and support users with their needs and requests.<|im_end|>
  ```
 
+ This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
+ `tokenizer.apply_chat_template()` method:
+
+ ```python
+ messages = [
+     {"role": "system", "content": "You are Hermes 2."},
+     {"role": "user", "content": "Hello, who are you?"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+ model.generate(gen_input)
+ ```
+
+ When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
+ that the model continues with an assistant response.
+
  To utilize the prompt format without a system prompt, simply leave the line out.
 
  Currently, I recommend using LM Studio for chatting with Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
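The usage snippet in the README addition can be sanity-checked without downloading the model. Below is a minimal pure-Python re-implementation of the ChatML formatting this template performs (a sketch; `apply_chatml` is a hypothetical helper for illustration, not part of `transformers`):

```python
def apply_chatml(messages, add_generation_prompt=False):
    """Sketch of the ChatML formatting done by this chat template
    (hypothetical helper, not the transformers API)."""
    prompt = ""
    for m in messages:
        # Each message becomes one <|im_start|>role\ncontent<|im_end|> block.
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Mirrors add_generation_prompt=True: cue the model to answer next.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]
print(apply_chatml(messages, add_generation_prompt=True))
```

The trailing `<|im_start|>assistant\n` is exactly what `add_generation_prompt=True` appends, which is why generation then continues as an assistant turn.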
tokenizer_config.json CHANGED
@@ -49,6 +49,7 @@
  "</s>"
  ],
  "bos_token": "<s>",
+ "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "legacy": true,
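Because `chat_template` is an ordinary Jinja expression, it can be rendered directly with the `jinja2` package to see what it expands to (a sketch of what `transformers` does internally when applying the template; assumes `jinja2` is installed):

```python
from jinja2 import Template

# The chat_template string from tokenizer_config.json, after JSON
# unescaping turns each \n into a real newline inside the literals.
chat_template = (
    "{% for message in messages %}"
    "{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
prompt = Template(chat_template).render(messages=messages, add_generation_prompt=True)
print(prompt)
```

Each message renders as an `<|im_start|>role ... <|im_end|>` block, and the `add_generation_prompt` branch appends the assistant header so the model responds in the assistant role.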