Matt committed on
Commit 50a0c3c
1 Parent(s): f82a8e1

Explain chat template in README

Files changed (1)
  1. README.md +28 -1
README.md CHANGED
@@ -64,6 +64,33 @@ We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/opena
 
 This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML.
 
+This formatting has also been set as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating),
+which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
+
+```python
+chat = [
+  {"role": "user", "content": "Hello, how are you?"},
+  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
+  {"role": "user", "content": "I'd like to show off how chat templating works!"},
+]
+tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+```
+
+which will yield:
+
+```
+<|im_start|>user
+Hello, how are you?<|im_end|>
+<|im_start|>assistant
+I'm doing great. How can I help you today?<|im_end|>
+<|im_start|>user
+I'd like to show off how chat templating works!<|im_end|>
+<|im_start|>assistant
+```
+
+If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized
+and formatted conversation ready to pass to `model.generate()`.
+
 ## Example Prompt Exchange
 
 ```
@@ -173,4 +200,4 @@ Commodity cost was ~$400.
 archivePrefix={arXiv},
 primaryClass={cs.AI}
 }
-```
+```
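For reference, the ChatML layout this template yields can be reproduced in a few lines of plain Python. This is an illustrative sketch only — `format_chatml` is a hypothetical helper, not part of the model or `transformers`, and the tokenizer's built-in `apply_chat_template()` remains the supported path:

```python
def format_chatml(chat, add_generation_prompt=False):
    """Render a list of {"role": ..., "content": ...} dicts in ChatML form.

    Mirrors the example output shown in the diff above: each message is
    wrapped in <|im_start|>role ... <|im_end|> delimiters.
    """
    text = ""
    for message in chat:
        text += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    if add_generation_prompt:
        # An open assistant turn cues the model to begin its reply.
        text += "<|im_start|>assistant\n"
    return text
```

Calling `format_chatml(chat, add_generation_prompt=True)` on the three-message conversation from the README should produce the same string as the `apply_chat_template(chat, tokenize=False, add_generation_prompt=True)` example above, assuming the model's chat template encodes exactly this ChatML layout.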