osanseviero and pcuenq committed on
Commit 5f7c116
1 Parent(s): 59aaab1

chat-prompt-detailed (#11)


- chat-prompt-detailed (7edbaf08e4b2a4c973c90a53402d0d05c7c0259f)


Co-authored-by: Pedro Cuenca <pcuenq@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +64 -1
README.md CHANGED
@@ -31,7 +31,7 @@ Install `transformers`
  pip install transformers accelerate
  ```

- **Chat use:** The 70B Instruct model uses a different prompt template than the smaller versions. To use it with `transformers`, we recommend you use the built-in chat template:
+ **Chat use:** The 70B Instruct model uses a [different prompt template](#chat_prompt) than the smaller versions. To use it with `transformers`, we recommend you use the built-in chat template:

  ```py
  from transformers import AutoTokenizer, AutoModelForCausalLM
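The rest of that example is not shown in this hunk. As a rough reference only, a minimal sketch of the recommended chat-template flow (assuming the `codellama/CodeLlama-70b-Instruct-hf` checkpoint and enough GPU memory; the names and settings below are illustrative, not taken from the README) could look like this:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

chat = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
# apply_chat_template builds the 70B Instruct prompt and returns the input ids as a tensor
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```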
@@ -86,6 +86,69 @@ for seq in sequences:
  print(f"Result: {seq['generated_text']}")
  ```

+ <a name="chat_prompt"></a>
+ ## Chat prompt
+
+ CodeLlama 70B Instruct uses a different format for the chat prompt than previous Llama 2 or CodeLlama models. As mentioned above, the easiest way to use it is with the help of the tokenizer's chat template. If you need to build the string or tokens manually, here's how to do it.
+
+ We'll do our tests with the following made-up dialog:
+
+ ```py
+ chat = [
+     {"role": "system", "content": "System prompt "},
+     {"role": "user", "content": "First user query"},
+     {"role": "assistant", "content": "Model response to first query"},
+     {"role": "user", "content": "Second user query"},
+ ]
+ ```
+
+ First, let's see what the prompt looks like if we use the chat template:
+
+ ```py
+ tokenizer.apply_chat_template(chat, tokenize=False)
+ ```
+
+ ```
+ '<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n '
+ ```
+
+ So each turn of the conversation has a `Source` (`system`, `user`, or `assistant`), and then the content appears after two newlines and a space. Turns are separated with the special token ` <step> `. After the last turn (which must necessarily come from the `user`), we invite the model to respond by using the special syntax `Source: assistant\nDestination: user\n\n `. Let's see how we can build the same string ourselves:
+
+ ```py
+ output = "<s>"
+ for m in chat:
+     output += f"Source: {m['role']}\n\n {m['content'].strip()}"
+     output += " <step> "
+ output += "Source: assistant\nDestination: user\n\n "
+ output
+ ```
+
+ ```
+ '<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n '
+ ```
+
+ To verify that we got it right, we'll compare against the [reference code in the original GitHub repo](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L506). We used the same dialog and tokenized it with the `dialog_prompt_tokens` function and got the following tokens:
+
+ ```py
+ reference_tokens = [1, 7562, 29901, 1788, 13, 13, 2184, 9508, 32015, 7562, 29901, 1404, 13, 13, 3824, 1404, 2346, 32015, 7562, 29901, 20255, 13, 13, 8125, 2933, 304, 937, 2346, 32015, 7562, 29901, 1404, 13, 13, 6440, 1404, 2346, 32015, 7562, 29901, 20255, 13, 14994, 3381, 29901, 1404, 13, 13, 29871]
+ ```
+
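If you want an extra sanity check before comparing token ids, you can also decode the reference tokens back to text (an illustrative step, not part of the original write-up):

```py
# Decoding should reproduce the prompt string, up to how the tokenizer renders
# the leading "<s>" and surrounding whitespace.
tokenizer.decode(reference_tokens)
```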
+ Let's see what we get with the string we built using our Python loop. Note that we don't add "special tokens" because the string already starts with `<s>`, the beginning-of-sentence token:
+
+ ```py
+ tokens = tokenizer.encode(output, add_special_tokens=False)
+ assert reference_tokens == tokens
+ ```
+
+ Similarly, let's verify that the chat template produces the same token sequence:
+
+ ```py
+ assert reference_tokens == tokenizer.apply_chat_template(chat)
+ ```
+
+ As a final detail, please note that if the dialog does not start with a `system` turn, the [original code will insert one with an empty content string](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L418).
+
+
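When preparing a dialog yourself, you could mirror that behavior with something like the following (a minimal sketch, assuming a `chat` list in the same format as above):

```py
# If the dialog has no leading system turn, prepend an empty one before applying
# the chat template or building the prompt string manually (illustrative sketch).
if chat[0]["role"] != "system":
    chat = [{"role": "system", "content": ""}] + chat
```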
  ## Model Details
  *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).