Changed "tokenizer" typo so the snippet uses the tokenizer we create.
README.md
@@ -130,7 +130,7 @@ chat = [
     {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
     {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
 ]
-x =
+x = tok.apply_chat_template(chat, tokenize=True, return_tensors="pt", add_generation_prompt=True).cuda()
 x = model.generate(x, max_new_tokens=128).cpu()
 print(tok.batch_decode(x))
 ```
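The added line delegates prompt construction to the tokenizer's built-in chat template. For intuition, here is a minimal, self-contained sketch of what `tok.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)` would render, assuming a ChatML-style template (the actual template is defined by the DiscoLM tokenizer; the `render_chatml` helper below is hypothetical, not part of the diff):

```python
def render_chatml(chat, add_generation_prompt=True):
    # Approximate a ChatML-style chat template: each message is wrapped in
    # <|im_start|>{role} ... <|im_end|> markers, one message per block.
    out = ""
    for msg in chat:
        out += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # An open assistant header cues the model to generate its reply next,
        # which is what add_generation_prompt=True does in the real API.
        out += "<|im_start|>assistant\n"
    return out

chat = [
    {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
    {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"},
]
prompt = render_chatml(chat)
```

In the README snippet itself, `tokenize=True` and `return_tensors="pt"` skip the string stage and return input ids directly, ready to pass to `model.generate`.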