---
license: llama3
---

# Introducing Pop

This model is a full fine-tuned version of meta-llama/Meta-Llama-3-8B on a diverse set of data.

## 💬 Prompt Template

You can use the ChatML prompt template with this model:

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
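
For example, a system message of "You are a helpful AI assistant." and a user message of "Hello!" (the same sample messages used below) render as the following prompt, where the trailing assistant header cues the model to begin its reply:

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```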

This prompt template is also available as a chat template, which means you can format messages using the tokenizer.apply_chat_template() method:

```python
# tokenizer and model are assumed to be loaded already (see the full sketch below)
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]

# add_generation_prompt=True appends the assistant header so the model starts its answer
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```
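
For reference, here is a minimal end-to-end sketch, assuming a recent transformers release; the model id below is a placeholder, so substitute this repository's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/Pop"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]

# Build the ChatML-formatted input and move it to the model's device
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(gen_input, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the ChatML special tokens
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```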