---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---

# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">

# Model Description
Quyen is our first flagship LLM series, built on the Qwen1.5 family. We are introducing six versions:

- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**

All models were trained with SFT and DPO on the following datasets:

- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *distilabel-capybara-dpo-7k-binarized* by **Argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
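
For reference, a DPO stage like the one described can be approximated with TRL's `DPOTrainer`. The sketch below is an illustration only: the base checkpoint, `beta`, and batch size are placeholders, not our actual training configuration.

```python
# Illustrative DPO pass with TRL; hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Qwen/Qwen1.5-4B"  # any Qwen1.5 size from the list above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Intel/orca_dpo_pairs ships `system`/`question`/`chosen`/`rejected` columns;
# DPOTrainer expects `prompt`/`chosen`/`rejected`.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt").remove_columns(["system"])

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL keeps a frozen copy of the policy as the reference
    args=TrainingArguments(output_dir="quyen-dpo", per_device_train_batch_size=1),
    beta=0.1,  # placeholder value; not published on this card
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```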

# Prompt Template
- All Quyen models use ChatML as the default template:

```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```

- You can also use `apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/Quyen"  # substitute the Quyen variant you want to run
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."}
]
# add_generation_prompt appends the trailing `<|im_start|>assistant` turn
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input, max_new_tokens=256)
```

# Benchmarks

- Coming soon! We will update this section with benchmark results.

# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for granting us early access to the models for these finetunes.