qnguyen3 committed on
Commit 67dd267
1 Parent(s): 6533db4

Upload 2 files

Files changed (2)
  1. README.md +61 -0
  2. quyen.webp +0 -0
README.md ADDED
@@ -0,0 +1,61 @@
---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
---

# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">

# Model Description
Quyen is our first flagship LLM series, based on the Qwen1.5 family. We introduce six versions:

- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**

All models were trained with SFT and DPO on the following datasets:

- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *distilabel-intel-orca-dpo-pairs* by **argilla**
- *orca_dpo_pairs* by **Intel**
- private data from **Ontocord** & **BEE-spoke-data**

# Prompt Template
- All Quyen models use ChatML as the default template:

```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```

- You can also use `apply_chat_template`:

```python
# Assumes `tokenizer` and `model` have already been loaded from a Quyen checkpoint.
messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```

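- For a full run, here is a minimal end-to-end sketch using the Transformers API; the checkpoint path below is a placeholder, so substitute the repository id or local path of the Quyen model you want to use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: set this to the Quyen checkpoint's repository id or local path.
model_id = "path/to/quyen-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."},
]

# Build the ChatML prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
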
# Benchmarks

- Coming soon! We will update the benchmarks later.

# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
quyen.webp ADDED