Update README.md

README.md · CHANGED
@@ -34,10 +34,9 @@ WestSeverus-7B-DPO-v2 can be used in mathematics, chemical, physics and even cod
     - HumanEval_Plus
     - MBPP
     - MBPP_Plus
-4. [Prompt Format](
-5. [
-6. [
-7. [Gratitude](#Gratitude)
+4. [Prompt Format](#⚙️-prompt-format)
+5. [Quantized Models](#🛠️-quantized-models)
+6. [Gratitude](#🙏-gratitude)
 
 ## 💪 Nous Benchmark Results
 
@@ -80,13 +79,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 
 ![image/png]()
 
-## 
-
-
-
-
-
-
+## ⚙️ Prompt Format
+
+WestSeverus-7B-DPO-v2 was trained using the ChatML prompt template with system prompts. An example follows:
+
+```
+<|im_start|>system
+{system_message}<|im_end|>
+<|im_start|>user
+{prompt}<|im_end|>
+<|im_start|>assistant
+```
 
 ## 🛠️ Quantized Models
 
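The ChatML layout added in this hunk can be assembled with a small helper. A minimal sketch (the function name and example messages are illustrative, not part of the README; tokenizers that ship a ChatML chat template produce the same string via `apply_chat_template`):

```python
# Minimal sketch: build the ChatML prompt exactly as laid out in the diff above.
# The helper name and example messages are illustrative only.

def build_chatml_prompt(system_message: str, prompt: str) -> str:
    """Fill the ChatML template with a system message and a user prompt."""
    return (
        "<|im_start|>system\n"
        f"{system_message}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = build_chatml_prompt("You are a helpful assistant.", "What is the capital of France?")
print(text)
```

Generation is typically stopped on `<|im_end|>` so the assistant turn ends cleanly.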
@@ -99,7 +102,11 @@ TBD.
 * **GPTQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GPTQ
 * **AWQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-AWQ
 
-
-
-
-
+## 🙏 Gratitude
+
+* Thanks to @senseable for [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2).
+* Thanks to @jondurbin for the [jondurbin/truthy-dpo-v0.1 dataset](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
+* Thanks to @Charles Goddard for MergeKit.
+* Thanks to @TheBloke and @s3nh for the quantized models.
+* Thanks to @mlabonne and @CultriX for YALL - Yet Another LLM Leaderboard.
+* Thanks to everyone else in the open-source AI community who has used this model for further research and improvement.
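The GPTQ and AWQ repositories linked above load through the standard `transformers` API. A minimal sketch, assuming the matching quantization backend (`auto-gptq` or `autoawq`) and a GPU are available; only the repo ids come from the README, everything else is illustrative:

```python
# Sketch: pick one of the quantized repos named in the README and load it.
# Requires transformers plus the matching quantization backend (auto-gptq for
# GPTQ, autoawq for AWQ) and a GPU, so the download/load is kept inside a
# function instead of running at import time.

QUANT_REPOS = {
    "gptq": "TheBloke/WestSeverus-7B-DPO-GPTQ",  # repo ids from the README
    "awq": "TheBloke/WestSeverus-7B-DPO-AWQ",
}

def load_quantized(fmt: str):
    """Load the tokenizer and quantized weights for the given format."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = QUANT_REPOS[fmt]  # raises KeyError for unsupported formats
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
    return tokenizer, model
```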