AndrewZeng committed f5900c7 (parent: 6d3119d): Update README.md

README.md CHANGED
@@ -25,23 +25,46 @@ Deita 7B V1.0 is a fine-tuned + DPO version of Mistral-7B-v0.1 that was trained
- **Model Family:** Other models and the dataset are found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Performance

<details>
  <summary>See full evaluations</summary>

| Model                                        | Align        | Data Size               | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) |
|----------------------------------------------|--------------|-------------------------|----------|---------------|----------------|
| **Proprietary Models**                       |              |                         |          |               |                |
| GPT-4-Turbo                                  | ?            | --                      | 9.32     | 97.70         | --             |
| GPT-4                                        | SFT + PPO    | --                      | 8.99     | 95.03         | --             |
| Claude-2                                     | SFT + PPO    | --                      | 8.06     | 91.36         | --             |
| GPT-3.5-turbo                                | SFT + PPO    | --                      | 7.94     | 89.37         | --             |
| **Open-sourced Models based on LLaMA-1-13B** |              |                         |          |               |                |
| LIMA                                         | SFT          | 1K SFT                  | 4.29     | 41.98         | 59.82          |
| WizardLM-13B                                 | SFT          | 70K SFT                 | 6.35     | 75.31         | 58.96          |
| Vicuna-13B-v1.3                              | SFT          | 125K SFT                | 6.39     | 82.11         | 60.01          |
| Random                                       | SFT          | 10K SFT                 | 6.03     | 71.52         | 60.14          |
| DEITA-LLaMA1-13B-v1.0-sft                    | SFT          | 10K SFT                 | 6.60     | 78.01         | 64.27          |
| **Open-sourced Models based on LLaMA-2-13B** |              |                         |          |               |                |
| Tulu-2-13B                                   | SFT          | 326K SFT                | 6.70     | 78.90         | --             |
| Tulu-2-13B+DPO                               | SFT + DPO    | 326K SFT + 60K DPO      | 7.00     | 89.50         | --             |
| LLaMA2-13B-Chat                              | SFT + PPO    | --                      | 6.65     | 81.09         | --             |
| WizardLM-13B-v1.2                            | SFT          | >70K SFT                | 7.09     | 89.17         | --             |
| Vicuna-13B-v1.5                              | SFT          | 125K SFT                | 6.57     | 78.80         | 61.63          |
| Random                                       | SFT          | 10K SFT                 | 5.78     | 65.19         | 61.32          |
| DEITA-LLaMA2-13B-v1.0-sft                    | SFT          | 10K SFT                 | 6.79     | 81.09         | 62.71          |
| **Open-sourced Models based on Mistral-7B**  |              |                         |          |               |                |
| Mistral-7B-Instruct-v0.1                     | --           | --                      | 6.84     | 69.65         | 60.45          |
| Zephyr-7B-sft                                | SFT          | 200K SFT                | 5.32     | 75.12         | 60.93          |
| $\text{Zephyr-7B-}\beta$                     | SFT + DPO    | 200K SFT + 60K DPO      | 7.34     | 90.60         | 66.36          |
| OpenChat-3.5                                 | C-RLFT       | >> 70K C-RLFT           | 7.81     | 88.51         | --             |
| Starling-7B                                  | C-RLFT + APA | >> 70K C-RLFT + 183K APA | 8.09    | 91.99         | --             |
| Random                                       | SFT          | 10K SFT                 | 5.89     | 56.90         | 61.72          |
| DEITA-7B-v1.0-sft (6K)                       | SFT          | 6K SFT                  | 7.22     | 80.78         | 64.94          |
| DEITA-7B-v1.0-sft (10K)                      | SFT          | 10K SFT                 | 7.32     | 81.67         | 64.00          |
| DEITA-7B-v1.0                                | SFT + DPO    | 6K SFT + 10K DPO        | 7.55     | 90.06         | 69.86          |

</details>

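For a quick local sanity check of the reported numbers, the checkpoint can be loaded with `transformers`. This is a minimal sketch, not part of the original card: the repository ID `hkust-nlp/deita-7b-v1.0` and the generation settings are assumptions, and in practice the raw prompt should be wrapped in the template described under Input Format below.

```python
# Minimal sketch (assumed repo ID and settings): load Deita 7B V1.0 and run one greedy generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-7b-v1.0"  # assumption; substitute the actual Deita 7B V1.0 repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mistral-7B-sized model fits on a single 24 GB GPU in bf16
    device_map="auto",
)

# Wrap the user message in the prompt template described in the Input Format section.
prompt = "What is instruction tuning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```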
## Input Format