AndrewZeng committed
Commit: 2a209b2
Parent: 7fb2ebc

Update README.md

Browse files
Files changed (1)
  1. README.md +13 -1
README.md CHANGED
@@ -25,7 +25,19 @@
 - **Model Family:** Other models and the dataset are found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

 ## Performance
-
+| Model                                        | Align     | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) |
+|----------------------------------------------|-----------|-----------|----------|---------------|----------------|
+| **Proprietary Models**                       |           |           |          |               |                |
+| GPT-4-Turbo                                  | ?         | --        | 9.32     | 97.70         | --             |
+| GPT-4                                        | SFT + PPO | --        | 8.99     | 95.03         | --             |
+| Claude-2                                     | SFT + PPO | --        | 8.06     | 91.36         | --             |
+| GPT-3.5-turbo                                | SFT + PPO | --        | 7.94     | 89.37         | --             |
+| **Open-sourced Models based on LLaMA-1-13B** |           |           |          |               |                |
+| LIMA                                         | SFT       | 1K SFT    | 4.29     | 41.98         | 59.82          |
+| WizardLM-13B                                 | SFT       | 70K SFT   | 6.35     | 75.31         | 58.96          |
+| Vicuna-13B-v1.3                              | SFT       | 125K SFT  | 6.39     | 82.11         | 60.01          |
+| Random                                       | SFT       | 10K SFT   | 6.03     | 71.52         | 60.14          |
+| DEITA-LLaMA1-13B-v1.0-sft                    | SFT       | 10K SFT   | 6.60     | 78.01         | 64.27          |

 ## Input Format