Tags: Text Generation · Transformers · PyTorch · Safetensors · English · llama · conversational · Inference Endpoints · text-generation-inference
hamishivi committed on
Commit e6d72f0 • 1 Parent(s): 563c4f6

Update README.md

Files changed (1): README.md (+6 −23)
README.md CHANGED
@@ -39,31 +39,14 @@ For more details, read the paper: [Camels in a Changing Climate: Enhancing LM Ad
 
 ## Performance
 
-At the time of release, the Tulu-v2-dpo-70b model is approximately equal to GPT4 on AlpacaEval, and has a score of 7.89 on MT-Bench.
-All smaller DPO'd models have strong performance per model size in the category and with lower verbosity (average completion length).
 | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
 |-------------|-----|----|---------------|--------------|
-| **Tulu-v2-7b** 🐪 | **7B** | **dDPO** | **6.30** | **73.9** |
-| **Tulu-v2-dpo-7b** 🐪 | **7B** | **dDPO** | **6.29** | **85.1** |
-| StableLM-Tuned-α | 7B | dSFT | 2.75 | - |
-| MPT-Chat | 7B | dSFT | 5.42 | - |
-| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
-| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
-| Zephyr-7b-α | 7B | dDPO | 6.88 | - |
-| Zephyr-7b-β 🐪 | 7B | dDPO | 7.34 | 90.60 |
-| **Tulu-v2-13b** 🐪 | **13B** | **dDPO** | **6.70** | **78.9** |
-| **Tulu-v2-dpo-13b** 🐪 | **13B** | **dDPO** | **7.00** | **89.5** |
-| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
-| Guanaco | 65B | SFT | 6.41 | 71.80 |
-| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
-| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
-| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
-| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
-| **Tulu-v2-70b** 🐪 | **70B** | **dDPO** | **7.49** | **86.6** |
-| **Tulu-v2-dpo-70b** 🐪 | **70B** | **dDPO** | **7.89** | **95.1** |
-| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
-| Claude 2 | - | RLHF | 8.06 | 91.36 |
-| GPT-4 | - | RLHF | 8.99 | 95.28 |
+| **Tulu-v2-7b** 🐪 | **7B** | **SFT** | **6.30** | **73.9** |
+| **Tulu-v2-dpo-7b** 🐪 | **7B** | **DPO** | **6.29** | **85.1** |
+| **Tulu-v2-13b** 🐪 | **13B** | **SFT** | **6.70** | **78.9** |
+| **Tulu-v2-dpo-13b** 🐪 | **13B** | **DPO** | **7.00** | **89.5** |
+| **Tulu-v2-70b** 🐪 | **70B** | **SFT** | **7.49** | **86.6** |
+| **Tulu-v2-dpo-70b** 🐪 | **70B** | **DPO** | **7.89** | **95.1** |
 
 ## Input Format
 
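The "## Input Format" context line above refers to the Tulu chat template described later in the card. A minimal sketch of assembling such a prompt, assuming the `<|user|>`/`<|assistant|>` turn markers that Tulu models expect (the helper name `format_tulu_prompt` is illustrative, not part of the card):

```python
def format_tulu_prompt(messages):
    """Render a list of {"role", "content"} turns into a Tulu-style prompt.

    Each turn becomes "<|role|>\n<content>\n", and the prompt ends with
    "<|assistant|>\n" so the model continues as the assistant. The trailing
    newline after "<|assistant|>" matters for generation quality.
    """
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}\n")
    parts.append("<|assistant|>\n")  # open the assistant turn for the model
    return "".join(parts)


prompt = format_tulu_prompt([{"role": "user", "content": "What is DPO?"}])
print(prompt)
# <|user|>
# What is DPO?
# <|assistant|>
```

The rendered string can then be tokenized and passed to the model directly, or the equivalent template can be applied via the tokenizer's chat-templating support in `transformers`.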