Add evaluation results
README.md CHANGED
@@ -23,6 +23,25 @@ Using open source datasets with Alpaca- and OpenOrca-style and generated synthe

[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.

+# **Evaluation Results**
+
+| Model | H6 | Model Size |
+|----------------------------------------|-----------|------------|
+| **SOLAR-10.7B-Instruct-v1.0**          | **74.20** | **~ 11B**  |
+| mistralai/Mixtral-8x7B-Instruct-v0.1   | 72.62     | ~ 46.7B    |
+| 01-ai/Yi-34B-200K                      | 70.81     | ~ 34B      |
+| 01-ai/Yi-34B                           | 69.42     | ~ 34B      |
+| mistralai/Mixtral-8x7B-v0.1            | 68.42     | ~ 46.7B    |
+| meta-llama/Llama-2-70b-hf              | 67.87     | ~ 70B      |
+| tiiuae/falcon-180B                     | 67.85     | ~ 180B     |
+| **SOLAR-10.7B-v1.0**                   | **66.04** | **~ 11B**  |
+| Qwen/Qwen-14B                          | 65.86     | ~ 14B      |
+| mistralai/Mistral-7B-Instruct-v0.2     | 65.71     | ~ 7B       |
+| 01-ai/Yi-34B-Chat                      | 65.32     | ~ 34B      |
+| meta-llama/Llama-2-70b-chat-hf         | 62.40     | ~ 70B      |
+| mistralai/Mistral-7B-v0.1              | 60.97     | ~ 7B       |
+| mistralai/Mistral-7B-Instruct-v0.1     | 54.96     | ~ 7B       |
+
# **Usage Instructions**
This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.
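
A note on the table above: "H6" is, to the best of my knowledge, the average of the six Open LLM Leaderboard benchmark scores (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K). The sketch below only spells out that assumed averaging; the function name and the example numbers are illustrative, not reported results.

```python
def h6_average(arc: float, hellaswag: float, mmlu: float,
               truthfulqa: float, winogrande: float, gsm8k: float) -> float:
    """Assumed definition of H6: the plain mean of the six benchmark scores
    used by the Open LLM Leaderboard, all on a 0-100 scale."""
    return (arc + hellaswag + mmlu + truthfulqa + winogrande + gsm8k) / 6

# Example with made-up placeholder scores, not values from the table above.
print(round(h6_average(60.0, 85.0, 65.0, 70.0, 83.0, 64.0), 2))
```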
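Given the single-turn guidance above, here is a minimal loading-and-generation sketch. It assumes the model is published on the Hugging Face Hub as `upstage/SOLAR-10.7B-Instruct-v1.0` and that its tokenizer ships a chat template; treat it as an illustration of single-turn use, not the card's official example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub id; adjust if the model lives under a different name.
model_id = "upstage/SOLAR-10.7B-Instruct-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~11B parameters: half precision to reduce memory
    device_map="auto",
)

# Single-turn prompt: one user message, no prior chat history.
messages = [{"role": "user", "content": "What is direct preference optimization?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, use_cache=True)
# Strip the prompt tokens and print only the newly generated answer.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Longer chat histories can be passed in `messages` the same way, but, per the note above, the model is tuned primarily for single-turn prompts.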