---
license: cc-by-4.0
datasets:
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
language:
- en
tags:
- xDAN-AI
- OpenOrca
- DPO
- Self-Think
---

<div style="display: flex; justify-content: center; align-items: center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/tVAcwKkIH5vkfzqgqHeHi.png" style="width: 45%;">
</div>




<p align="center">
  <a href="https://www.xdan.ai">xDAN-AI: The TOP1 MT-Bench Model</a> •
  <a href="https://discord.gg/7NrMX5AK">Discord</a> •
  <a href="https://twitter.com/shootime007">Twitter</a> •
  <a href="https://huggingface.co/xDAN-AI">Huggingface</a>
</p>


![image/png](https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/QANDZApzpTHM6sBsjmdew.png)

### First turn
| model              | turn | score    |
|--------------------|------|----------|
| gpt-4              | 1    | 8.95625  |
| xDAN-L1-Chat-RL-v1 | 1    | 8.87500  |
| claude-v1          | 1    | 8.15000  |
| gpt-3.5-turbo      | 1    | 8.07500  |
| claude-instant-v1  | 1    | 7.80000  |
| vicuna-33b-v1.3    | 1    | 7.45625  |
| wizardlm-30b       | 1    | 7.13125  |
| oasst-sft-7-llama-30b | 1 | 7.10625  |
| Llama-2-70b-chat   | 1    | 6.98750  |

### Second turn
| model              | turn | score     |
|--------------------|------|-----------|
| gpt-4              | 2    | 9.025000  |
| claude-instant-v1  | 2    | 8.012658  |
| xDAN-L1-Chat-RL-v1 | 2    | 7.825000  |
| gpt-3.5-turbo      | 2    | 7.812500  |
| claude-v1          | 2    | 7.650000  |
| wizardlm-30b       | 2    | 6.887500  |
| vicuna-33b-v1.3    | 2    | 6.787500  |
| Llama-2-70b-chat   | 2    | 6.725000  |


### Average
| model              | score     |
|--------------------|-----------|
| gpt-4              | 8.990625  |
| xDAN-L1-Chat-RL-v1 | 8.350000  |
| gpt-3.5-turbo      | 7.943750  |
| claude-instant-v1  | 7.905660  |
| claude-v1          | 7.900000  |
| vicuna-33b-v1.3    | 7.121875  |
| wizardlm-30b       | 7.009375  |
| Llama-2-70b-chat   | 6.856250  |
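The "Average" column above is simply the mean of a model's first- and second-turn MT-Bench scores. A minimal sketch that reproduces two rows from the tables above (scores copied verbatim; this is an illustration, not part of the evaluation code):

```python
# First- and second-turn MT-Bench scores, taken from the tables above.
first_turn = {"gpt-4": 8.95625, "xDAN-L1-Chat-RL-v1": 8.875}
second_turn = {"gpt-4": 9.025, "xDAN-L1-Chat-RL-v1": 7.825}

# Average of the two turns, rounded to the 6 decimals shown in the table.
average = {m: round((first_turn[m] + second_turn[m]) / 2, 6) for m in first_turn}
print(average)  # gpt-4: 8.990625, xDAN-L1-Chat-RL-v1: 8.35
```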



## Prompt Template (Alpaca)

```
### Instruction:
{instruction}
### Response:
```
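The template can be applied programmatically before generation. A minimal sketch; the string-building part is just the template above, while the commented `transformers` usage is an assumption (the repo id `xDAN-AI/xDAN-L1-Chat-RL-v1` is inferred from the model name and not verified here):

```python
# Build an Alpaca-style prompt string matching the template above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Summarize MT-Bench in one sentence.")
print(prompt)

# Generation with transformers would then look like this (repo id assumed,
# not executed here):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("xDAN-AI/xDAN-L1-Chat-RL-v1")
# model = AutoModelForCausalLM.from_pretrained("xDAN-AI/xDAN-L1-Chat-RL-v1")
# ids = tok(prompt, return_tensors="pt")
# out = model.generate(**ids, max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```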


## Created by xDAN-AI on 2023-12-15
## Website: https://www.xdan.ai
 

## Disclaimer

We apply data-compliance checking algorithms during training to strive for the highest degree of compliance. However, given the intricate nature of the data and the wide range of potential usage scenarios, we cannot guarantee that the model will always generate correct and reasonable outputs. Users should be aware of the risk of problematic outputs. Our organization accepts no responsibility for risks or issues arising from misuse, misguidance, illegal use, or related misinformation, nor for any consequent data-security concerns.

## About xDAN-AI

xDAN-AI is a leading high-performance model factory. For detailed information and further insights into our technology and offerings, please visit our website: https://www.xdan.ai.