DavidGF committed on
Commit
9b73955
1 Parent(s): ff76d43

Update README.md

Files changed (1)
  1. README.md +0 -88
README.md CHANGED
@@ -87,95 +87,7 @@ Bitte erkläre mir, wie die Zusammenführung von Modellen durch bestehende Spitz
  <|im_start|>assistant
  ```
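The context above shows the tail of the README's ChatML-style prompt template (the `<|im_start|>` / `<|im_end|>` markers). As a minimal sketch, not part of this commit, assuming the model is published as `VAGOsolutions/SauerkrautLM-7b-HerO` on Hugging Face, a prompt in that format could be run with `transformers` roughly like this (model id, example prompts, and generation settings are assumptions, not taken from the README):

```python
# Minimal sketch, not part of this commit; model id and prompts are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-7b-HerO"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML-style prompt, ending with the <|im_start|>assistant marker shown above
# so the model continues with the assistant reply.
prompt = (
    "<|im_start|>system\n"
    "Du bist ein hilfreicher Assistent.<|im_end|>\n"  # example system prompt (assumption)
    "<|im_start|>user\n"
    "Was ist MT-Bench?<|im_end|>\n"                   # example user question (assumption)
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```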
  ## Evaluation
- ### MT-Bench (German):
- ![MT-Bench German Diagram](https://vago-solutions.de/wp-content/uploads/2023/11/MT-Bench-German.png "SauerkrautLM-7b-HerO MT-Bench German Diagram")
- ```
- ########## First turn ##########
-                                     score
- model                          turn
- SauerkrautLM-70b-v1            1     7.25000
- SauerkrautLM-7b-HerO <---      1     6.96875
- SauerkrautLM-7b-v1-mistral     1     6.30625
- leo-hessianai-13b-chat         1     6.18750
- SauerkrautLM-13b-v1            1     6.16250
- leo-mistral-hessianai-7b-chat  1     6.15625
- Llama-2-70b-chat-hf            1     6.03750
- vicuna-13b-v1.5                1     5.80000
- SauerkrautLM-7b-v1             1     5.65000
- leo-hessianai-7b-chat          1     5.52500
- vicuna-7b-v1.5                 1     5.42500
- Mistral-7B-v0.1                1     5.37500
- SauerkrautLM-3b-v1             1     3.17500
- Llama-2-7b                     1     1.28750
- open_llama_3b_v2               1     1.68750
-
- ########## Second turn ##########
-                                     score
- model                          turn
- SauerkrautLM-70b-v1            2     6.83125
- SauerkrautLM-7b-HerO <---      2     6.30625
- vicuna-13b-v1.5                2     5.63125
- SauerkrautLM-13b-v1            2     5.34375
- SauerkrautLM-7b-v1-mistral     2     5.26250
- leo-mistral-hessianai-7b-chat  2     4.99375
- SauerkrautLM-7b-v1             2     4.73750
- leo-hessianai-13b-chat         2     4.71250
- vicuna-7b-v1.5                 2     4.67500
- Llama-2-70b-chat-hf            2     4.66250
- Mistral-7B-v0.1                2     4.53750
- leo-hessianai-7b-chat          2     2.65000
- SauerkrautLM-3b-v1             2     1.98750
- open_llama_3b_v2               2     1.22500
- Llama-2-7b                     2     1.07500
-
- ########## Average ##########
-                                   score
- model
- SauerkrautLM-70b-v1            7.040625
- SauerkrautLM-7b-HerO <---      6.637500
- SauerkrautLM-7b-v1-mistral     5.784375
- SauerkrautLM-13b-v1            5.753125
- vicuna-13b-v1.5                5.715625
- leo-mistral-hessianai-7b-chat  5.575000
- leo-hessianai-13b-chat         5.450000
- Llama-2-70b-chat-hf            5.350000
- SauerkrautLM-v1-7b             5.193750
- vicuna-7b-v1.5                 5.050000
- Mistral-7B-v0.1                4.956250
- leo-hessianai-7b-chat          4.087500
- SauerkrautLM-3b-v1             2.581250
- open_llama_3b_v2               1.456250
- Llama-2-7b                     1.181250
- ```
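For readers scanning these numbers: the "Average" block is simply the per-model mean of the turn-1 and turn-2 scores above. An illustrative check, not part of this commit and not the actual evaluation code, using two rows from the German table:

```python
# Illustrative only: shows how the per-turn MT-Bench scores above collapse
# into the "Average" block (mean of turn 1 and turn 2 per model).
import pandas as pd

scores = pd.DataFrame(
    [
        ("SauerkrautLM-70b-v1", 1, 7.25000),
        ("SauerkrautLM-70b-v1", 2, 6.83125),
        ("SauerkrautLM-7b-HerO", 1, 6.96875),
        ("SauerkrautLM-7b-HerO", 2, 6.30625),
    ],
    columns=["model", "turn", "score"],
)

# Per-turn view, matching the "First turn" / "Second turn" blocks.
print(scores.set_index(["model", "turn"]))

# Mean over both turns, matching the "Average" block:
# SauerkrautLM-70b-v1 -> 7.040625, SauerkrautLM-7b-HerO -> 6.637500
print(scores.groupby("model")["score"].mean())
```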
-
-
- ### MT-Bench (English):
- ![MT-Bench English Diagram](https://vago-solutions.de/wp-content/uploads/2023/11/MT-Bench-Englisch.png "SauerkrautLM-7b-HerO MT-Bench English Diagram")
- ```
- ########## First turn ##########
-                                     score
- model                          turn
- OpenHermes-2.5-Mistral-7B      1     8.21875
- SauerkrautLM-7b-HerO <---      1     8.03125
- Mistral-7B-OpenOrca            1     7.65625
- neural-chat-7b-v3-1            1     7.22500

- ########## Second turn ##########
-                                     score
- model                          turn
- OpenHermes-2.5-Mistral-7B      2     7.1000
- SauerkrautLM-7b-HerO <---      2     6.7875
- neural-chat-7b-v3-1            2     6.4000
- Mistral-7B-OpenOrca            2     6.1750
-
- ########## Average ##########
-                                   score
- model
- OpenHermes-2.5-Mistral-7B      7.659375
- SauerkrautLM-7b-HerO <---      7.409375
- Mistral-7B-OpenOrca            6.915625
- neural-chat-7b-v3-1            6.812500
- ```
  ### GPT4ALL:
  Compared to Aleph Alpha Luminous Models, LeoLM and EM_German
  ![GPT4ALL diagram](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All.png "SauerkrautLM-7b-HerO GPT4ALL Diagram")
 