Update README.md
README.md CHANGED
@@ -188,9 +188,9 @@ Compared to Aleph Alpha Luminous Models
 Compared to Aleph Alpha Luminous Models, LeoLM and EM_German
 ![GPT4ALL diagram](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All.png "SauerkrautLM-7b-HerO GPT4ALL Diagram")

-![GPT4ALL table](
+![GPT4ALL table](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All-Tabelle.png "SauerkrautLM-7b-HerO GPT4ALL Table")
 ### Additional German Benchmark results:
-![GermanBenchmarks](
+![GermanBenchmarks](https://vago-solutions.de/wp-content/uploads/2023/11/German-benchmarks.png "SauerkrautLM-7b-HerO German Benchmarks")
 *performed with newest Language Model Evaluation Harness
 ## Disclaimer
 We must inform users that despite our best efforts in data cleansing, the possibility of some such content slipping through cannot be entirely ruled out.