Update README.md
README.md (changed)

```diff
@@ -186,7 +186,7 @@ Compared to Aleph Alpha Luminous Models
 *performed with newest Language Model Evaluation Harness
 ### GPT4ALL:
 Compared to Aleph Alpha Luminous Models, LeoLM and EM_German
-![GPT4ALL diagram](
+![GPT4ALL diagram](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All.png "SauerkrautLM-7b-HerO GPT4ALL Diagram")
 
 ![GPT4ALL table](images/gpt4alltable.PNG "SauerkrautLM-7b-HerO GPT4ALL Table")
 ### Additional German Benchmark results:
```