InferenceIllusionist committed
Commit 5098851 (1 parent: d5f6c8b)

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 
 ## TeTO-MS-8x7b
 
-<b>Te</b>soro + <b>T</b>yphon + <b>O</b>penGPT
+<u><b>Te</b></u>soro + <u><b>T</b></u>yphon + <u><b>O</b></u>penGPT
 
 Presenting a Model Stock experiment combining the unique strengths from the following 8x7b Mixtral models:
 * Tess-2.0-Mixtral-8x7B-v0.2 / [migtissera](https://huggingface.co/migtissera) / General Purpose
@@ -63,7 +63,7 @@ dtype: float16
 
 ## Apendix - Llama.cpp MMLU Benchmark Results*
 
-<i>These results were calculated using perplexity.exe from llama.cpp using the following params:</i>
+<i>These results were calculated via perplexity.exe from llama.cpp using the following params:</i>
 
 `.\perplexity -m .\models\TeTO-8x7b-MS-v0.03\TeTO-MS-8x7b-Q6_K.gguf -bf .\evaluations\mmlu-test.bin --multiple-choice -c 8192 -t 23 -ngl 200`
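
For context on the merge described in the README hunks above: the `dtype: float16` hunk header indicates the README carries a mergekit YAML recipe for the Model Stock merge. Below is a minimal sketch of what such a config typically looks like, assuming mergekit's standard schema. Only Tess-2.0-Mixtral-8x7B-v0.2 is named in the visible hunk; the remaining model entries and the base model are hypothetical placeholders, not the repo's actual recipe.

```yaml
# Hypothetical sketch of a mergekit Model Stock recipe for an 8x7b Mixtral merge.
# Only Tess-2.0-Mixtral-8x7B-v0.2 appears in the diff above; the other entries
# and the base model are illustrative placeholders, not this repo's actual config.
models:
  - model: migtissera/Tess-2.0-Mixtral-8x7B-v0.2  # named in the README hunk
  - model: <second-finetune>                      # placeholder (the "Typhon" component)
  - model: <third-finetune>                       # placeholder (the "OpenGPT" component)
merge_method: model_stock
base_model: mistralai/Mixtral-8x7B-v0.1           # assumed common 8x7b base; an assumption
dtype: float16                                    # matches the hunk header context above
```

mergekit's `model_stock` method uses the base model as the anchor when averaging the finetuned checkpoints, which is why a `base_model` entry appears in the sketch.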