codelion committed
Commit e0a5eb5
1 Parent(s): 7718e31

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -110,7 +110,7 @@ model-index:
  This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as the [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
  while being comparable to it across different benchmarks. You can use it as a drop-in replacement for your Mixtral-8x7B and get much faster inference.
 
- mera-mix-4x7B achieves a score of 76.37 on the Open LLM Eval and compares well with 72.7 by Mixtral-8x7B and 74.46 by Mixtral-8x22B.
+ mera-mix-4x7B achieves a score of 75.91 on the OpenLLM Eval and compares well with 72.7 by Mixtral-8x7B and 74.46 by Mixtral-8x22B.
 
  You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat).
 
@@ -137,6 +137,6 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
  |Winogrande (5-shot) |85.64|
  |GSM8k (5-shot) |66.11|
 
- In addition to the official Open LLM Leaderboard, the results on Open LLM Eval have been validated by [others as well (76.59)](https://github.com/saucam/model_evals/tree/main?tab=readme-ov-file#model-eval-results).
+ In addition to the official Open LLM Leaderboard, the results on OpenLLM Eval have been validated by [others as well (76.59)](https://github.com/saucam/model_evals/tree/main?tab=readme-ov-file#model-eval-results).
 
  Our own initial eval is available [here (76.37)](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820).
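
Since the README above positions mera-mix-4x7B as a drop-in replacement for Mixtral-8x7B, a minimal usage sketch with `transformers` might look like the following. The repo id `meraGPT/mera-mix-4x7B` is inferred from the org of the linked space and is an assumption, not something this diff confirms.

```python
# Minimal sketch (assumption, not from this commit): load mera-mix-4x7B
# in place of Mixtral-8x7B with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from the meraGPT org; verify before use.
model_id = "meraGPT/mera-mix-4x7B"  # was: "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 4 experts instead of 8 roughly halve the memory footprint
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model exposes the same causal-LM interface, only `model_id` changes relative to a Mixtral-8x7B setup; the rest of an existing pipeline should work unmodified.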