codelion committed
Commit e2c6bb0
1 Parent(s): 79f2f62

Update README.md

Files changed (1)
  1. README.md +16 -1
README.md CHANGED
@@ -1,3 +1,18 @@
  ---
  license: apache-2.0
- ---
+ ---
+
+ # Model mera-mix-4x7B
+
+ This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1),
+ while being comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.
+
+ mera-mix-4x7B achieves 76.37 on the OpenLLM eval, versus 72.7 for Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).
+
+ # OpenLLM Eval
+
+ | Model | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | Average |
+ |-------------------------------------------------------------|----:|--------:|----:|---------:|---------:|----:|------:|
+ |[mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B)|72.01| 88.82|63.67| 77.45| 84.61|71.65| 76.37|
+
+ Raw eval results are available in this [gist](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820).
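
The updated README positions mera-mix-4x7B as a drop-in replacement for Mixtral-8x7B. Below is a minimal sketch of what that swap looks like with the Hugging Face `transformers` API, using the `meraGPT/mera-mix-4x7B` repo linked above; the dtype, device, and generation settings are illustrative, not taken from the commit.

```python
# Minimal sketch: load mera-mix-4x7B where a Mixtral-8x7B model was used before.
# Assumes the standard transformers text-generation API; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"  # swap in for "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; choose a dtype your hardware supports
    device_map="auto",           # requires accelerate; shards the model across devices
)

prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```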