Kquant03 committed
Commit 4606a39 · verified · 1 Parent(s): 6e7b75c

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -25,7 +25,9 @@ The config looks like this...(detailed version is in the files and versions):
  - [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #3
  - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #4
 
- # I just now uploaded it to Open LLM Evaluations, just to see how it will do.
+ # It manages to beat Buttercup-4x7B in MMLU, and I personally think it's on par with it, if not better.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/hQ44cGgs0cSf-sIv8Xk01.png)
 
  # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
  ### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
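
To make the "config looks like this" note in the hunk header concrete, here is a minimal sketch (not part of the commit) for inspecting a Mixtral-style MoE config and reading off the expert count and per-token routing. The repository id below is a hypothetical placeholder, not the actual model repo.

```python
# Sketch only: assumes the merged model exposes a Mixtral-style config.
from transformers import AutoConfig

repo_id = "Kquant03/Example-4x7B"  # hypothetical placeholder; substitute the real repo id
cfg = AutoConfig.from_pretrained(repo_id)

print(cfg.model_type)           # "mixtral" for Mixtral-style MoE merges
print(cfg.num_local_experts)    # total experts per MoE layer (4 for a 4x7B merge)
print(cfg.num_experts_per_tok)  # experts the router activates per token (typically 2)
```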