codelion 
posted an update Apr 15
We just released a new MoE model (meraGPT/mera-mix-4x7B) that is half the size of Mixtral-8x7B while still being competitive with it across different benchmarks. mera-mix-4x7B scores 76.37 on the Open LLM eval.

You can check out mera-mix-4x7B on HF here: meraGPT/mera-mix-4x7B
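
If you want to try it quickly, here is a minimal sketch of loading the model with the Hugging Face transformers library. This is just an illustrative example, not part of the release; it assumes transformers and torch are installed and that you have enough GPU memory for a ~4x7B MoE checkpoint (otherwise add quantization or offloading).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"

# Load tokenizer and model; fp16 + device_map="auto" to reduce memory
# and spread layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain what a mixture-of-experts model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```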