Model mera-mix-4x7B

This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as Mixtral-8x7B while being comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.
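
Because it is intended as a drop-in replacement for Mixtral-8x7B, it should load with the standard Hugging Face transformers causal-LM classes. The snippet below is a minimal sketch; the prompt and generation settings are illustrative assumptions, not tuned recommendations.

```python
# Minimal sketch: load mera-mix-4x7B with Hugging Face transformers.
# Assumes a GPU setup with enough memory for the 24.2B-parameter model in BF16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",           # spread layers across available devices
)

prompt = "Explain mixture-of-experts models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```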

mera-mix-4x7B achieves a score of 75.91 on the OpenLLM Eval, which compares well with 72.7 for Mixtral-8x7B and 74.46 for Mixtral-8x22B.

You can try the model with the Mera Mixture Chat.

In addition to the official Open LLM Leaderboard, the results on the OpenLLM Eval have been independently validated by others as well (76.59).

Our own initial eval is available here (76.37).

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 75.91 |
| AI2 Reasoning Challenge (25-shot) | 72.95 |
| HellaSwag (10-shot)               | 89.17 |
| MMLU (5-shot)                     | 64.44 |
| TruthfulQA (0-shot)               | 77.17 |
| Winogrande (5-shot)               | 85.64 |
| GSM8k (5-shot)                    | 66.11 |
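
For the per-task numbers above, the sketch below shows one way to re-run a single benchmark locally with EleutherAI's lm-evaluation-harness Python API. The harness version, prompt formatting, and metric keys used by the Open LLM Leaderboard may differ, so treat this only as a starting point rather than an exact reproduction recipe.

```python
# Rough sketch: score mera-mix-4x7B on HellaSwag (10-shot) with
# EleutherAI's lm-evaluation-harness. Leaderboard settings may differ.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meraGPT/mera-mix-4x7B,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=10,
    batch_size=8,
)
print(results["results"]["hellaswag"])  # accuracy / normalized accuracy
```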

Model size: 24.2B parameters (BF16, Safetensors).
