New: mera-mix-4x7B GGUF
This is a repo for GGUF quants of mera-mix-4x7B. It currently holds the FP16 and Q8_0 quantizations only.
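As a minimal sketch of local usage, the Q8_0 quant can be loaded with llama-cpp-python. The GGUF filename below is an assumption; check the repo's file list for the exact name.

```python
# Sketch: load a Q8_0 GGUF quant with llama-cpp-python and run a prompt.
# The filename "mera-mix-4x7B.Q8_0.gguf" is assumed, not confirmed by this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="mera-mix-4x7B.Q8_0.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window; adjust as needed
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Explain mixture-of-experts models in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```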
Original model: mera-mix-4x7B
This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as Mixtral-8x7B while being comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.
mera-mix-4x7B achieves 76.37 on the Open LLM eval vs. 72.7 for Mixtral-8x7B (as shown here).
You can try the model with the Mera Mixture Chat.
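Since the model is intended as a drop-in replacement for Mixtral-8x7B, it can be loaded the same way any Mixtral checkpoint is loaded with transformers. This is a hedged sketch: the Hub ID "meraGPT/mera-mix-4x7B" is an assumption, so verify it before running.

```python
# Sketch: load the original (non-quantized) model exactly as one would load
# a Mixtral-8x7B checkpoint. The Hub ID below is assumed, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```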
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 75.91 |
| AI2 Reasoning Challenge (25-Shot) | 72.95 |
| HellaSwag (10-Shot) | 89.17 |
| MMLU (5-Shot) | 64.44 |
| TruthfulQA (0-shot) | 77.17 |
| Winogrande (5-shot) | 85.64 |
| GSM8k (5-shot) | 66.11 |