
This model exists solely to obtain metrics from the HuggingFaceH4/open_llm_leaderboard.

To evaluate the impact of increasing the number of active experts, the num_experts_per_tok setting in config.json was changed from 2 to 3. The goal is to determine whether routing one additional expert per token yields any notable improvement in benchmark metrics.
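The change above is a single field in the model's config.json. A minimal sketch of that edit (the temporary directory and stand-in config are placeholders for a locally downloaded checkpoint):

```python
import json
import os
import tempfile

# Stand-in for the directory holding a local copy of the Mixtral checkpoint.
model_dir = tempfile.mkdtemp()
config_path = os.path.join(model_dir, "config.json")

# Minimal stand-in config containing the MoE routing field Mixtral uses;
# a real config.json has many more keys.
with open(config_path, "w") as f:
    json.dump({"model_type": "mixtral", "num_experts_per_tok": 2}, f)

# Bump the number of experts routed per token from 2 to 3.
with open(config_path) as f:
    config = json.load(f)
config["num_experts_per_tok"] = 3
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

No weights change; only the router's top-k at inference time is affected, so the same checkpoint can be reloaded with the edited config.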

The model weights are copied directly, without modification, from https://huggingface.co/mistralai/Mixtral-8x7B-v0.1.


Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 69.09 |
| AI2 Reasoning Challenge (25-Shot) | 67.41 |
| HellaSwag (10-Shot) | 86.63 |
| MMLU (5-Shot) | 71.98 |
| TruthfulQA (0-shot) | 48.58 |
| Winogrande (5-shot) | 82.40 |
| GSM8k (5-shot) | 57.54 |
Model size: 46.7B params (BF16, Safetensors)