Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 25.35 |
| ARC (25-shot) | 22.27 |
| HellaSwag (10-shot) | 28.99 |
| MMLU (5-shot) | 26.62 |
| TruthfulQA (0-shot) | 41.71 |
| Winogrande (5-shot) | 52.72 |
| GSM8K (5-shot) | 0.23 |
| DROP (3-shot) | 4.93 |
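The Avg. row appears to be the unweighted mean of the seven benchmark scores above; a quick sketch to verify that (the score values are taken directly from the table):

```python
# Reproduce the leaderboard "Avg." as the simple mean of the seven reported scores.
scores = {
    "ARC (25-shot)": 22.27,
    "HellaSwag (10-shot)": 28.99,
    "MMLU (5-shot)": 26.62,
    "TruthfulQA (0-shot)": 41.71,
    "Winogrande (5-shot)": 52.72,
    "GSM8K (5-shot)": 0.23,
    "DROP (3-shot)": 4.93,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # matches the reported Avg. of 25.35
```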
Model size: 315M parameters (Safetensors; tensor types: BF16, U8)