## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 35.3 |
| ARC (25-shot) | 41.04 |
| HellaSwag (10-shot) | 71.19 |
| MMLU (5-shot) | 24.32 |
| TruthfulQA (0-shot) | 36.66 |
| Winogrande (5-shot) | 66.93 |
| GSM8K (5-shot) | 1.59 |
| DROP (3-shot) | 5.39 |
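The Avg. row is the unweighted mean of the seven task scores. A minimal sanity check in Python (scores copied from the table above):

```python
# Per-task scores from the table above.
scores = {
    "ARC (25-shot)": 41.04,
    "HellaSwag (10-shot)": 71.19,
    "MMLU (5-shot)": 24.32,
    "TruthfulQA (0-shot)": 36.66,
    "Winogrande (5-shot)": 66.93,
    "GSM8K (5-shot)": 1.59,
    "DROP (3-shot)": 5.39,
}

# The leaderboard average is the plain arithmetic mean across tasks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 35.3
```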