# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 25.64 |
| ARC (25-shot)         | 22.53 |
| HellaSwag (10-shot)   | 27.37 |
| MMLU (5-shot)         | 25.38 |
| TruthfulQA (0-shot)   | 47.09 |
| Winogrande (5-shot)   | 50.91 |
| GSM8K (5-shot)        | 0.0   |
| DROP (3-shot)         | 6.18  |
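The reported average appears to be the unweighted mean of the seven per-task scores. A quick sanity check in Python (scores copied from the table above):

```python
# Per-task scores from the leaderboard table above.
scores = {
    "ARC (25-shot)": 22.53,
    "HellaSwag (10-shot)": 27.37,
    "MMLU (5-shot)": 25.38,
    "TruthfulQA (0-shot)": 47.09,
    "Winogrande (5-shot)": 50.91,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 6.18,
}

# Unweighted mean across all seven tasks, rounded as in the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 25.64
```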