
https://wandb.ai/open-assistant/supervised-finetuning/runs/bqiatai0

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 35.41 |
| ARC (25-shot)        | 43.0  |
| HellaSwag (10-shot)  | 67.91 |
| MMLU (5-shot)        | 28.33 |
| TruthfulQA (0-shot)  | 36.57 |
| Winogrande (5-shot)  | 64.96 |
| GSM8K (5-shot)       | 1.21  |
| DROP (3-shot)        | 5.91  |
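
A minimal sketch of how the "Avg." row relates to the individual benchmark scores, assuming it is the plain unweighted mean of the seven values in the table (the calculation reproduces the 35.41 shown above):

```python
# Benchmark scores copied from the table above.
scores = {
    "ARC (25-shot)": 43.0,
    "HellaSwag (10-shot)": 67.91,
    "MMLU (5-shot)": 28.33,
    "TruthfulQA (0-shot)": 36.57,
    "Winogrande (5-shot)": 64.96,
    "GSM8K (5-shot)": 1.21,
    "DROP (3-shot)": 5.91,
}

# Unweighted mean across the seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 35.41
```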