
Supervised fine-tuning logs on Weights & Biases: https://wandb.ai/open-assistant/supervised-finetuning/runs/lguuq2c1

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 47.44 |
| ARC (25-shot) | 57.94 |
| HellaSwag (10-shot) | 82.4 |
| MMLU (5-shot) | 48.56 |
| TruthfulQA (0-shot) | 47.27 |
| Winogrande (5-shot) | 76.87 |
| GSM8K (5-shot) | 8.26 |
| DROP (3-shot) | 10.81 |
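
The "Avg." row appears to be the unweighted arithmetic mean of the seven per-task scores above; a minimal sketch verifying that, assuming no other weighting is applied:

```python
# Sketch: reproduce the "Avg." value as the plain mean of the listed scores.
# The score values are copied from the table; the averaging rule is an assumption.
scores = {
    "ARC (25-shot)": 57.94,
    "HellaSwag (10-shot)": 82.4,
    "MMLU (5-shot)": 48.56,
    "TruthfulQA (0-shot)": 47.27,
    "Winogrande (5-shot)": 76.87,
    "GSM8K (5-shot)": 8.26,
    "DROP (3-shot)": 10.81,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 47.44
```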