# vicuna-7B-physics

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 43.05 |
| ARC (25-shot)       | 49.49 |
| HellaSwag (10-shot) | 75.88 |
| MMLU (5-shot)       | 46.58 |
| TruthfulQA (0-shot) | 49.31 |
| Winogrande (5-shot) | 69.38 |
| GSM8K (5-shot)      |  4.25 |
| DROP (3-shot)       |  6.49 |
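
For reference, below is a minimal sketch of loading this model for inference with the Hugging Face Transformers library. The repo id is a placeholder, since this README does not state the model's full Hub path; replace it with the actual one.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the model's actual Hugging Face Hub path.
model_id = "vicuna-7B-physics"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion for a physics-flavored prompt.
prompt = "Explain Newton's second law in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```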