# Llama-2-3b-hf

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|--------|-------|
| Avg. | 25.69 |
| ARC (25-shot) | 26.96 |
| HellaSwag (10-shot) | 26.52 |
| MMLU (5-shot) | 23.33 |
| TruthfulQA (0-shot) | 50.71 |
| Winogrande (5-shot) | 49.64 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 2.63 |
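
These scores come from EleutherAI's lm-evaluation-harness, which the Open LLM Leaderboard runs under the hood. Below is a minimal sketch of reproducing the 25-shot ARC number locally; the `lm_eval.simple_evaluate` API and the `arc_challenge` task name are from harness v0.4, and the `<org>` prefix in the hub id is a placeholder, since this README does not give the full repository path.

```python
# Sketch: reproduce the 25-shot ARC score with lm-evaluation-harness
# (pip install lm-eval). The hub id is a placeholder -- substitute the
# actual organization that hosts this checkpoint.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<org>/Llama-2-3b-hf",
    tasks=["arc_challenge"],   # ARC (Challenge), as scored on the leaderboard
    num_fewshot=25,            # matches the 25-shot setting in the table
)
print(results["results"]["arc_challenge"])
```

The other rows follow the same pattern with their respective task names and few-shot counts (e.g. `hellaswag` at 10-shot, `gsm8k` at 5-shot); expect small run-to-run differences from the leaderboard's exact harness version and prompt formatting.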