# Llama-2-13b-sf
---
license: cc-by-nc-4.0
---

Hi

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 25.9  |
| ARC (25-shot)        | 29.52 |
| HellaSwag (10-shot)  | 26.49 |
| MMLU (5-shot)        | 25.98 |
| TruthfulQA (0-shot)  | 48.97 |
| Winogrande (5-shot)  | 50.36 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 0.0   |
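As a sanity check, the leaderboard's "Avg." appears to be the unweighted mean of the seven benchmark scores above. A minimal sketch (the dictionary below just restates the table; the averaging rule is an assumption about how the leaderboard aggregates):

```python
# Assumed aggregation: "Avg." = unweighted mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 29.52,
    "HellaSwag (10-shot)": 26.49,
    "MMLU (5-shot)": 25.98,
    "TruthfulQA (0-shot)": 48.97,
    "Winogrande (5-shot)": 50.36,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 0.0,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 25.9, matching the "Avg." row
```

Under that assumption the reported 25.9 is consistent with the per-benchmark values.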