# wizard-vicuna-13b

https://github.com/melodysdreamj/WizardVicunaLM

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|-----------------------|-------|
| Avg. | 46.64 |
| ARC (25-shot) | 54.69 |
| HellaSwag (10-shot) | 79.18 |
| MMLU (5-shot) | 48.88 |
| TruthfulQA (0-shot) | 49.62 |
| Winogrande (5-shot) | 74.82 |
| GSM8K (5-shot) | 9.17 |
| DROP (3-shot) | 10.13 |
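
For reference, the Avg. figure appears to be the unweighted mean of the seven benchmark scores listed above; a minimal sketch of that arithmetic:

```python
# Unweighted mean of the seven Open LLM Leaderboard benchmark scores above.
scores = {
    "ARC (25-shot)": 54.69,
    "HellaSwag (10-shot)": 79.18,
    "MMLU (5-shot)": 48.88,
    "TruthfulQA (0-shot)": 49.62,
    "Winogrande (5-shot)": 74.82,
    "GSM8K (5-shot)": 9.17,
    "DROP (3-shot)": 10.13,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # Avg. = 46.64
```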