## Evaluation results (Open LLM Leaderboard)

| Benchmark | Shots | Split | Metric | Score |
|---|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 48.89 |
| HellaSwag | 10-shot | validation | normalized accuracy | 74.48 |
| MMLU | 5-shot | test | accuracy | 55.72 |
| TruthfulQA | 0-shot | validation | mc2 | 37.09 |
| Winogrande | 5-shot | validation | accuracy | 72.93 |
| GSM8k | 5-shot | test | accuracy | 12.51 |