leaderboard-pr-bot committed on
Commit 19721b5
1 Parent(s): adc5e7b

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1):
  1. README.md (+14 −1)
README.md CHANGED

```diff
@@ -106,4 +106,17 @@ Also thanks to Meta for LLaMAv2 and deciding to allow the research community at
 
 Each model and LoRA was hand picked and considered for what it could contribute to this ensemble.
 Thanks to each and every one of you for your incredible work developing some of the best things
-to come out of this community.
+to come out of this community.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CalderaAI__13B-Thorns-l2)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 53.5  |
+| ARC (25-shot)       | 62.88 |
+| HellaSwag (10-shot) | 83.57 |
+| MMLU (5-shot)       | 56.95 |
+| TruthfulQA (0-shot) | 49.52 |
+| Winogrande (5-shot) | 74.51 |
+| GSM8K (5-shot)      | 0.91  |
+| DROP (3-shot)       | 46.13 |
```