Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -17,4 +17,17 @@ Current benchmarks aren't great for instruct models, so I've temporarily omitted
 
  Uses ChatML prompt formatting.
 
- I reserve no rights to the model. To the extent possible under law, I release it as public domain. However, the datasets used have various licenses that may impact how the model may be used in your jurisdiction.
+ I reserve no rights to the model. To the extent possible under law, I release it as public domain. However, the datasets used have various licenses that may impact how the model may be used in your jurisdiction.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_euclaise__Ferret-7B)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 47.81 |
+ | ARC (25-shot)        | 62.2  |
+ | HellaSwag (10-shot)  | 81.75 |
+ | MMLU (5-shot)        | 60.82 |
+ | TruthfulQA (0-shot)  | 40.94 |
+ | Winogrande (5-shot)  | 77.35 |
+ | GSM8K (5-shot)       | 5.76  |
+ | DROP (3-shot)        | 5.87  |
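For context, the README's "Uses ChatML prompt formatting" line refers to the convention of wrapping each conversation turn in `<|im_start|>role` / `<|im_end|>` markers. A minimal sketch of that format (the helper name `format_chatml` is hypothetical, not part of the model card):

```python
def format_chatml(messages):
    """Render a list of {'role': ..., 'content': ...} dicts as a ChatML prompt.

    Each turn becomes: <|im_start|>role\ncontent<|im_end|>
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = format_chatml([{"role": "user", "content": "Hello!"}])
    print(prompt)
```

In practice, a tokenizer's built-in chat template (when one is defined for the model) should be preferred over hand-rolling the string.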