Adding Evaluation Results

#2
Files changed (1): README.md (+14 −1)
@@ -41,4 +41,17 @@ ASSISTANT:
  - Highest priority right now is V3.1 with more optimized training and iterative dataset improvements based on testing.

  ### Note:
- Through testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative then intended. This model should do much better due to stricter filetering.
+ Through testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative than intended. This model should do much better due to stricter filtering.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-16k)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 44.82 |
+ | ARC (25-shot)       | 58.19 |
+ | HellaSwag (10-shot) | 80.12 |
+ | MMLU (5-shot)       | 50.48 |
+ | TruthfulQA (0-shot) | 45.18 |
+ | Winogrande (5-shot) | 70.72 |
+ | GSM8K (5-shot)      |  1.97 |
+ | DROP (3-shot)       |  7.06 |
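For reference, the reported Avg. line is consistent with the unweighted mean of the seven benchmark scores in the table. A minimal sketch of that check; the averaging rule (simple unweighted mean) is an assumption here, not something stated in this PR:

```python
# Reproduce the leaderboard "Avg." from the per-benchmark scores above.
# Assumption: Avg. is the unweighted mean of the seven tasks.
scores = {
    "ARC (25-shot)": 58.19,
    "HellaSwag (10-shot)": 80.12,
    "MMLU (5-shot)": 50.48,
    "TruthfulQA (0-shot)": 45.18,
    "Winogrande (5-shot)": 70.72,
    "GSM8K (5-shot)": 1.97,
    "DROP (3-shot)": 7.06,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 44.82, matching the Avg. row
```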