Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -83,3 +83,17 @@ The following hyperparameters were used during training in the HPC investigation
  - Pytorch 2.0.1+cu117
  - Datasets 2.13.1
  - Tokenizers 0.13.3
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BramVanroy__llama2-13b-ft-mc4_nl_cleaned_tiny)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 46.81 |
+ | ARC (25-shot) | 59.3 |
+ | HellaSwag (10-shot) | 82.04 |
+ | MMLU (5-shot) | 54.67 |
+ | TruthfulQA (0-shot) | 38.03 |
+ | Winogrande (5-shot) | 77.27 |
+ | GSM8K (5-shot) | 10.31 |
+ | DROP (3-shot) | 6.08 |
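
For reference, the Avg. row is consistent with the plain arithmetic mean of the seven benchmark scores listed above. A minimal sketch checking that (plain Python, no external dependencies; the numbers are copied from the table):

```python
# Benchmark scores as reported in the table above.
scores = {
    "ARC (25-shot)": 59.3,
    "HellaSwag (10-shot)": 82.04,
    "MMLU (5-shot)": 54.67,
    "TruthfulQA (0-shot)": 38.03,
    "Winogrande (5-shot)": 77.27,
    "GSM8K (5-shot)": 10.31,
    "DROP (3-shot)": 6.08,
}

# Unweighted mean over all benchmarks, rounded to two decimals as in the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 46.81
```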