Adding Evaluation Results

#29
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -136,4 +136,17 @@ zero config:
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
  }
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenAssistant__oasst-sft-4-pythia-12b-epoch-3.5)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 36.26 |
+ | ARC (25-shot) | 45.73 |
+ | HellaSwag (10-shot) | 68.59 |
+ | MMLU (5-shot) | 26.82 |
+ | TruthfulQA (0-shot) | 37.81 |
+ | Winogrande (5-shot) | 65.9 |
+ | GSM8K (5-shot) | 3.03 |
+ | DROP (3-shot) | 5.91 |
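
The "Detailed results" link added above points to a dataset repository on the Hub. As a minimal sketch of how those per-task result files could be pulled locally, the snippet below uses `huggingface_hub.snapshot_download`; the repo id is copied from the link, and everything else (the variable name, the print statement) is illustrative only.

```python
# Sketch: download the per-task result files behind the "Detailed results" link.
# Assumes the `huggingface_hub` package is installed.
from huggingface_hub import snapshot_download

# Returns the local path of the downloaded snapshot.
local_dir = snapshot_download(
    repo_id="open-llm-leaderboard/details_OpenAssistant__oasst-sft-4-pythia-12b-epoch-3.5",
    repo_type="dataset",  # the details live in a dataset repo, not a model repo
)
print(f"Details downloaded to: {local_dir}")
```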
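The "Avg." row in the table above is the arithmetic mean of the seven benchmark scores. A quick sanity check, with the values copied verbatim from the table:

```python
# Recompute the reported average from the individual benchmark scores.
scores = {
    "ARC (25-shot)": 45.73,
    "HellaSwag (10-shot)": 68.59,
    "MMLU (5-shot)": 26.82,
    "TruthfulQA (0-shot)": 37.81,
    "Winogrande (5-shot)": 65.9,
    "GSM8K (5-shot)": 3.03,
    "DROP (3-shot)": 5.91,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 36.26, matching the "Avg." row
```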