Adding Evaluation Results #1
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -4,4 +4,17 @@ license: apache-2.0
 
 # pythia-12b-sft-v8-rlhf-2k-steps
 
- - sampling report: [2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-rl%2F2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json)
+ - sampling report: [2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-rl%2F2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json)
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenAssistant__pythia-12b-sft-v8-rlhf-2k-steps)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 36.36 |
+ | ARC (25-shot) | 43.43 |
+ | HellaSwag (10-shot) | 70.08 |
+ | MMLU (5-shot) | 26.12 |
+ | TruthfulQA (0-shot) | 36.06 |
+ | Winogrande (5-shot) | 64.64 |
+ | GSM8K (5-shot) | 9.55 |
+ | DROP (3-shot) | 4.63 |
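
For reference, the Avg. row in the added table is consistent with the unweighted mean of the seven benchmark scores. A minimal Python sketch, assuming the leaderboard average is simply that mean over the listed tasks:

```python
# Sketch: recompute the "Avg." value from the per-task scores in the table
# above, assuming it is the unweighted mean of these seven benchmarks.
scores = {
    "ARC (25-shot)": 43.43,
    "HellaSwag (10-shot)": 70.08,
    "MMLU (5-shot)": 26.12,
    "TruthfulQA (0-shot)": 36.06,
    "Winogrande (5-shot)": 64.64,
    "GSM8K (5-shot)": 9.55,
    "DROP (3-shot)": 4.63,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 36.36, matching the table
```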