CaterinaLac leaderboard-pr-bot committed on
Commit
b150e59
1 Parent(s): a97ff56

Adding Evaluation Results (#1)


- Adding Evaluation Results (5551f0c282f175abcf141d8e4c11c8c934621ce2)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -8,4 +8,17 @@ language:
 
  Pythia-70m-deduped finetuned on a cleaned version of ShareGPT data.
  The cleaned dataset is obtained by removing duplicates and paraphrases from the original corpus, and keeping only the English instance.
- The final training size is of 3507 instances.
+ The final training size is of 3507 instances.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HWERI__pythia-70m-deduped-cleansharegpt-en)
+
+ | Metric                | Value                     |
+ |-----------------------|---------------------------|
+ | Avg.                  | 25.06                     |
+ | ARC (25-shot)         | 21.16                     |
+ | HellaSwag (10-shot)   | 27.16                     |
+ | MMLU (5-shot)         | 25.24                     |
+ | TruthfulQA (0-shot)   | 48.57                     |
+ | Winogrande (5-shot)   | 50.12                     |
+ | GSM8K (5-shot)        | 0.0                       |
+ | DROP (3-shot)         | 3.15                      |
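
For readers who want to try the checkpoint described in this card, below is a minimal usage sketch with the transformers library. The repository id is an assumption inferred from the details-dataset name linked above; it is not stated explicitly in this commit.

# Minimal sketch; the repo id below is an assumption, not confirmed by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HWERI/pythia-70m-deduped-cleansharegpt-en"  # assumed, inferred from the details dataset name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation to sanity-check that the checkpoint loads and runs.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))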