lgaalves committed
Commit 6ddbebe
1 Parent(s): f7db5b1

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -18,11 +18,11 @@ pipeline_tag: text-generation

  | Metric                | gpt2-xl_lima | gpt2-xl (base) |
  |-----------------------|--------------|----------------|
- | Avg.                  | -            | 36.66          |
- | ARC (25-shot)         | -            | 30.29          |
- | HellaSwag (10-shot)   | -            | 51.38          |
- | MMLU (5-shot)         | -            | 26.43          |
- | TruthfulQA (0-shot)   | -            | 38.54          |
+ | Avg.                  | 36.65        | **36.66**      |
+ | ARC (25-shot)         | **31.14**    | 30.29          |
+ | HellaSwag (10-shot)   | 51.28        | **51.38**      |
+ | MMLU (5-shot)         | 25.43        | **26.43**      |
+ | TruthfulQA (0-shot)   | **38.74**    | 38.54          |

  We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
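Purely as an illustration (the README's own reproduction instructions follow further down the file), here is a minimal sketch of what such a run could look like with the harness's Python API. It assumes the v0.3-era harness used by the leaderboard, the `hf-causal` backend, and the model ID `lgaalves/gpt2-xl_lima` inferred from this repository; none of these are confirmed by the commit itself.

```python
# Hedged sketch: leaderboard-style evaluation with EleutherAI's
# lm-evaluation-harness (v0.3-era API). The model ID is assumed
# from this repository's name.
from lm_eval import evaluator

# Task / few-shot pairs matching the table above. MMLU (5-shot) maps to
# the many "hendrycksTest-*" subtasks in this harness version and is
# usually run via the CLI with a wildcard, so it is omitted here.
LEADERBOARD_TASKS = [
    ("arc_challenge", 25),  # ARC (25-shot); leaderboard reports acc_norm
    ("hellaswag", 10),      # HellaSwag (10-shot); leaderboard reports acc_norm
    ("truthfulqa_mc", 0),   # TruthfulQA (0-shot); leaderboard reports mc2
]

for task, shots in LEADERBOARD_TASKS:
    results = evaluator.simple_evaluate(
        model="hf-causal",  # HuggingFace causal-LM backend
        model_args="pretrained=lgaalves/gpt2-xl_lima",
        tasks=[task],
        num_fewshot=shots,
        batch_size=8,
    )
    # results["results"] is keyed by task name, e.g. {"arc_challenge": {...}}
    print(task, results["results"][task])
```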