lgaalves committed
Commit 472a0a5
1 Parent(s): 2810604

Update README.md

Files changed (1): README.md (+8 -7)
README.md CHANGED
@@ -16,13 +16,14 @@ pipeline_tag: text-generation
 
 ### Benchmark Metrics
 
-| Metric              | Value |
-|---------------------|-------|
-| Avg.                | 29.85 |
-| ARC (25-shot)       | 21.76 |
-| HellaSwag (10-shot) | 30.77 |
-| MMLU (5-shot)       | 24.66 |
-| TruthfulQA (0-shot) | 42.22 |
+| Metric              | GPT-2-dolly | GPT-2 (base) |
+|---------------------|-------------|--------------|
+| Avg.                | 29.85       | **29.99**    |
+| ARC (25-shot)       | 21.76       | **21.84**    |
+| HellaSwag (10-shot) | 30.77       | **31.6**     |
+| MMLU (5-shot)       | 24.66       | **25.86**    |
+| TruthfulQA (0-shot) | **42.22**   | 40.67        |
+
 
 
 We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
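
For readers who want a concrete starting point before the README's detailed instructions, one score from the table can be reproduced with the harness's Python entry point. This is a minimal sketch, not the card's own recipe: it assumes harness v0.4.x and its `lm_eval.simple_evaluate` API (the leaderboard pinned an older harness revision with a slightly different interface), and it assumes `lgaalves/gpt2-dolly` as the model repo id, inferred from the "GPT-2-dolly" column.

```python
# Minimal sketch: 25-shot ARC with EleutherAI's lm-evaluation-harness.
# Assumes harness v0.4.x (pip install lm-eval); the pinned leaderboard
# revision exposes a slightly different CLI/API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # Hugging Face causal-LM backend
    model_args="pretrained=lgaalves/gpt2-dolly",  # assumed repo id for GPT-2-dolly
    tasks=["arc_challenge"],                      # ARC task used by the leaderboard
    num_fewshot=25,                               # matches the 25-shot row above
)
print(results["results"]["arc_challenge"])        # accuracy metrics for the run
```

Swapping `pretrained=gpt2` into `model_args` would score the base-model column the same way.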