Update README.md
README.md CHANGED
@@ -19,11 +19,11 @@ language:
 
 | Metric | llama-2-7b-hf_open-platypus | meta-llama/Llama-2-7b-hf (base) |
 |-----------------------|-------|-------|
-| Avg. |
-| ARC (25-shot) |
-| HellaSwag (10-shot) |
-| MMLU (5-shot) |
-| TruthfulQA (0-shot) |
+| Avg. | **54.35** | 54.32 |
+| ARC (25-shot) | 51.45 | **53.07** |
+| HellaSwag (10-shot) | **78.63** | 78.59 |
+| MMLU (5-shot) | 43.60 | **46.87** |
+| TruthfulQA (0-shot) | **43.71** | 38.76 |
 
 
 We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
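The detailed reproduction instructions referenced above are not part of this hunk, so here is a minimal sketch of what one leaderboard-style run per benchmark might look like, assuming the leaderboard-era harness Python API (`lm_eval.evaluator.simple_evaluate`, which takes a single `num_fewshot` value per call). The model repo's namespace and the exact harness commit pinned by the leaderboard are assumptions, not taken from this commit:

```python
# Sketch: one leaderboard-style evaluation run per benchmark in the table.
# Assumes the leaderboard-era EleutherAI lm-evaluation-harness API.
import json

from lm_eval import evaluator

# Hypothetical repo id -- replace "your-namespace" with the actual owner.
MODEL = "your-namespace/llama-2-7b-hf_open-platypus"

# (tasks, num_fewshot) pairs matching the table above. MMLU is the mean
# over the 57 hendrycksTest-* subtasks; two are shown as examples --
# expand to all 57 to reproduce the real score.
BENCHMARKS = [
    (["arc_challenge"], 25),                    # ARC (25-shot)
    (["hellaswag"], 10),                        # HellaSwag (10-shot)
    (["hendrycksTest-abstract_algebra",
      "hendrycksTest-anatomy"], 5),             # MMLU (5-shot), truncated
    (["truthfulqa_mc"], 0),                     # TruthfulQA (0-shot)
]

for tasks, shots in BENCHMARKS:
    results = evaluator.simple_evaluate(
        model="hf-causal",
        model_args=f"pretrained={MODEL}",
        tasks=tasks,
        num_fewshot=shots,
        batch_size=1,
    )
    # Per-task metrics (e.g. acc, acc_norm) live under results["results"].
    print(json.dumps(results["results"], indent=2))
```

As a sanity check on the table, `Avg.` is the plain mean of the four benchmark scores: (51.45 + 78.63 + 43.60 + 43.71) / 4 ≈ 54.35 for the fine-tuned model and (53.07 + 78.59 + 46.87 + 38.76) / 4 ≈ 54.32 for the base model.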