This model uses a context window of 8k. It was trained with the ChatML template.
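Because the model was trained with the ChatML template, prompts should be wrapped in its `<|im_start|>`/`<|im_end|>` turn markers. Below is a minimal sketch of that formatting with a hypothetical `to_chatml` helper; in practice, `tokenizer.apply_chat_template` from `transformers` produces the same layout for you:

```python
def to_chatml(messages):
    """Format a list of {role, content} dicts as a ChatML prompt.

    Each turn is wrapped in <|im_start|>role ... <|im_end|> markers,
    and a trailing assistant header cues the model to respond.
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
```

The generated `prompt` string can be tokenized directly and fed to the model for generation.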

### Open LLM Leaderboard

| Model | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8K (5-shot) |
| --- | --: | --: | --: | --: | --: | --: | --: |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B-Instruct) | 66.87 | 41.22 | 69.86 | 51.65 | 42.64 | 42.64 | 42.64 |
| [**dfurman/Llama-3-8B-Orpo-v0.1**](https://huggingface.co/dfurman/Llama-3-8B-Orpo-v0.1) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__Llama-3-8B-Orpo-v0.1) | **64.67** | **34.17** | **70.59** | **52.39** | **37.36** | **42.64** | **42.64** |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B) | 62.55 | 31.1 | 69.95 | 43.91 | 36.7 | 42.64 | 42.64 |

## 📈 Training curves