dfurman committed on
Commit 3d99bce • 1 Parent(s): 9203a3b

Update README.md

Files changed (1):
1. README.md (+4 -4)
README.md CHANGED
@@ -30,11 +30,11 @@ This model uses a context window of 8k. It was trained with the ChatML template.
 
 ### Open LLM Leaderboard
 
-| Model | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8K (5-shot) |
+| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: | --------: | --------: |
-| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B-Instruct) | 66.87 | 41.22 | 69.86 | 51.65 | 42.64 | 42.64 | 42.64 |
-| [**dfurman/Llama-3-8B-Orpo-v0.1**](https://huggingface.co/dfurman/Llama-3-8B-Orpo-v0.1) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__Llama-3-8B-Orpo-v0.1) | **64.67** | **34.17** | **70.59** | **52.39** | **37.36** | **42.64** | **42.64** |
-| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B) | 62.55 | 31.1 | 69.95 | 43.91 | 36.7 | 42.64 | 42.64 |
+| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B-Instruct) | 66.87 | 60.75 | 78.55 | 67.07 | 51.65 | 74.51 | 68.69 |
+| [**dfurman/Llama-3-8B-Orpo-v0.1**](https://huggingface.co/dfurman/Llama-3-8B-Orpo-v0.1) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__Llama-3-8B-Orpo-v0.1) | **64.67** | **60.67** | **82.56** | **66.59** | **50.47** | **79.01** | **48.75** |
+| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B) | 62.35 | 59.22 | 82.02 | 66.49 | 43.95 | 77.11 | 45.34 |
 
 
 ## 📈 Training curves
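
The hunk context above notes that the model uses an 8k context window and was trained with the ChatML template. As a minimal sketch (not part of this commit), the snippet below shows one way to prompt the model from the updated table through the chat-template API in Hugging Face transformers; it assumes the tokenizer published on the Hub bundles that ChatML template, and the example prompt is purely illustrative.

```python
# Minimal usage sketch, assuming the Hub tokenizer ships the ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dfurman/Llama-3-8B-Orpo-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; any single-turn user message works the same way.
messages = [{"role": "user", "content": "Give me three facts about llamas."}]

# apply_chat_template renders the ChatML turns and appends the assistant prefix.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```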