Files changed (1)

README.md (+122 -6)
@@ -1,15 +1,118 @@
 ---
-license: apache-2.0
-base_model:
-- mistralai/Mistral-7B-v0.1
-datasets:
-- nvidia/OpenMathInstruct-1
 language:
 - en
+license: apache-2.0
 tags:
 - nvidia
 - code
 - math
+datasets:
+- nvidia/OpenMathInstruct-1
+base_model:
+- mistralai/Mistral-7B-v0.1
+model-index:
+- name: OpenMath-Mistral-7B-v0.1-hf
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 59.39
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 81.78
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 59.34
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 46.13
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 77.27
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.08
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nvidia/OpenMath-Mistral-7B-v0.1-hf
+      name: Open LLM Leaderboard
 ---
 
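The `model-index` block added in the hunk above is the machine-readable mirror of the results table appended at the end of this diff; the Hub and the Open LLM Leaderboard read it from the card's YAML frontmatter. Below is a minimal sketch of inspecting that metadata with `huggingface_hub`, assuming this PR has been merged so the block is on the repo's main revision (`ModelCard` and its `eval_results` accessor come from `huggingface_hub`, not from this PR):

```python
from huggingface_hub import ModelCard

# Fetch the card for the repo this PR targets; eval_results is parsed
# from the model-index YAML block shown in the hunk above.
card = ModelCard.load("nvidia/OpenMath-Mistral-7B-v0.1-hf")

for r in card.data.eval_results or []:
    print(f"{r.dataset_name}: {r.metric_type} = {r.metric_value}")
```

If the block round-trips correctly, this prints one line per benchmark, matching the six `source.url` entries that link back to the leaderboard.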
@@ -118,4 +221,17 @@ If you find our work useful, please consider citing us!
   year = {2024},
   journal = {arXiv preprint arXiv: Arxiv-2402.10176}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nvidia__OpenMath-Mistral-7B-v0.1-hf).
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |54.00|
+|AI2 Reasoning Challenge (25-Shot)|59.39|
+|HellaSwag (10-Shot)              |81.78|
+|MMLU (5-Shot)                    |59.34|
+|TruthfulQA (0-shot)              |46.13|
+|Winogrande (5-shot)              |77.27|
+|GSM8k (5-shot)                   | 0.08|
+
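As a quick sanity check on the appended table, the `Avg.` row is the unweighted mean of the six benchmark scores; a short sketch with the values copied from the table above:

```python
# Benchmark scores from the results table above.
scores = [59.39, 81.78, 59.34, 46.13, 77.27, 0.08]

avg = sum(scores) / len(scores)
print(f"{avg:.2f}")  # 54.00, matching the Avg. row
```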