Commit e1e70c4
Parent: 275cc70

Adding Evaluation Results (#5)


- Adding Evaluation Results (a61126f5d4ca0f9b6a75ec704468b976509ebb11)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +121 -5
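Because the commit hash is recorded above, the changed file can be fetched exactly as of this revision. A minimal sketch with `huggingface_hub` (the repo id `Locutusque/Hercules-4.0-Mistral-v0.2-7B` is inferred from the leaderboard URLs in the diff, and the short hash is assumed to resolve on the Hub; substitute the full 40-character hash if it does not):

```python
# Sketch: download README.md pinned to this commit from the Hugging Face Hub.
# Assumptions: repo id taken from the diff's leaderboard URLs; the short
# revision e1e70c4 resolves (otherwise pass the full commit hash).
from huggingface_hub import hf_hub_download

readme_path = hf_hub_download(
    repo_id="Locutusque/Hercules-4.0-Mistral-v0.2-7B",
    filename="README.md",
    revision="e1e70c4",  # the commit shown above
)
with open(readme_path, encoding="utf-8") as f:
    print(f.read()[:400])  # front matter now starts with language/license keys
```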
README.md CHANGED
@@ -1,4 +1,7 @@
 ---
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - medical
@@ -6,12 +9,112 @@ tags:
 - biology
 - chemistry
 - not-for-all-audiences
-license: apache-2.0
+base_model: alpindale/Mistral-7B-v0.2-hf
 datasets:
 - Locutusque/hercules-v4.0
-language:
-- en
-base_model: alpindale/Mistral-7B-v0.2-hf
+model-index:
+- name: Hercules-4.0-Mistral-v0.2-7B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 58.96
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 82.6
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 62.66
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 40.99
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 78.53
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 45.41
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-4.0-Mistral-v0.2-7B
+      name: Open LLM Leaderboard
 ---
 # Model Card: Hercules-4.0-Mistral-v0.2-7B
 
@@ -94,4 +197,17 @@ This model was fine-tuned using my TPU-Alignment repository. https://github.com/
 | - mmlu_flan_cot_fewshot_social_sciences|N/A |get-answer| 0|exact_match|0.6528|± |0.0248|
 | - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4925|± |0.0266|
 |ai2_arc |N/A |none | 0|acc |0.6936|± |0.0073|
-| | |none | 0|acc_norm |0.6984|± |0.0074|
+| | |none | 0|acc_norm |0.6984|± |0.0074|
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hercules-4.0-Mistral-v0.2-7B)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |61.53|
+|AI2 Reasoning Challenge (25-Shot)|58.96|
+|HellaSwag (10-Shot) |82.60|
+|MMLU (5-Shot) |62.66|
+|TruthfulQA (0-shot) |40.99|
+|Winogrande (5-shot) |78.53|
+|GSM8k (5-shot) |45.41|
+
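For reference, the Avg. row is the arithmetic mean of the six benchmark scores: (58.96 + 82.60 + 62.66 + 40.99 + 78.53 + 45.41) / 6 = 61.53. Since the model-index block added by this commit is machine-readable, the same numbers can be re-derived from the card metadata. A minimal sketch using `huggingface_hub`'s model-card parser, assuming the same repo id as above:

```python
# Sketch: parse the model-index added by this commit and recompute the average.
# Assumption: repo id Locutusque/Hercules-4.0-Mistral-v0.2-7B (from the diff URLs).
from huggingface_hub import ModelCard

card = ModelCard.load("Locutusque/Hercules-4.0-Mistral-v0.2-7B")
results = card.data.eval_results or []  # EvalResult objects parsed from model-index

for r in results:
    print(f"{r.dataset_name:<35} {r.metric_type:<10} {r.metric_value}")

scores = [r.metric_value for r in results]
print(f"Avg. {sum(scores) / len(scores):.2f}")  # ~61.53, matching the Avg. row
```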