Adding Evaluation Results

#1
Files changed (1)
  1. README.md +121 -5
README.md CHANGED
@@ -1,4 +1,8 @@
  ---
+ language:
+ - en
+ license: apache-2.0
+ library_name: transformers
  tags:
  - rmdhirr/Foxglove_7B
  - ResplendentAI/Paradigm_Shift_7B
@@ -9,10 +13,109 @@ tags:
  base_model:
  - rmdhirr/Foxglove_7B
  - ResplendentAI/Paradigm_Shift_7B
- library_name: transformers
- license: apache-2.0
- language:
- - en
+ model-index:
+ - name: Anthesis_7B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 69.03
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 86.2
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 62.06
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 68.65
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 78.93
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 42.99
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
+       name: Open LLM Leaderboard
  ---

  # 🌹 Anthesis_7B
@@ -35,4 +138,17 @@ base_model: ResplendentAI/Paradigm_Shift_7B
  parameters:
    int8_mask: true
  dtype: bfloat16
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rmdhirr__Anthesis_7B)
+
+ | Metric |Value|
+ |---------------------------------|----:|
+ |Avg. |67.97|
+ |AI2 Reasoning Challenge (25-Shot)|69.03|
+ |HellaSwag (10-Shot) |86.20|
+ |MMLU (5-Shot) |62.06|
+ |TruthfulQA (0-shot) |68.65|
+ |Winogrande (5-shot) |78.93|
+ |GSM8k (5-shot) |42.99|
+
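Once merged, the `model-index` block above becomes machine-readable card metadata rather than display-only text. A minimal sketch of reading it back, assuming this PR has landed on the main revision and using the `huggingface_hub` client, which parses `model-index` entries into `EvalResult` objects:

```python
# Minimal sketch: read the evaluation results added by this PR back from the Hub.
# Assumes the PR is merged and `huggingface_hub` is installed.
from huggingface_hub import ModelCard

card = ModelCard.load("rmdhirr/Anthesis_7B")

# `card.data.eval_results` yields one EvalResult per metric entry in the
# model-index YAML: dataset name, metric type, and the reported value.
for res in card.data.eval_results:
    print(f"{res.dataset_name:<35} {res.metric_type:<10} {res.metric_value}")
```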
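The `Avg.` row in the new table can be sanity-checked against the six per-task scores, with the caveat that the leaderboard averages unrounded scores, so its displayed average can differ in the last digit from an average of the rounded values shown in the table:

```python
# Recompute the table average from the rounded per-task scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 69.03,
    "HellaSwag (10-Shot)": 86.20,
    "MMLU (5-Shot)": 62.06,
    "TruthfulQA (0-shot)": 68.65,
    "Winogrande (5-shot)": 78.93,
    "GSM8k (5-shot)": 42.99,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 67.98, vs. the reported 67.97 computed from unrounded scores
```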
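Finally, the new `library_name: transformers` metadata implies the model loads through the standard text-generation API. An illustrative sketch only; the prompt and generation settings are placeholders, and `bfloat16` mirrors the merge config's `dtype`:

```python
# Illustrative only: load the merge with the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmdhirr/Anthesis_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain what a model merge is in one sentence."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```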