agentlans committed
Commit c68b90e · verified · 1 Parent(s): f7d3161

Adding Evaluation Results (#1)

- Adding Evaluation Results (dc50a9159f1459120bfedb684e7b3f0c6e6b6e0a)

Files changed (1)
  1. README.md +114 -1
README.md CHANGED
@@ -11,6 +11,105 @@ language:
 - en
 pipeline_tag: text-generation
 license: gemma
+model-index:
+- name: Gemma2-9B-AdvancedFuse
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 15.43
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 40.52
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 7.55
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.3
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.99
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 33.34
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=agentlans%2FGemma2-9B-AdvancedFuse
+      name: Open LLM Leaderboard
 ---
 # Gemma2-9B-AdvancedFuse
 
@@ -35,4 +134,18 @@ As with most large language models:
 1. Use clear and specific instructions for optimal performance.
 2. Verify generated outputs for factual accuracy when critical information is involved.
 3. Avoid providing inputs that could lead to harmful or unethical responses.
-4. Consider using human review, especially in high-stakes applications.
+4. Consider using human review, especially in high-stakes applications.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/agentlans__Gemma2-9B-AdvancedFuse-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=agentlans%2FGemma2-9B-AdvancedFuse&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric            |Value (%)|
+|-------------------|--------:|
+|**Average**        |    20.02|
+|IFEval (0-Shot)    |    15.43|
+|BBH (3-Shot)       |    40.52|
+|MATH Lvl 5 (4-Shot)|     7.55|
+|GPQA (0-shot)      |    11.30|
+|MuSR (0-shot)      |    11.99|
+|MMLU-PRO (5-shot)  |    33.34|
+
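Note for downstream use: the `model-index` block added in this commit follows the Hub's standard evaluation-results schema, so the scores can be read back programmatically rather than scraped from the README. A minimal sketch, assuming the `huggingface_hub` Python package is installed and this commit is merged:

```python
# Minimal sketch: read the model-index scores back from the Hub card.
# Assumes `pip install huggingface_hub`; key layout follows the YAML above.
from huggingface_hub import ModelCard

card = ModelCard.load("agentlans/Gemma2-9B-AdvancedFuse")
meta = card.data.to_dict()  # parsed YAML front matter, including model-index

for result in meta["model-index"][0]["results"]:
    dataset = result["dataset"]["name"]
    for metric in result["metrics"]:
        print(f"{dataset}: {metric['name']} = {metric['value']}")
```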
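The **Average** row in the new table is the arithmetic mean of the six benchmark scores, which a quick check reproduces (values copied from the table above):

```python
# Sanity check: Average should equal the mean of the six benchmark scores.
scores = {
    "IFEval (0-Shot)": 15.43,
    "BBH (3-Shot)": 40.52,
    "MATH Lvl 5 (4-Shot)": 7.55,
    "GPQA (0-shot)": 11.30,
    "MuSR (0-shot)": 11.99,
    "MMLU-PRO (5-shot)": 33.34,
}
print(round(sum(scores.values()) / len(scores), 2))  # 20.02, matching the table
```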