leaderboard-pr-bot committed
Commit 0cb36d6
1 Parent(s): fc5ad3e

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
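For reference, once a PR like this is merged, the `model-index` metadata it adds can be read back programmatically. Below is a minimal sketch assuming the `huggingface_hub` Python package is installed; the repo id `nextai-team/Moe-2x7b-QA-Code` is taken from the leaderboard URLs in the diff, and the exact shape of the `ModelCardData.to_dict()` output is an assumption, not something stated in this PR.

```python
# Sketch: read the evaluation results added by this PR back out of the model card.
# Assumes huggingface_hub is installed and the PR has been merged into the repo.
from huggingface_hub import ModelCard

card = ModelCard.load("nextai-team/Moe-2x7b-QA-Code")
data = card.data.to_dict()

# The leaderboard bot stores results under the `model-index` key of the YAML front matter.
for entry in data.get("model-index", []):
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result["metrics"]:
            print(f"{dataset}: {metric['type']} = {metric['value']}")
```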

Files changed (1)
  1. README.md +120 -4
README.md CHANGED

@@ -1,4 +1,7 @@
 ---
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - code
@@ -6,11 +9,111 @@ tags:
 - qa
 - assistant
 - reasoning
-license: apache-2.0
-language:
-- en
 metrics:
 - code_eval
+model-index:
+- name: Moe-2x7b-QA-Code
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 65.19
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 85.36
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 61.71
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 65.23
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 77.35
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 49.66
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nextai-team/Moe-2x7b-QA-Code
+      name: Open LLM Leaderboard
 ---
 
 
@@ -73,4 +176,17 @@ The model was trained using a Mixture of Experts (MoE) approach, allowing it to
 Moe-2x7b-QA-Code employs an advanced MoE architecture with 2x7 billion parameters, optimized for high performance in QA and coding tasks. This architecture enables the model to efficiently process and generate accurate responses to complex queries.
 
 **Contact**
-Https://nextai.co.in
+Https://nextai.co.in
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nextai-team__Moe-2x7b-QA-Code)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |67.42|
+|AI2 Reasoning Challenge (25-Shot)|65.19|
+|HellaSwag (10-Shot)              |85.36|
+|MMLU (5-Shot)                    |61.71|
+|TruthfulQA (0-shot)              |65.23|
+|Winogrande (5-shot)              |77.35|
+|GSM8k (5-shot)                   |49.66|
+
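As a quick sanity check, the Avg. row in the added table matches the unweighted mean of the six benchmark scores (assuming the leaderboard aggregate is a simple arithmetic mean, which is not stated explicitly in this PR):

```python
# Verify that the Avg. row (67.42) is the plain mean of the six scores in the README diff.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 65.19,
    "HellaSwag (10-Shot)": 85.36,
    "MMLU (5-Shot)": 61.71,
    "TruthfulQA (0-shot)": 65.23,
    "Winogrande (5-shot)": 77.35,
    "GSM8k (5-shot)": 49.66,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 67.42, matching the Avg. row in the added table
```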