beberik leaderboard-pr-bot committed
Commit dfef26f
1 Parent(s): cf8144d

Adding Evaluation Results (#1)


- Adding Evaluation Results (8f8fccb526a46436460411af6386e89570dac3ec)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
README.md +117 -1
README.md CHANGED
@@ -3,6 +3,109 @@ license: cc-by-nc-4.0
 tags:
 - merge
 - llama
+model-index:
+- name: TinyExperts-v0-4x1B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 31.4
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 52.29
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 25.87
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 41.13
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 60.14
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.53
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/TinyExperts-v0-4x1B
+      name: Open LLM Leaderboard
 ---
 Some experiment with [moe merge](https://github.com/cg123/mergekit/tree/mixtral).
 
@@ -21,4 +124,17 @@ Answer: the meaning of life is to ask yourself questions that make you think abo
 
 ```
 
-But seriously if you need something that can at least be useful, then it's better to use [phi](https://huggingface.co/models?sort=trending&search=microsoft%2Fphi).
+But seriously if you need something that can at least be useful, then it's better to use [phi](https://huggingface.co/models?sort=trending&search=microsoft%2Fphi).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |35.23|
+|AI2 Reasoning Challenge (25-Shot)|31.40|
+|HellaSwag (10-Shot) |52.29|
+|MMLU (5-Shot) |25.87|
+|TruthfulQA (0-shot) |41.13|
+|Winogrande (5-shot) |60.14|
+|GSM8k (5-shot) | 0.53|
+
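For context on the "moe merge" the README credits: the [mixtral branch of mergekit](https://github.com/cg123/mergekit/tree/mixtral) provides a `mergekit-moe` entry point that assembles a Mixtral-style sparse MoE from separately trained dense models, routed by the similarity of hidden states to per-expert prompts. The config below is a minimal sketch of that workflow, not the author's actual recipe; the base model, expert repos, and routing prompts are illustrative assumptions inferred only from the 4x1B name (four 1B-class experts).

```yaml
# moe-config.yml - hypothetical mergekit-moe recipe; the real experts and
# prompts behind TinyExperts-v0-4x1B are not recorded in this diff.
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0   # assumed donor for attention/embeddings
gate_mode: hidden     # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0                   # assumed expert 1
    positive_prompts:
      - "general chat and conversation"
  - source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T  # assumed expert 2
    positive_prompts:
      - "factual knowledge and reasoning questions"
  # ...two more expert entries of the same shape would give the 4x1B layout
```

Under those assumptions, `mergekit-moe moe-config.yml ./TinyExperts-v0-4x1B` would write a merged checkpoint that loads like any Mixtral-architecture model.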