zhengr committed
Commit 16bf59f
1 Parent(s): 1456aae

Update README.md

Files changed (1):
1. README.md +14 -8
README.md CHANGED

@@ -20,7 +20,8 @@ model-index:
       value: 73.81
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -36,7 +37,8 @@ model-index:
       value: 89.22
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -53,7 +55,8 @@ model-index:
       value: 64.92
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -69,7 +72,8 @@ model-index:
     - type: mc2
       value: 78.57
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -86,7 +90,8 @@ model-index:
       value: 87.37
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -103,7 +108,8 @@ model-index:
       value: 71.11
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1
       name: Open LLM Leaderboard
 ---
 
@@ -117,6 +123,7 @@ This model is mainly used for large model technology experiments, and increasing
 | --- | --- |
 |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1y2XmAGrQvVfbgtimTsCBO3tem735q7HZ?usp=sharing) | MixTAO-7Bx2-MoE-v8.1 |
 |[mixtao-7bx2-moe-v8.1.Q4_K_M.gguf](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF/resolve/main/mixtao-7bx2-moe-v8.1.Q4_K_M.gguf) | GGUF of MixTAO-7Bx2-MoE-v8.1 <br> Only Q4_K_M in https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF |
+| Demo Space | https://zhengr-mixtao-7bx2-moe-v8-1.hf.space/ |
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zhengr__MixTAO-7Bx2-MoE-v8.1)
@@ -129,5 +136,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |MMLU (5-Shot) |64.92|
 |TruthfulQA (0-shot) |78.57|
 |Winogrande (5-shot) |87.37|
-|GSM8k (5-shot) |71.11|
-
+|GSM8k (5-shot) |71.11|
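Note: every change inside the `model-index` front matter above is cosmetic. Each long leaderboard `url` is rewritten as a YAML folded block scalar (`>-`), which moves the value onto its own indented line while stripping the trailing newline, so the parsed string is unchanged. A minimal sketch verifying this with PyYAML (the `yaml` package is an assumption; any compliant YAML parser behaves the same way):

```python
import yaml  # PyYAML, assumed installed: pip install pyyaml

URL = "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=zhengr/MixTAO-7Bx2-MoE-v8.1"

# Plain scalar, as in the old front matter.
inline = yaml.safe_load(f"url: {URL}\n")

# Folded block scalar with strip chomping (>-), as in the new front matter:
# the indented line folds into one string and the final newline is dropped.
folded = yaml.safe_load(f"url: >-\n  {URL}\n")

assert inline["url"] == folded["url"] == URL  # identical value either way
```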
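For the Q4_K_M GGUF file linked in the table, a minimal local-inference sketch using llama-cpp-python; the package choice, local file path, context size, and prompt wording are all assumptions, since the README itself does not prescribe a loader or a prompt template:

```python
from llama_cpp import Llama  # llama-cpp-python, assumed installed

# Assumes mixtao-7bx2-moe-v8.1.Q4_K_M.gguf was downloaded from the
# zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF repo into the working directory.
llm = Llama(model_path="mixtao-7bx2-moe-v8.1.Q4_K_M.gguf", n_ctx=4096)

# Plain-text completion; swap in the model's chat template if one is published.
out = llm("Explain what a mixture-of-experts language model is.", max_tokens=128)
print(out["choices"][0]["text"])
```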