ibivibiv leaderboard-pr-bot committed
Commit b2f3be3
Parent: 3d45ea8

Adding Evaluation Results (#1)

- Adding Evaluation Results (780ea06d6de89fb8668b1cec3b4a0297a0c6e4a7)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +118 -2
README.md CHANGED
@@ -1,13 +1,129 @@
  ---
- license: llama2
  language:
  - en
+ license: llama2
  tags:
  - moe
+ model-index:
+ - name: orthorus-125b-moe
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 67.66
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 85.52
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 68.94
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 56.27
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 82.32
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 56.79
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
+       name: Open LLM Leaderboard
  ---

  ![img](./orthorus.png)

  This is a test run for a future MoE model built from 70B-parameter base models. I took WizardLM/WizardLM-70B-V1.0 and migtissera/Synthia-70B as the two base models and wrote discriminator prompts that route technical, logic, and math questions to the Wizard expert and creative or conversational questions to the Synthia expert. Now that this is working, I am moving on to fine-tuning models for more specific tasks. This model takes about 240GB of VRAM for full-precision inference. As far as I know, it is the first publicly available 125B-parameter MoE model. I plan on making more and sharing them, of course.

- Hopefully I can add more info on this model soon; it loads perfectly for me and responds nicely. It might take me a bit, since I want to build "Cerberus" from the fine-tuned models and get it released. In the meantime, enjoy this one; it is a llama2-licensed model.
+ Hopefully I can add more info on this model soon; it loads perfectly for me and responds nicely. It might take me a bit, since I want to build "Cerberus" from the fine-tuned models and get it released. In the meantime, enjoy this one; it is a llama2-licensed model.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__orthorus-125b-moe).
+
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 69.58 |
+ | AI2 Reasoning Challenge (25-Shot) | 67.66 |
+ | HellaSwag (10-Shot)               | 85.52 |
+ | MMLU (5-Shot)                     | 68.94 |
+ | TruthfulQA (0-shot)               | 56.27 |
+ | Winogrande (5-shot)               | 82.32 |
+ | GSM8k (5-shot)                    | 56.79 |
+
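
The card in this diff describes a two-expert routing scheme: discriminator prompts steer technical, logic, and math questions to the WizardLM expert, and creative or conversational questions to the Synthia expert. The toy sketch below only illustrates that routing idea at the prompt level; it is not the model's actual gate, which is a learned network operating on hidden states inside the transformer, and the keyword list is invented purely for illustration.

```python
# Toy illustration of the prompt-routing idea from the model card above.
# NOT the model's real gating network (that scores hidden states inside
# the transformer); the keyword list is entirely made up for this demo.
TECHNICAL_HINTS = {"prove", "calculate", "solve", "debug", "algorithm", "equation"}

def route(prompt: str) -> str:
    """Pick an expert for a prompt via crude keyword matching."""
    words = set(prompt.lower().split())
    if words & TECHNICAL_HINTS:
        return "WizardLM/WizardLM-70B-V1.0"  # technical / logic / math side
    return "migtissera/Synthia-70B"          # creative / conversational side

print(route("Solve x^2 - 4 = 0 for x"))                  # -> Wizard side
print(route("Tell me a story about a two-headed wolf"))  # -> Synthia side
```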
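
Separately, since the card quotes roughly 240GB of VRAM for full-precision inference, here is a minimal, hypothetical loading sketch using Hugging Face transformers. It assumes a multi-GPU node with the accelerate package installed so the weights can be sharded automatically; actual memory use will depend on your hardware and chosen dtype.

```python
# Minimal sketch: load ibivibiv/orthorus-125b-moe sharded across GPUs.
# Assumes `transformers` and `accelerate` are installed and that combined
# GPU memory is sufficient (the card estimates ~240GB at full precision;
# float16 roughly halves that versus float32).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/orthorus-125b-moe"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory relative to float32
    device_map="auto",          # let accelerate shard layers across GPUs
)

prompt = "Explain why mixture-of-experts models can be cheaper to run than dense models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```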