yuuko-eth committed on
Commit 2dabe38
1 Parent(s): becb9af

Update README.md

Files changed (1)
  1. README.md +25 -0
README.md CHANGED
@@ -1,3 +1,28 @@
  ---
  license: unknown
+ tags:
+ - merge
+ - mergekit
+ - WizardLM/WizardMath-7B-V1.1
+ - teknium/OpenHermes-2.5-Mistral-7B
+ - mlabonne/Marcoro14-7B-slerp
  ---
+
+ # DraftReasoner-2x7B-MoE-v0.1
+
+ Experimental 2-expert MoE merge using mlabonne/Marcoro14-7B-slerp as base.
+
+ * [Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) as base.
+ * [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) as model 0.
+ * [WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) as model 1.
+
+ ![](https://i.imgur.com/cDQS6rq.jpg)
+
+ ## Notes
+
+ Please evaluate before use in any application pipeline. Activation keywords for the math expert are `'math'`, `'reason'`, `'solve'`, and `'count'`.
+
+
+
+
+
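The activation keywords in the Notes read like `positive_prompts` for the math expert in a mergekit-moe config. The sketch below is a hypothetical reconstruction, not the author's actual config: only the base model, the two experts, and the math-expert keywords come from the card, while `gate_mode`, `dtype`, and the prompts for the OpenHermes expert are illustrative assumptions.

```yaml
# Hypothetical mergekit-moe config approximating this merge.
# Grounded in the card: base model, the two experts, and the
# math-expert keywords. Assumptions: gate_mode, dtype, and the
# chat-expert prompts.
base_model: mlabonne/Marcoro14-7B-slerp
gate_mode: hidden        # assumption: hidden-state gating
dtype: bfloat16          # assumption
experts:
  - source_model: teknium/OpenHermes-2.5-Mistral-7B   # model 0
    positive_prompts:
      - "chat"           # assumption: general-purpose prompts, not from the card
      - "assistant"
  - source_model: WizardLM/WizardMath-7B-V1.1         # model 1
    positive_prompts:
      - "math"           # keywords listed in the Notes section
      - "reason"
      - "solve"
      - "count"
```

With mergekit installed, a config like this is typically turned into a merged checkpoint with the `mergekit-moe` command, e.g. `mergekit-moe config.yaml ./output-model` (the output path here is only a placeholder).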