fernandofernandes committed
Commit: d49ae4a
Parent: 1149a16

Update README.md

Files changed (1):
1. README.md (+8, -0)
README.md CHANGED
@@ -14,6 +14,14 @@ This model has half size in comparison to the Mixtral 8x7b Instruct.
 
 Used models (all lasered using laserRMT, except for the base model):
 
+# Beyonder-4x7B-v2
+
+This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch); an illustrative merge config is sketched after the diff. It uses the following base models:
+* [cognitivecomputations/dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)
+* [mlabonne/Marcoro14-7B-slerp (base)](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
+* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
+* [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
+* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
 
 *mlabonne/Marcoro14-7B-slerp (base)
 
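For readers unfamiliar with the mixtral branch of mergekit: MoE merges like the one added above are driven by a small YAML file that names the shared base model and one expert entry per source model. The sketch below is a hypothetical reconstruction, not the config actually used for this commit; the `gate_mode`, `dtype`, and every `positive_prompts` value are illustrative placeholders, while the model names are taken from the list in the diff.

```yaml
# Hypothetical mergekit-moe config (mixtral branch) for a 4x7B MoE.
# Model names come from the README diff above; all routing prompts
# are illustrative placeholders, not the ones used for this model.
base_model: mlabonne/Marcoro14-7B-slerp
gate_mode: hidden      # route tokens by hidden-state affinity to the prompts
dtype: bfloat16
experts:
  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo
    positive_prompts:
      - "chat"
      - "explain"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
      - "code"
      - "debug"
  - source_model: Q-bert/MetaMath-Cybertron-Starling
    positive_prompts:
      - "reason"
      - "math"
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts:
      - "solve"
      - "step by step"
```

With a mixtral-branch checkout of mergekit installed, a config like this is typically run as `mergekit-moe config.yaml ./merged-model` (exact CLI flags vary between mergekit versions).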