Kearm committed on
Commit b451972 (1 parent: d6a69db)

Update README.md

Files changed (1): README.md +34 -1
README.md CHANGED
@@ -30,4 +30,37 @@ Original model: https://huggingface.co/cognitivecomputations/laserxtral
 
  Credit to Bartowski for help and model card formatting
 
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/iToMZFTp1DuXnpw9oJ61y.jpeg)
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/iToMZFTp1DuXnpw9oJ61y.jpeg)
+
+ ## Original Model Card Below
+
+ ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/646e57a5cb6ea6e6b6df1ad4/BtnWsqZnaG1I6aa-Ldkfz.webp)
+
+ by David, Fernando and Eric
+
+ Sponsored by: [VAGO Solutions](https://vago-solutions.de)
+
+ Join our Discord! https://discord.gg/vT3sktQ3zb
+
+ An experiment in 'lasering' each expert to denoise and enhance the model's capabilities.
+
+ This model is half the size of Mixtral 8x7b Instruct (four experts instead of eight) while offering essentially the same level of performance (we are working to improve its MMLU score).
+
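As a rough sanity check on the size comparison, here is a back-of-the-envelope estimate assuming both models follow the standard Mistral-7B geometry (32 layers, hidden size 4096, FFN size 14336, grouped-query attention with 8 KV heads, 32k vocabulary) and ignoring small terms such as the router and norm weights:

```python
# Rough MoE parameter-count estimate; the geometry values below are assumptions
# based on the standard Mistral-7B configuration, not figures from this card.
layers, hidden, ffn, vocab, heads, kv_heads = 32, 4096, 14336, 32000, 32, 8

head_dim = hidden // heads
attn_per_layer = 2 * hidden * hidden + 2 * hidden * head_dim * kv_heads  # q, o + k, v projections
expert_ffn_per_layer = 3 * hidden * ffn                                  # gate, up, down projections

def approx_params(num_experts: int) -> float:
    """Approximate total parameters (in billions) for a Mixtral-style MoE."""
    shared = layers * attn_per_layer + 2 * vocab * hidden  # attention + embeddings / LM head
    experts = layers * num_experts * expert_ffn_per_layer
    return (shared + experts) / 1e9

print(f"8 experts: ~{approx_params(8):.1f}B parameters")  # ~46.7B (Mixtral 8x7b)
print(f"4 experts: ~{approx_params(4):.1f}B parameters")  # ~24.2B (this model)
```

Under these assumptions, eight experts land near Mixtral 8x7b's roughly 47B parameters, while four experts come out around 24B, i.e. roughly half.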
+
+ # Laserxtral - 4x7b (all, except for base, lasered using laserRMT)
+
+ This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models (a loading sketch follows the list):
+ * [cognitivecomputations/dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)
+ * [mlabonne/Marcoro14-7B-slerp (base)](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
+ * [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
+ * [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
+ * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
+
+
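A minimal sketch of loading the merged model with Hugging Face `transformers`; the repo id comes from the "Original model" link at the top of this card, and the dtype and generation settings are illustrative assumptions:

```python
# Minimal loading/inference sketch; the model id is taken from the "Original model"
# link above, and the remaining settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/laserxtral"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available devices (requires accelerate)
    torch_dtype="auto",  # keep the checkpoint's native dtype
)

prompt = "Write a short Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```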
+ It follows the implementation of laserRMT: https://github.com/cognitivecomputations/laserRMT
+
+ Here we examine the layers, identify those with lower signal-to-noise ratios (the ones most affected by noise), and apply LASER interventions to them, using the Marchenko-Pastur law to estimate this ratio.
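As a rough illustration of that idea (this is a simplified sketch, not the laserRMT implementation), the snippet below scores a single weight matrix by comparing its singular values against the Marchenko-Pastur upper edge expected of a pure-noise matrix of the same shape; the noise-variance estimate and the thresholding rule are simplifying assumptions:

```python
# Illustrative Marchenko-Pastur-based SNR heuristic for one weight matrix.
# A simplified sketch under stated assumptions, not the actual laserRMT code.
import numpy as np

def mp_snr(weight: np.ndarray) -> float:
    """Ratio of 'signal' to 'noise' energy in the singular spectrum.

    Singular values above the Marchenko-Pastur upper edge (for a random matrix
    of the same shape and element variance) count as signal; the rest as noise.
    """
    m, n = weight.shape
    s = np.linalg.svd(weight, compute_uv=False)
    sigma = weight.std()                         # crude estimate of the noise scale
    mp_edge = sigma * (np.sqrt(m) + np.sqrt(n))  # largest singular value of a pure-noise matrix
    signal = s[s > mp_edge]
    noise = s[s <= mp_edge]
    return float((signal ** 2).sum() / ((noise ** 2).sum() + 1e-12))
```

Under this heuristic, layers with lower scores would be the first candidates for a LASER-style low-rank truncation.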
+
+ We intend this to be the first of a family of experiments carried out at Cognitive Computations.
+
+ In this experiment we have observed very high truthfulness and strong reasoning capabilities.