ehartford committed on
Commit 6733b03
1 Parent(s): bab62aa

Update README.md

Files changed (1):
  1. README.md +6 -1
@@ -24,7 +24,6 @@ This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com
24
  * [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
25
  * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
26
 
27
-
28
  It follows the implementation of laserRMT @ https://github.com/cognitivecomputations/laserRMT
29
 
30
  Here, we are controlling layers checking which ones have lower signal to noise ratios (which are more subject to noise), to apply Laser interventions, still using Machenko Pastur to calculate this ratio.
@@ -32,3 +31,9 @@ Here, we are controlling layers checking which ones have lower signal to noise r
32
  We intend to be the first of a family of experimentations being carried out @ Cognitive Computations.
33
 
34
  In this experiment we have observed very high truthfulness and high reasoning capabilities.
 
 
 
 
 
 
 
24
  * [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
25
  * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
26
 
 
27
  It follows the implementation of laserRMT @ https://github.com/cognitivecomputations/laserRMT
28
 
29
  Here, we are controlling layers checking which ones have lower signal to noise ratios (which are more subject to noise), to apply Laser interventions, still using Machenko Pastur to calculate this ratio.
 
31
  We intend to be the first of a family of experimentations being carried out @ Cognitive Computations.
32
 
33
  In this experiment we have observed very high truthfulness and high reasoning capabilities.
34
+
35
+ # Evals
36
+
37
+
38
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/j_fg_zwGXC1RS9npuJMAK.png)
39
+
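The README text in this diff describes ranking layers by signal-to-noise ratio, computed from the Marchenko–Pastur law (the "Machenko Pastur" referenced above), and applying LASER to the noisiest ones. A minimal sketch of that idea is below; it is an illustration, not laserRMT's actual code, and the noise-level estimate, threshold, and layer names are all assumptions for the example.

```python
import numpy as np

def marchenko_pastur_edge(sigma: float, m: int, n: int) -> float:
    """Upper edge of the Marchenko-Pastur bulk for the singular values of an
    m x n matrix whose entries are i.i.d. noise with standard deviation sigma."""
    beta = min(m, n) / max(m, n)
    return sigma * np.sqrt(max(m, n)) * (1.0 + np.sqrt(beta))

def layer_snr(weight: np.ndarray) -> float:
    """Crude signal-to-noise ratio for one weight matrix: energy of singular
    values above the Marchenko-Pastur noise edge vs. energy at or below it."""
    s = np.linalg.svd(weight, compute_uv=False)
    m, n = weight.shape
    # Rough noise-level estimate from the median singular value (assumption:
    # most of the spectrum is noise, so the median sits inside the MP bulk).
    sigma = np.median(s) / np.sqrt(max(m, n))
    edge = marchenko_pastur_edge(sigma, m, n)
    signal = float(np.sum(s[s > edge] ** 2))
    noise = float(np.sum(s[s <= edge] ** 2)) + 1e-12
    return signal / noise

# Toy layers: pure-noise matrices, plus one with a strong low-rank "signal"
# component so the SNRs actually differ.
rng = np.random.default_rng(0)
layers = {f"layer_{i}": rng.normal(size=(256, 128)) for i in range(4)}
layers["layer_0"] += 3.0 * np.outer(rng.normal(size=256), rng.normal(size=128))

# Layers with the lowest SNR (most noise-dominated) come first; in the
# scheme the README describes, these would be the LASER candidates.
ranking = sorted(layers, key=lambda name: layer_snr(layers[name]))
print(ranking)
```

The ranking then drives which layers receive the low-rank (LASER-style) intervention first; the actual criterion used by laserRMT may weight or threshold the spectrum differently.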