
by David, Fernando and Eric

Sponsored by: VAGO Solutions

Discord: https://discord.gg/cognitivecomputations

An experiment in 'lasering' each expert in order to denoise the weights and enhance the model's capabilities.

This model is half the size of Mixtral 8x7b Instruct and offers essentially the same level of performance (we are working to improve its MMLU score).
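As a quick reference, here is a minimal sketch of loading and querying the model with Hugging Face transformers. It assumes the tokenizer ships a chat template; the prompt, dtype, and generation settings are illustrative rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/laserxtral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt using the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "Explain what 'lasering' a model means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```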

Laserxtral - 4x7b (all experts, except the base model, lasered using laserRMT)

This model is a Mixture of Experts (MoE) made with mergekit (mixtral branch). It uses the following base models:

It follows the laserRMT implementation at https://github.com/cognitivecomputations/laserRMT

Here we inspect the layers, identifying those with the lowest signal-to-noise ratios (i.e., the ones most affected by noise), and apply LASER interventions to them, using the Marchenko-Pastur law to estimate that ratio.
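The actual implementation lives in the laserRMT repository linked above; the sketch below only illustrates the idea and is not the repository's code. It assumes a Marchenko-Pastur-style bulk edge that splits a weight matrix's singular values into "signal" and "noise", plus a LASER step that keeps a truncated-SVD approximation; the function names, threshold formula, and rank choice are illustrative.

```python
import torch

def marchenko_pastur_threshold(sigma: float, n_rows: int, n_cols: int) -> float:
    # Approximate bulk edge of the singular-value spectrum of an n_rows x n_cols
    # random matrix whose entries have standard deviation `sigma`; singular
    # values above this edge are treated as signal, the rest as noise.
    return sigma * (n_rows ** 0.5 + n_cols ** 0.5)

def layer_snr(weight: torch.Tensor) -> float:
    # Rough signal-to-noise estimate for one weight matrix: total singular-value
    # mass above the Marchenko-Pastur edge divided by the mass below it.
    w = weight.float()
    s = torch.linalg.svdvals(w)
    edge = marchenko_pastur_threshold(w.std().item(), *w.shape)
    signal = s[s > edge].sum().item()
    noise = s[s <= edge].sum().item()
    return signal / noise if noise > 0 else float("inf")

def laser(weight: torch.Tensor, keep_rank: int) -> torch.Tensor:
    # LASER-style intervention: return a rank-`keep_rank` truncated-SVD
    # approximation of the matrix, discarding the noisy tail of the spectrum.
    u, s, vh = torch.linalg.svd(weight.float(), full_matrices=False)
    return ((u[:, :keep_rank] * s[:keep_rank]) @ vh[:keep_rank, :]).to(weight.dtype)

# Illustrative use: rank a model's 2-D weight matrices by SNR, then laser the
# noisiest ones (the layer selection and rank choice here are hypothetical).
# snr = {name: layer_snr(p) for name, p in model.named_parameters() if p.ndim == 2}
```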

We intend this to be the first in a family of experiments carried out at Cognitive Computations.

In this experiment we have observed very high truthfulness and strong reasoning capabilities.

Evals

(Evaluation results chart)
