(Model card image: Fenris.png)

Fenris M1 BasicData GGUF - Q8_0 - FP16

Laser-Dolphin-Mixtral-2x7b-dpo - GGUF

Credit to Fernando Fernandes and Eric Hartford for their laserRMT project: https://github.com/cognitivecomputations/laserRMT

Credit to Tim Dolan for the source code.

Overview:

This model is a medium-sized Mixture-of-Experts (MoE) implementation based on cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser.

Finetuned from: mistralai/Mistral-7B-v0.1

Format: GGUF
Model size: 12.9B params
Architecture: llama
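A minimal sketch of loading one of the GGUF files with llama-cpp-python, assuming the Q8_0 quant; the filename, context size, and prompt below are placeholders, so point them at whichever file you actually download.

```python
# Minimal sketch: running the Q8_0 GGUF with llama-cpp-python.
# The filename is an assumption -- replace it with the quant you downloaded (Q8_0 or FP16).
from llama_cpp import Llama

llm = Llama(
    model_path="laser-dolphin-mixtral-2x7b-dpo.Q8_0.gguf",  # assumed filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a Mixture-of-Experts model is."}]
)
print(response["choices"][0]["message"]["content"])
```

llama-cpp-python will generally pick up a chat template stored in the GGUF metadata; if it does not, pass an explicit chat_format or format the prompt manually to match the model's expected template.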
