
Barcenas Mixtral 8x7b, based on argilla/notux-8x7b-v1

This is a 4-bit quantized version of that model, making it more accessible to users.

Trained with DPO and built on Mixture-of-Experts (MoE) technology, it is a powerful and innovative model.
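
Below is a minimal usage sketch for loading the 4-bit checkpoint with the transformers library (bitsandbytes installed for the quantized weights). The chat-template call assumes the tokenizer ships a chat template inherited from the Mixtral-based upstream model; generation settings are illustrative only.

```python
# Minimal sketch: load the 4-bit model and generate a short chat reply.
# Assumes transformers, torch, and bitsandbytes are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-Mixtral-8x7b-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt (assumes a chat template is defined for this tokenizer).
messages = [{"role": "user", "content": "Hola, ¿quién eres?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```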

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽

Model size: 24.2B parameters (Safetensors) · Tensor types: F32, FP16, U8