
Dolphin 2.6 Mistral 7b - DPO 🐬

This is a quantized GGUF version of dolphin-2.6-mistral-7b-dpo, provided as 4_0 and 8_0 bit quantizations along with the converted FP16 model.

(Link to the original model: https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo)
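
For reference, here is a minimal sketch of loading one of these quantized files with llama-cpp-python. The local filename, context size, and ChatML prompt format below are assumptions, not fixed by this card, so adjust them to the file you actually download.

```python
# Minimal sketch: run a quantized GGUF file with llama-cpp-python.
# The filename (Q8_0 variant) and ChatML prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.6-mistral-7b-dpo.Q8_0.gguf",  # assumed local filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU only
)

# Dolphin models are typically prompted with ChatML-style tags (assumption).
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the ocean.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```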

