QuantFactory/Mistral-7B-Instruct-DPO-GGUF

This is a quantized version of princeton-nlp/Mistral-7B-Instruct-DPO created using llama.cpp.

Model Description

This model was released with the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.
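
A minimal usage sketch, assuming you want to run the GGUF weights locally with llama-cpp-python. The quantization filename pattern below is an assumption for illustration; replace it with one of the .gguf files actually published in this repository.

```python
# Sketch: chat with the GGUF weights via llama-cpp-python.
# Assumption: the filename pattern is illustrative; point it at a real
# .gguf file from this repo (e.g. a 4-bit or 8-bit variant).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Mistral-7B-Instruct-DPO-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; adjust to the actual file name
    n_ctx=4096,               # context window
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain DPO in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```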

Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
