---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Instruct-DPO
---
# QuantFactory/Mistral-7B-Instruct-DPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Instruct-DPO](https://huggingface.co/princeton-nlp/Mistral-7B-Instruct-DPO), created with llama.cpp.
# Model Description
This model was released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
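As a quick-start sketch, the GGUF files in this repo can be run locally with llama.cpp. The quant filename below is an assumption for illustration; check this repo's file list for the actual quant names available.

```shell
# Download one quant from this repo (filename is an assumption; pick one from the repo's file list)
huggingface-cli download QuantFactory/Mistral-7B-Instruct-DPO-GGUF \
  Mistral-7B-Instruct-DPO.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI: -m model file, -p prompt, -n max tokens to generate
./llama-cli -m Mistral-7B-Instruct-DPO.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```

Smaller quants (e.g. Q4) trade some quality for lower memory use; larger ones (e.g. Q8) stay closer to the original weights.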