---
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Instruct-SLiC-HF
library_name: transformers
---

![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&)

# QuantFactory/Mistral-7B-Instruct-SLiC-HF-GGUF

This is a quantized version of [princeton-nlp/Mistral-7B-Instruct-SLiC-HF](https://huggingface.co/princeton-nlp/Mistral-7B-Instruct-SLiC-HF), created using llama.cpp.

# Original Model Card

This model was released with the preprint [SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734). Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.