---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Llama-3-Instruct-8B-DPO
---

# QuantFactory/Llama-3-Instruct-8B-DPO-GGUF

This is a quantized version of [princeton-nlp/Llama-3-Instruct-8B-DPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-DPO), created using llama.cpp.
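
A minimal sketch of loading one of the GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename and quantization variant below are assumptions, so check the repository's file list for the exact GGUF you want to use.

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is hypothetical; replace it with the actual GGUF
# file downloaded from this repo (e.g. a Q4_K_M or Q8_0 variant).
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-Instruct-8B-DPO.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain preference optimization in one sentence."}
    ]
)
print(response["choices"][0]["message"]["content"])
```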

## Model Description

This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.