This model was released as part of the preprint [SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734). Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
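
Below is a minimal usage sketch, assuming this is an instruction-tuned causal language model hosted on the Hugging Face Hub and loadable with `transformers`. The model ID is a placeholder; substitute the ID shown on this model page.

```python
# Minimal usage sketch; the model ID below is a placeholder for this repository's ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/<this-model-id>"  # placeholder; replace with this model's Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a single-turn conversation with the model's chat template.
messages = [{"role": "user", "content": "What is preference optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly generated tokens.
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```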