Direct Preference Optimization: Your Language Model is Secretly a Reward Model • Paper • 2305.18290 • Published May 29, 2023
Zephyr 7B Collection • Models, datasets, and demos associated with Zephyr 7B. For code to train the models, see: https://github.com/huggingface/alignment-handbook • 9 items • Updated Apr 12, 2024