Collections


Collections including paper arxiv:2403.07691
Zephyr ORPO
Models and datasets to align LLMs with Odds Ratio Preference Optimisation (ORPO). Recipes here: https://github.com/huggingface/alignment-handbook
Foundation AI Papers (II)
About ORPO
Contains information and experiments on fine-tuning LLMs with 🤗 `trl.ORPOTrainer`
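Since both collections center on the ORPO paper (arxiv:2403.07691), a minimal sketch of its core idea may help: ORPO adds an odds-ratio penalty to the standard SFT loss, rewarding the preferred completion's odds over the dispreferred one's. The function name, the `beta` weighting parameter (the paper's λ), and the use of plain scalar probabilities are illustrative assumptions, not `trl`'s actual API.

```python
import math


def orpo_odds_ratio_loss(p_chosen: float, p_rejected: float, beta: float = 0.1) -> float:
    """Illustrative odds-ratio term of the ORPO loss for one preference pair.

    p_chosen / p_rejected stand in for the model's (length-normalized)
    sequence probabilities of the preferred and dispreferred completions.
    """
    odds = lambda p: p / (1.0 - p)  # odds of a probability
    log_odds_ratio = math.log(odds(p_chosen) / odds(p_rejected))
    # -log sigmoid(log odds ratio); in ORPO this term is scaled by beta
    # and added to the usual SFT cross-entropy loss on the chosen answer.
    return -beta * math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
```

When the model assigns equal probability to both completions the term reduces to `beta * log 2`; as the chosen completion's odds grow relative to the rejected one's, the penalty shrinks toward zero.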