WPO
Collection
Models trained in the paper "WPO: Enhancing RLHF with Weighted Preference Optimization".
Llama3-Instruct-8B model fine-tuned with hybrid WPO (GPT-4-turbo + on-policy sampling + Ultrafeedback). Details are in the paper "WPO: Enhancing RLHF with Weighted Preference Optimization".
Compared to the Llama3-Instruct-8B-WPO-HB model, it uses an enhanced preference-data construction method.
This model is released under the Zoom software license and may be used only for noncommercial, educational, or academic research purposes.