- KTO: Model Alignment as Prospect Theoretic Optimization
  Paper • 2402.01306 • Published • 14
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 45
- SimPO: Simple Preference Optimization with a Reference-Free Reward
  Paper • 2405.14734 • Published • 9
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
  Paper • 2408.06266 • Published • 9
Collections including paper arxiv:2403.07691
- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 46
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 73
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 60
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 108
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 60
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
  Paper • 2404.07738 • Published • 2
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
  Paper • 2405.01535 • Published • 114
- A General Theoretical Paradigm to Understand Learning from Human Preferences
  Paper • 2310.12036 • Published • 12
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 60
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 45