Vigneshwaran (Vignesh)
AI & ML interests: None yet
Collections: 4
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 61
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 38
- Teaching Large Language Models to Reason with Reinforcement Learning
  Paper • 2403.04642 • Published • 46
- Best Practices and Lessons Learned on Synthetic Data for Language Models
  Paper • 2404.07503 • Published • 29
Models: None public yet
Datasets: None public yet