taesiri committed
Commit 82a2e37
1 Parent(s): 44db81a

Upload abstract/2304.05302.txt with huggingface_hub

Files changed (1)
  1. abstract/2304.05302.txt +1 -0
abstract/2304.05302.txt ADDED
@@ -0,0 +1 @@
+ "Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models with human preferences, significantly enhancing the quality of interactions between humans and these models. InstructGPT implements RLHF through several stages, including Supervised Fine-Tuning (SFT), reward model training, and Proximal Policy Optimization (PPO). PPO, however, is sensitive to hyperparameters and requires a minimum of four models in its standard implementation, which makes it hard to train. In contrast, we propose a novel learning paradigm called RRHF, which scores responses generated by different sampling policies and learns to align them with human preferences through ranking loss. RRHF can efficiently align language model output probabilities with human preferences as robust as fine-tuning and it only needs 1 to 2 models during tuning. In addition, RRHF can be considered an extension of SFT and reward models while being simpler than PPO in terms of coding, model counts, and hyperparameters. The entire alignment process can be accomplished within a single RRHF training session. We evaluate RRHF using LLaMA and Alpaca on helpful and harmless data, demonstrating performance comparable to PPO."