TRL - Transformer Reinforcement Learning

TRL is a full-stack library providing a set of tools to train transformer language models with reinforcement learning, from supervised fine-tuning (SFT) and reward modeling (RM) to Proximal Policy Optimization (PPO). The library is integrated with 🤗 Transformers.
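
For example, supervised fine-tuning takes only a few lines. The snippet below is a minimal sketch modeled on the TRL quickstart; the facebook/opt-350m checkpoint and the imdb dataset are illustrative placeholders, and argument names may differ across TRL versions.

    # Minimal SFT sketch: fine-tune a causal LM on raw text.
    from datasets import load_dataset
    from trl import SFTTrainer

    dataset = load_dataset("imdb", split="train")  # placeholder dataset

    trainer = SFTTrainer(
        "facebook/opt-350m",        # placeholder model name or path
        train_dataset=dataset,
        dataset_text_field="text",  # dataset column holding the raw text
        max_seq_length=512,
    )
    trainer.train()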

Check the appropriate sections of the documentation depending on your needs:

API documentation

  • Model Classes: A brief overview of what each public model class does.
  • SFTTrainer: Supervised fine-tune your model easily with SFTTrainer.
  • RewardTrainer: Easily train your reward model using RewardTrainer.
  • PPOTrainer: Further fine-tune the supervised fine-tuned model using the PPO algorithm (see the sketch after this list).
  • Best-of-N Sampling: Use best-of-n sampling as an alternative way to sample predictions from your active model.
  • DPOTrainer: Direct Preference Optimization training using DPOTrainer.
  • TextEnvironment: Text environment for training your model using tools with RL.
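
To illustrate the PPO step, here is a minimal sketch of a single optimization step, adapted from the TRL quickstart. The gpt2 checkpoint, the query text, and the constant reward are placeholders; in a real run the reward would come from a reward model, and the exact API may differ between TRL versions.

    # Minimal PPO sketch: one update on a single query/response pair.
    import torch
    from transformers import AutoTokenizer
    from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

    model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token

    config = PPOConfig(batch_size=1, mini_batch_size=1)
    ppo_trainer = PPOTrainer(config, model, tokenizer=tokenizer)

    # Encode a query and let the active model generate a response.
    query_tensor = tokenizer.encode("This morning I went to the ", return_tensors="pt")
    response_tensor = ppo_trainer.generate(
        [query_tensor[0]], return_prompt=False,
        max_new_tokens=20, pad_token_id=tokenizer.eos_token_id,
    )

    # Placeholder reward; in practice, score the response with a reward model.
    reward = [torch.tensor(1.0)]
    stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)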

Examples

Blog posts
