TRL documentation

Trainer

At TRL we support PPO (Proximal Policy Optimization) with an implementation that largely follows the structure introduced in the paper “Fine-Tuning Language Models from Human Preferences” by D. Ziegler et al. [paper, code]. The Trainer and model classes are largely inspired by the transformers.Trainer and transformers.AutoModel classes and adapted for RL.

PPOConfig

class trl.PPOConfig

( model_name: typing.Optional[str] = None steps: typing.Optional[int] = 20000 learning_rate: typing.Optional[float] = 1e-05 adap_kl_ctrl: typing.Optional[bool] = True init_kl_coef: typing.Optional[float] = 0.2 target: typing.Optional[float] = 6 horizon: typing.Optional[float] = 10000 gamma: typing.Optional[float] = 1 lam: typing.Optional[float] = 0.95 cliprange: typing.Optional[float] = 0.2 cliprange_value: typing.Optional[float] = 0.2 vf_coef: typing.Optional[float] = 0.1 batch_size: typing.Optional[int] = 256 forward_batch_size: typing.Optional[int] = 16 ppo_epochs: typing.Optional[int] = 4 remove_unused_columns: typing.Optional[bool] = True log_with: typing.Optional[str] = None tracker_kwargs: typing.Optional[dict] = {} accelerator_kwargs: typing.Optional[dict] = {} tracker_project_name: typing.Optional[str] = 'trl' )

Parameters

  • model_name (str, optional, defaults to None) — Name of model to use - used only for tracking purposes
  • steps (int, optional, defaults to 20000) — Number of training steps
  • learning_rate (float, optional, defaults to 1e-5) — Adam learning rate
  • adap_kl_ctrl (bool, optional, defaults to True) — Use adaptive KL control, otherwise linear
  • init_kl_coef (float, optional, defaults to 0.2) — Initial KL penalty coefficient (used for adaptive and linear control)
  • target (float, optional, defaults to 6) — Target KL value for adaptive KL control
  • horizon (float, optional, defaults to 10000) — Horizon for adaptive KL control
  • gamma (float, optional, defaults to 1) — Gamma parameter for advantage calculation
  • lam (float, optional, defaults to 0.95) — Lambda parameter for advantage calculation
  • cliprange (float, optional, defaults to 0.2) — Range for clipping in PPO policy gradient loss
  • cliprange_value (float, optional, defaults to 0.2) — Range for clipping values in loss calculation
  • vf_coef (float, optional, defaults to 0.1) — Scaling factor for value loss
  • batch_size (int, optional, defaults to 256) — Number of samples per optimisation step
  • forward_batch_size (int, optional, defaults to 16) — Number of samples forward passed through model at a time
  • ppo_epochs (int, optional, defaults to 4) — Number of optimisation epochs per batch of samples
  • remove_unused_columns (bool, optional, defaults to True) — Remove unused columns from the dataset if datasets.Dataset is used
  • log_with (str, optional, defaults to None) — Log with either “wandb” or “tensorboard”, check https://huggingface.co/docs/accelerate/usage_guides/tracking for more details
  • accelerator_kwargs (dict, optional, defaults to {}) — Keyword arguments for the accelerator (e.g. logging_dir)
  • tracker_kwargs (dict, optional, defaults to {}) — Keyword arguments for the tracker (e.g. wandb_project)
  • tracker_project_name (str, optional, defaults to “trl”) — Name of project to use for tracking

Configuration class for PPOTrainer
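
For orientation, here is a minimal construction sketch; the argument names are taken from the signature above and the values shown are purely illustrative.

```python
from trl import PPOConfig

# Minimal, illustrative configuration. Any argument left out falls back
# to the defaults documented above.
config = PPOConfig(
    model_name="gpt2",        # only used for tracking purposes
    learning_rate=1e-5,
    batch_size=256,
    forward_batch_size=16,
    ppo_epochs=4,
    log_with=None,            # or "wandb" / "tensorboard"
)
```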

PPOTrainer

class trl.PPOTrainer

( config: PPOConfig model: PreTrainedModelWrapper ref_model: PreTrainedModelWrapper tokenizer: typing.Union[transformers.tokenization_utils.PreTrainedTokenizer, transformers.tokenization_utils_fast.PreTrainedTokenizerFast] dataset: typing.Union[torch.utils.data.dataset.Dataset, datasets.arrow_dataset.Dataset, NoneType] = None optimizer: typing.Optional[torch.optim.optimizer.Optimizer] = None data_collator = None num_shared_layers: typing.Optional[int] = None lr_scheduler: typing.Optional[torch.optim.lr_scheduler._LRScheduler] = None )

Parameters

  • config (PPOConfig) — Configuration object for PPOTrainer. Check the documentation of PPOConfig for more details.
  • model (PreTrainedModelWrapper) — Model to be optimized, a Hugging Face transformer model with a value head. Check the documentation of PreTrainedModelWrapper for more details.
  • ref_model (PreTrainedModelWrapper, optional) — Reference model to be used for the KL penalty, a Hugging Face transformer model with a causal language modelling head. Check the documentation of PreTrainedModelWrapper for more details. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized, with shared layers.
  • tokenizer (Union[PreTrainedTokenizer, PreTrainedTokenizerFast]) — Tokenizer to be used for encoding the data. Check the documentation of transformers.PreTrainedTokenizer and transformers.PreTrainedTokenizerFast for more details.
  • dataset (Union[torch.utils.data.Dataset, datasets.Dataset], optional) — PyTorch dataset or Hugging Face dataset. This is used to create a PyTorch dataloader. If no dataset is provided, the dataloader must be created outside the trainer: users need to design their own dataloader and make sure the batch size used is the same as the one specified in the configuration object.
  • optimizer (torch.optim.Optimizer, optional) — Optimizer to be used for training. If no optimizer is provided, the trainer will create an Adam optimizer with the learning rate specified in the configuration object.
  • data_collator (DataCollatorForLanguageModeling, optional) — Data collator to be used for training and passed along to the dataloader.
  • num_shared_layers (int, optional) — Number of layers to be shared between the model and the reference model, if no reference model is passed. If no number is provided, all the layers will be shared.
  • lr_scheduler (torch.optim.lr_scheduler, optional) — Learning rate scheduler to be used for training.

The PPOTrainer uses Proximal Policy Optimization to optimise language models.
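
As a minimal sketch of how a trainer might be assembled (assuming trl's AutoModelForCausalLMWithValueHead wrapper for the value-head model and "gpt2" as an arbitrary example checkpoint):

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=256)

# Policy model with a value head, plus a second copy used as the reference
# model for the KL penalty.
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# dataset, optimizer and data_collator are optional, see the parameters above.
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)
```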

batched_forward_pass

( queries: Tensor responses: Tensor ) (tuple)

Parameters

  • queries (torch.LongTensor) — List of tensors containing the encoded queries, shape (batch_size, query_length)
  • responses (torch.LongTensor) — List of tensors containing the encoded responses, shape (batch_size, response_length)

Returns

(tuple)

  • all_logprobs (torch.FloatTensor): Log probabilities of the responses, shape (batch_size, response_length)
  • all_ref_logprobs (torch.FloatTensor): Log probabilities of the responses under the reference model, shape (batch_size, response_length)
  • all_values (torch.FloatTensor): Values of the responses, shape (batch_size, response_length)

Calculate model outputs in multiple batches.

compute_rewards

( scores: FloatTensor logprobs: FloatTensor ref_logprobs: FloatTensor )

Parameters

  • scores (torch.FloatTensor) — Scores from the reward model, shape (batch_size)
  • logprobs (torch.FloatTensor) — Log probabilities of the model, shape (batch_size, response_length)
  • ref_logprobs (torch.FloatTensor) — Log probabilities of the reference model, shape (batch_size, response_length)

Compute per token rewards from scores and KL-penalty.
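
The snippet below is a minimal, single-sample sketch of this idea rather than the trainer's exact code: each response token is penalised by the KL term between the model and the reference model, and the scalar reward-model score is credited on the final token. The helper per_token_rewards and its arguments are hypothetical.

```python
import torch

def per_token_rewards(score, logprobs, ref_logprobs, kl_coef=0.2):
    # KL penalty per response token: -kl_coef * (logprob - ref_logprob).
    rewards = -kl_coef * (logprobs - ref_logprobs)
    # The scalar score from the reward model is added at the last token only.
    rewards[-1] += score
    return rewards

# Single sample with a 4-token response (illustrative numbers).
rewards = per_token_rewards(
    score=torch.tensor(1.0),
    logprobs=torch.tensor([-2.0, -1.5, -3.0, -0.5]),
    ref_logprobs=torch.tensor([-2.1, -1.4, -2.8, -0.6]),
)
```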

gather_stats

( stats ) dict[str, Any]

Parameters

  • stats (dict[str, Any]) — A dictionary of stats to be gathered. The stats should contain torch tensors.

Returns

dict[str, Any]

A dictionary of stats with the tensors gathered.

Gather stats from all processes. Useful in the context of distributed training.

generate

( query_tensor: Tensor **generation_kwargs ) torch.LongTensor

Parameters

  • query_tensor (torch.LongTensor) — A tensor of shape (batch_size, seq_len) containing query tokens.
  • generation_kwargs (dict[str, Any]) — Keyword arguments for generation.

Returns

torch.LongTensor

A tensor of shape (batch_size, gen_len) containing response tokens.

Generate a response with the model given the query tensor. Calls the generate method of the model.
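
A minimal usage sketch, following the shapes documented above and reusing the trainer and tokenizer from the construction sketch earlier; the generation_kwargs shown are ordinary transformers generate() arguments and purely illustrative.

```python
generation_kwargs = {
    "max_new_tokens": 32,
    "do_sample": True,
    "top_k": 0,
    "top_p": 1.0,
    "pad_token_id": tokenizer.eos_token_id,
}

# Encode a single query as a (1, seq_len) tensor and generate a response.
query_tensor = tokenizer("My favourite movie is", return_tensors="pt").input_ids
response_tensor = ppo_trainer.generate(query_tensor, **generation_kwargs)
response_text = tokenizer.decode(response_tensor[0])
```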

log_stats

( stats: dict batch: dict rewards: typing.List[torch.FloatTensor] )

Parameters

  • stats (dict[str, Any]) — A dictionary of training stats.
  • batch (dict[str, Any]) — A dictionary of batch data, this contains the queries and responses.
  • rewards (List[torch.FloatTensor]) — A tensor of rewards.

A function that logs all the training stats. Call it at the end of each epoch.

loss

( old_logprobs: FloatTensor values: FloatTensor rewards: FloatTensor query: LongTensor response: LongTensor model_input: LongTensor )

Parameters

  • old_logprobs (torch.FloatTensor) — Log probabilities of the model, shape (batch_size, response_length)
  • values (torch.FloatTensor) — Values of the value head, shape (batch_size, hidden_dim)
  • rewards (torch.FloatTensor) — Rewards from the reward model, shape (batch_size)
  • query (torch.LongTensor) — Encoded queries, shape (batch_size, query_length)
  • response (torch.LongTensor) — Encoded responses, shape (batch_size, response_length)
  • model_input (torch.LongTensor) — Concatenated queries and responses, shape (batch_size, query_length+response_length)

Calculate policy and value losses.
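
For intuition only, a sketch of the clipped objectives that cliprange, cliprange_value and vf_coef control is given below; it is not the trainer's implementation (which additionally handles advantage estimation, whitening and statistics), and all tensor arguments are hypothetical placeholders.

```python
import torch

def clipped_ppo_losses(old_logprobs, logprobs, advantages, old_values, values,
                       returns, cliprange=0.2, cliprange_value=0.2, vf_coef=0.1):
    # Clipped policy-gradient loss (PPO surrogate objective).
    ratio = torch.exp(logprobs - old_logprobs)
    pg_loss = torch.max(
        -advantages * ratio,
        -advantages * torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange),
    ).mean()

    # Clipped value-function loss.
    values_clipped = torch.clamp(
        values, old_values - cliprange_value, old_values + cliprange_value
    )
    vf_loss = 0.5 * torch.max(
        (values - returns) ** 2, (values_clipped - returns) ** 2
    ).mean()

    return pg_loss + vf_coef * vf_loss
```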

prepare_dataloader

( dataset: typing.Union[torch.utils.data.dataset.Dataset, datasets.arrow_dataset.Dataset] data_collator = None ) torch.utils.data.DataLoader

Parameters

  • dataset (Union[torch.utils.data.Dataset, datasets.Dataset]) — PyTorch dataset or Hugging Face dataset. If a Hugging Face dataset is passed, the dataset will be preprocessed by removing the columns that are not used by the model.
  • data_collator (Optional[function]) — Data collator function.

Returns

torch.utils.data.DataLoader

PyTorch dataloader

Prepare the dataloader for training.

record_step_stats

( kl_coef: float **data ) stats (dict)

Parameters

  • kl_coef (float) — KL coefficient
  • data (dict) — Dictionary of training step data

Returns

stats (dict)

Dictionary of training step statistics

Record training step statistics.

step

( queries: typing.List[torch.LongTensor] responses: typing.List[torch.LongTensor] scores: typing.List[torch.FloatTensor] ) dict[str, Any]

Parameters

  • queries (List[torch.LongTensor]) — List of tensors containing the encoded queries of shape (query_length)
  • responses (List[torch.LongTensor]) — List of tensors containing the encoded responses of shape (response_length)
  • scores (List[torch.FloatTensor]) — List of tensors containing the scores.

Returns

dict[str, Any]

A summary of the training statistics

Run a PPO optimisation step given a list of queries, model responses, and rewards.
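
Putting the pieces together, a hedged end-to-end sketch of one training loop looks roughly as follows. It assumes a dataset was passed to the trainer (so ppo_trainer.dataloader exists), that each batch provides "query" text and tokenized "input_ids", and that reward_fn is a stand-in for any scoring function returning one scalar torch.FloatTensor per response; none of these are part of the trainer API.

```python
for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]  # list of 1-D LongTensors, one per query

    # Generate one response per query.
    response_tensors = [
        ppo_trainer.generate(q.unsqueeze(0), **generation_kwargs).squeeze(0)
        for q in query_tensors
    ]
    batch["response"] = [tokenizer.decode(r) for r in response_tensors]

    # Score each query/response pair (reward_fn is a hypothetical stand-in).
    scores = [
        reward_fn(q_txt, r_txt)
        for q_txt, r_txt in zip(batch["query"], batch["response"])
    ]

    # One PPO optimisation step, then log the statistics.
    stats = ppo_trainer.step(query_tensors, response_tensors, scores)
    ppo_trainer.log_stats(stats, batch, scores)
```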

train_minibatch

( logprobs: FloatTensor values: FloatTensor rewards: FloatTensor query: LongTensor response: LongTensor model_input: LongTensor ) train_stats (dict[str, torch.Tensor])

Parameters

  • logprobs (torch.FloatTensor) — Log probabilities of the model, shape [batch_size, response_length]
  • values (torch.FloatTensor) — Values of the value head, shape [batch_size, response_length]
  • rewards (torch.FloatTensor) — Rewards from the reward model, shape [batch_size, response_length]
  • query (torch.LongTensor) — Encoded queries, shape [batch_size, query_length]
  • response (torch.LongTensor) — Encoded responses, shape [batch_size, response_length]
  • model_input (torch.LongTensor) — Concatenated queries and responses, shape [batch_size, query_length+response_length]

Returns

train_stats (dict[str, torch.Tensor])

Dictionary of training statistics

Train one PPO minibatch.