Trainer
In TRL we support PPO (Proximal Policy Optimisation) with an implementation that largely follows the structure introduced in the paper “Fine-Tuning Language Models from Human Preferences” by D. Ziegler et al. [paper, code].
The trainer and model classes are largely inspired by the `transformers.Trainer` and `transformers.AutoModel` classes, adapted for RL.
We also support a RewardTrainer
that can be used to train a reward model.
PPOConfig
class trl.PPOConfig
< source >( exp_name: str = 'doc-buil' seed: int = 0 log_with: Optional = None task_name: Optional = None model_name: Optional = None query_dataset: Optional = None reward_model: Optional = None remove_unused_columns: bool = True tracker_kwargs: Annotated = <factory> accelerator_kwargs: Annotated = <factory> project_kwargs: Annotated = <factory> tracker_project_name: str = 'trl' push_to_hub_if_best_kwargs: Annotated = <factory> steps: int = 20000 learning_rate: float = 1e-05 adap_kl_ctrl: bool = True init_kl_coef: Optional = 0.2 kl_penalty: Literal = 'kl' target: Optional = 6 horizon: Optional = 10000 gamma: float = 1 lam: float = 0.95 cliprange: float = 0.2 cliprange_value: float = 0.2 vf_coef: float = 0.1 batch_size: int = 256 forward_batch_size: Optional = None mini_batch_size: int = 1 gradient_accumulation_steps: int = 1 world_size: Annotated = None ppo_epochs: int = 4 max_grad_norm: Optional = None optimize_cuda_cache: Optional = None optimize_device_cache: Optional = False early_stopping: bool = False target_kl: float = 1 compare_steps: int = 1 ratio_threshold: float = 10.0 use_score_scaling: bool = False use_score_norm: bool = False score_clip: Optional = None whiten_rewards: bool = False is_encoder_decoder: Optional = None is_peft_model: Optional = None backward_batch_size: Annotated = None global_backward_batch_size: Annotated = None global_batch_size: Annotated = None )
Configuration class for PPOTrainer
PPOTrainer
class trl.PPOTrainer
< source >( config: PPOConfig = None model: PreTrainedModelWrapper = None ref_model: Optional = None tokenizer: PreTrainedTokenizerBase = None dataset: Union = None optimizer: Optional = None data_collator: Optional = None num_shared_layers: Optional = None lr_scheduler: Optional = None )
Parameters
- **config** (`PPOConfig`) — Configuration object for PPOTrainer. Check the documentation of `PPOConfig` for more details.
- **model** (`PreTrainedModelWrapper`) — Model to be optimized, a Hugging Face transformer model with a value head. Check the documentation of `PreTrainedModelWrapper` for more details.
- **ref_model** (`PreTrainedModelWrapper`, *optional*) — Reference model used for the KL penalty, a Hugging Face transformer model with a causal language modelling head. Check the documentation of `PreTrainedModelWrapper` for more details. If no reference model is provided, the trainer creates a reference model with the same architecture as the model to be optimized, with shared layers.
- **tokenizer** (`PreTrainedTokenizerBase`) — Tokenizer used for encoding the data. Check the documentation of `transformers.PreTrainedTokenizer` and `transformers.PreTrainedTokenizerFast` for more details.
- **dataset** (Union[`torch.utils.data.Dataset`, `datasets.Dataset`], *optional*) — PyTorch dataset or Hugging Face dataset, used to create a PyTorch dataloader. If no dataset is provided, the dataloader must be created outside the trainer: users need to design their own dataloader and make sure the batch size used matches the one specified in the configuration object.
- **optimizer** (`torch.optim.Optimizer`, *optional*) — Optimizer used for training. If no optimizer is provided, the trainer creates an Adam optimizer with the learning rate specified in the configuration object.
- **data_collator** (`DataCollatorForLanguageModeling`, *optional*) — Data collator used for training and passed along to the dataloader.
- **num_shared_layers** (`int`, *optional*) — Number of layers shared between the model and the reference model, if no reference model is passed. If no number is provided, all layers are shared.
- **lr_scheduler** (`torch.optim.lr_scheduler`, *optional*) — Learning rate scheduler used for training.
The PPOTrainer uses Proximal Policy Optimisation to optimise language models. Note that this trainer is heavily inspired by the original OpenAI learning-to-summarize work: https://github.com/openai/summarize-from-feedback
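Below is a minimal sketch of how the trainer is typically wired up and stepped; the model name, prompt, and constant reward are placeholders, and a real setup would score responses with a reward model:

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Policy and reference model; the reference model is kept frozen by the trainer.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config=config, model=model, ref_model=ref_model, tokenizer=tokenizer)

query_tensor = tokenizer.encode("This morning I went to the ", return_tensors="pt")[0]

generation_kwargs = {"do_sample": True, "max_new_tokens": 20, "pad_token_id": tokenizer.eos_token_id}
response_tensor = ppo_trainer.generate(query_tensor, return_prompt=False, **generation_kwargs)

# Placeholder score; in practice this comes from a reward model.
reward = [torch.tensor(1.0)]
stats = ppo_trainer.step([query_tensor], [response_tensor[0]], reward)
```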
batched_forward_pass
< source >( model: PreTrainedModelWrapper queries: Tensor responses: Tensor model_inputs: dict return_logits: bool = False response_masks: Optional = None ) → (tuple)
Parameters
- queries (`torch.LongTensor`) — List of tensors containing the encoded queries, shape (`batch_size`, `query_length`)
- responses (`torch.LongTensor`) — List of tensors containing the encoded responses, shape (`batch_size`, `response_length`)
- return_logits (`bool`, *optional*, defaults to `False`) — Whether to return all_logits. Set to `False` if logits are not needed, to reduce memory consumption.
Returns
(tuple)
- all_logprobs (`torch.FloatTensor`): Log probabilities of the responses, shape (`batch_size`, `response_length`)
- all_ref_logprobs (`torch.FloatTensor`): Log probabilities of the responses under the reference model, shape (`batch_size`, `response_length`)
- all_values (`torch.FloatTensor`): Values of the responses, shape (`batch_size`, `response_length`)
Calculate model outputs in multiple batches.
compute_rewards
< source >( scores: FloatTensor logprobs: FloatTensor ref_logprobs: FloatTensor masks: LongTensor ) → torch.FloatTensor
Parameters
- scores (`torch.FloatTensor`) — Scores from the reward model, shape (`batch_size`)
- logprobs (`torch.FloatTensor`) — Log probabilities of the model, shape (`batch_size`, `response_length`)
- ref_logprobs (`torch.FloatTensor`) — Log probabilities of the reference model, shape (`batch_size`, `response_length`)
Returns
- `torch.FloatTensor`: Per-token rewards, shape (`batch_size`, `response_length`)
- `torch.FloatTensor`: Non-score rewards, shape (`batch_size`, `response_length`)
- `torch.FloatTensor`: KL penalty, shape (`batch_size`, `response_length`)
Compute per token rewards from scores and KL-penalty.
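For a single sample, the shaping can be pictured as in the sketch below (for the default `kl_penalty="kl"` estimator). This is an illustrative re-statement, not the exact library code:

```python
import torch

def shaped_rewards(score, logprobs, ref_logprobs, mask, kl_coef):
    """Illustrative per-sample reward shaping: KL penalty everywhere,
    reward-model score added on the last response token."""
    kl = logprobs - ref_logprobs          # per-token KL estimate vs. the reference model
    non_score_reward = -kl_coef * kl      # KL penalty on every response token
    reward = non_score_reward.clone()
    last_token = mask.nonzero()[-1]       # index of the last non-masked response token
    reward[last_token] += score           # the score only enters at the sequence end
    return reward, non_score_reward, kl
```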
create_model_card
< source >( path: str model_name: Optional = 'TRL Model' )
Creates and saves a model card for a TRL model.
gather_stats
< source >( stats ) → dict[str, Any]
Gather stats from all processes. Useful in the context of distributed training.
generate
< source >( query_tensor: Union length_sampler: Callable = None batch_size: int = 4 return_prompt: bool = True generate_ref_response: bool = False **generation_kwargs ) → torch.LongTensor
Parameters
- query_tensor (`torch.LongTensor`) — A tensor of shape (`seq_len`) containing query tokens, or a list of tensors of shape (`seq_len`).
- length_sampler (`Callable`, *optional*) — Callable that returns the number of newly generated tokens.
- batch_size (`int`, *optional*, defaults to `4`) — Batch size used for generation.
- return_prompt (`bool`, *optional*, defaults to `True`) — If set to `False`, only the newly generated tokens are returned, not the prompt.
- generate_ref_response (`bool`, *optional*, defaults to `False`) — If set to `True`, the reference response is also generated.
- generation_kwargs (`dict[str, Any]`) — Keyword arguments for generation.
Returns
`torch.LongTensor`

A tensor of shape (`batch_size`, `gen_len`) containing response tokens.
Generate a response with the model given the query tensor. Calls the `generate` method of the model.
log_stats
< source >( stats: dict batch: dict rewards: List columns_to_log: List = ['query', 'response'] )
A function that logs all the training stats. Call it at the end of each epoch.
loss
< source >( old_logprobs: FloatTensor values: FloatTensor logits: FloatTensor vpreds: FloatTensor logprobs: FloatTensor mask: LongTensor advantages: FloatTensor returns: FloatTensor )
Parameters
- old_logprobs (`torch.FloatTensor`) — Log probabilities of the model, shape (`batch_size`, `response_length`)
- values (`torch.FloatTensor`) — Values of the value head, shape (`batch_size`, `response_length`)
- logits (`torch.FloatTensor`) — Logits of the model, shape (`batch_size`, `response_length`, `vocab_size`)
- vpreds (`torch.FloatTensor`) — Values of the value head, shape (`batch_size`, `response_length`)
- logprobs (`torch.FloatTensor`) — Log probabilities of the model, shape (`batch_size`, `response_length`)
- mask (`torch.LongTensor`) — Mask of the response tokens, shape (`batch_size`, `response_length`)
- advantages (`torch.FloatTensor`) — Advantages of the responses, shape (`batch_size`, `response_length`)
- returns (`torch.FloatTensor`) — Returns (advantages plus values), shape (`batch_size`, `response_length`)
Calculate policy and value losses.
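The policy loss is the standard clipped PPO surrogate and the value loss is clipped around the old value estimates. The sketch below illustrates both objectives under simplifying assumptions (the `masked_mean` helper and the default clip ranges from `PPOConfig` are stand-ins, not the library's exact code):

```python
import torch

def masked_mean(t, mask):
    # Average only over valid (non-masked) response tokens.
    return (t * mask).sum() / mask.sum()

def ppo_losses(old_logprobs, logprobs, advantages, values, vpreds, returns, mask,
               cliprange=0.2, cliprange_value=0.2, vf_coef=0.1):
    # Clipped policy (surrogate) loss.
    ratio = torch.exp(logprobs - old_logprobs)
    pg_loss = masked_mean(
        torch.max(-advantages * ratio,
                  -advantages * torch.clamp(ratio, 1.0 - cliprange, 1.0 + cliprange)),
        mask,
    )
    # Value loss, clipped around the old value predictions.
    vpreds_clipped = torch.clamp(vpreds, values - cliprange_value, values + cliprange_value)
    vf_loss = 0.5 * masked_mean(
        torch.max((vpreds - returns) ** 2, (vpreds_clipped - returns) ** 2), mask
    )
    return pg_loss, vf_coef * vf_loss
```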
prepare_dataloader
< source >( dataset: Union data_collator = None ) → torch.utils.data.DataLoader
Parameters
- dataset (Union[`torch.utils.data.Dataset`, `datasets.Dataset`]) — PyTorch dataset or Hugging Face dataset. If a Hugging Face dataset is passed, the dataset will be preprocessed by removing the columns that are not used by the model.
- data_collator (Optional[function]) — Data collator function.
Returns
torch.utils.data.DataLoader
PyTorch dataloader
Prepare the dataloader for training.
record_step_stats
< source >( kl_coef: float **data ) → stats (dict)
Record training step statistics.
step
< source >( queries: List responses: List scores: List response_masks: Optional = None ) → dict[str, Any]
Parameters
- queries (List[`torch.LongTensor`]) — List of tensors containing the encoded queries of shape (`query_length`)
- responses (List[`torch.LongTensor`]) — List of tensors containing the encoded responses of shape (`response_length`)
- scores (List[`torch.FloatTensor`]) — List of tensors containing the scores.
- response_masks (List[`torch.FloatTensor`], *optional*) — List of tensors containing masks of the response tokens.
Returns
dict[str, Any]
A summary of the training statistics
Run a PPO optimisation step given a list of queries, model responses, and rewards.
train_minibatch
< source >( old_logprobs: FloatTensor values: FloatTensor logprobs: FloatTensor logits: FloatTensor vpreds: FloatTensor mask: LongTensor advantages: FloatTensor returns: FloatTensor ) → train_stats (dict[str, torch.Tensor])
Parameters
- old_logprobs (`torch.FloatTensor`) — Log probabilities of the model, shape [mini_batch_size, response_length]
- values (`torch.FloatTensor`) — Values of the value head, shape [mini_batch_size, response_length]
- logprobs (`torch.FloatTensor`) — Log probabilities of the model, shape [mini_batch_size, response_length]
- logits (`torch.FloatTensor`) — Logits of the model, shape [mini_batch_size, response_length, vocab_size]
- vpreds (`torch.FloatTensor`) — Values of the value head, shape [mini_batch_size, response_length]
- mask (`torch.LongTensor`) — Mask of the response tokens, shape [mini_batch_size, response_length]
- advantages (`torch.FloatTensor`) — Advantages of the responses, shape [mini_batch_size, response_length]
- returns (`torch.FloatTensor`) — Returns (advantages plus values), shape [mini_batch_size, response_length]
Returns
train_stats (dict[str, torch.Tensor])
Dictionary of training statistics
Train one PPO minibatch.
RewardConfig
class trl.RewardConfig
< source >( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: Union = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: Optional = None per_gpu_eval_batch_size: Optional = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: Optional = None eval_delay: Optional = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: Union = 'linear' lr_scheduler_kwargs: Optional = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: Optional = 'passive' log_level_replica: Optional = 'warning' log_on_each_node: bool = True logging_dir: Optional = None logging_strategy: Union = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: Union = 'steps' save_steps: float = 500 save_total_limit: Optional = None save_safetensors: Optional = True save_on_each_node: bool = False save_only_model: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: Optional = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: Optional = None local_rank: int = -1 ddp_backend: Optional = None tpu_num_cores: Optional = None tpu_metrics_debug: bool = False debug: Union = '' dataloader_drop_last: bool = False eval_steps: Optional = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: Optional = None disable_tqdm: Optional = None remove_unused_columns: Optional = True label_names: Optional = None load_best_model_at_end: Optional = False metric_for_best_model: Optional = None greater_is_better: Optional = None ignore_data_skip: bool = False fsdp: Union = '' fsdp_min_num_params: int = 0 fsdp_config: Optional = None fsdp_transformer_layer_cls_to_wrap: Optional = None deepspeed: Optional = None label_smoothing_factor: float = 0.0 optim: Union = 'adamw_torch' optim_args: Optional = None adafactor: bool = False group_by_length: bool = False length_column_name: Optional = 'length' report_to: Optional = None ddp_find_unused_parameters: Optional = None ddp_bucket_cap_mb: Optional = None ddp_broadcast_buffers: Optional = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: Optional = None hub_model_id: Optional = None hub_strategy: Union = 'every_save' hub_token: Optional = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: Optional = True gradient_checkpointing_kwargs: Optional = None include_inputs_for_metrics: bool = False fp16_backend: str = 'auto' push_to_hub_model_id: Optional = None push_to_hub_organization: Optional = None push_to_hub_token: Optional = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: Optional = None ray_scope: Optional = 'last' ddp_timeout: Optional = 1800 torch_compile: bool = False torch_compile_backend: Optional = None torch_compile_mode: Optional = None dispatch_batches: Optional = None 
split_batches: Optional = False include_tokens_per_second: Optional = False include_num_input_tokens_seen: Optional = False neftune_noise_alpha: float = None max_length: Optional = None )
Parameters
- max_length (`int`, *optional*, defaults to `None`) — The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
- gradient_checkpointing (`bool`, *optional*, defaults to `True`) — If `True`, use gradient checkpointing to save memory at the expense of a slower backward pass.
RewardConfig collects all training arguments related to the `RewardTrainer` class. Using `HfArgumentParser` we can turn this class into argparse arguments that can be specified on the command line.
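For example, a hypothetical training script can expose every RewardConfig field as a command-line flag:

```python
from transformers import HfArgumentParser
from trl import RewardConfig

# Parse all RewardConfig fields from the command line, e.g.:
#   python train_reward.py --output_dir ./reward_model --max_length 512
parser = HfArgumentParser(RewardConfig)
reward_config = parser.parse_args_into_dataclasses()[0]
```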
RewardTrainer
class trl.RewardTrainer
< source >( model: Union = None args: Optional = None data_collator: Optional = None train_dataset: Optional = None eval_dataset: Union = None tokenizer: Optional = None model_init: Optional = None compute_metrics: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None max_length: Optional = None peft_config: Optional = None )
The RewardTrainer can be used to train your custom reward model. It is a subclass of the `transformers.Trainer` class and inherits all of its attributes and methods. It is recommended to use an `AutoModelForSequenceClassification` as the reward model. The reward model should be trained on a dataset of paired examples, where each example is a tuple of two sequences, and should learn to predict which example in the pair is more relevant to the task at hand.
The reward trainer expects a very specific format for the dataset. The dataset should contain at least four entries if you don't use the default `RewardDataCollatorWithPadding` data collator. The entries should be named:

- `input_ids_chosen`
- `attention_mask_chosen`
- `input_ids_rejected`
- `attention_mask_rejected`

Optionally, you can also pass a `margin` entry to the dataset. This entry should contain the margin used to modulate the loss of the reward model as outlined in https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. If you don't pass a margin, no margin will be used.
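A minimal sketch of preparing such a dataset and training a reward model; the model name and the raw "chosen"/"rejected" columns are illustrative assumptions:

```python
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy pairwise dataset; real data would have many preference pairs.
raw = Dataset.from_dict({"chosen": ["A helpful answer."], "rejected": ["An unhelpful answer."]})

def preprocess(example):
    chosen = tokenizer(example["chosen"])
    rejected = tokenizer(example["rejected"])
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="reward_model", per_device_train_batch_size=1, max_length=512),
    tokenizer=tokenizer,
    train_dataset=raw.map(preprocess),
)
trainer.train()
```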
SFTTrainer
class trl.SFTTrainer
< source >( model: Union = None args: TrainingArguments = None data_collator: Optional = None train_dataset: Optional = None eval_dataset: Union = None tokenizer: Optional = None model_init: Optional = None compute_metrics: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None peft_config: Optional = None dataset_text_field: Optional = None packing: Optional = False formatting_func: Optional = None max_seq_length: Optional = None infinite: Optional = None num_of_sequences: Optional = 1024 chars_per_token: Optional = 3.6 dataset_num_proc: Optional = None dataset_batch_size: int = 1000 neftune_noise_alpha: Optional = None model_init_kwargs: Optional = None dataset_kwargs: Optional = None )
Parameters
- model (Union[`transformers.PreTrainedModel`, `nn.Module`, `str`]) — The model to train, which can be a `PreTrainedModel`, a `torch.nn.Module` or a string with the model name to load from cache or download. The model can also be converted to a `PeftModel` if a `PeftConfig` object is passed to the `peft_config` argument.
- args (Optional[`transformers.TrainingArguments`]) — The arguments to tweak for training. Please refer to the official documentation of `transformers.TrainingArguments` for more information.
- data_collator (Optional[`transformers.DataCollator`]) — The data collator to use for training.
- train_dataset (Optional[`datasets.Dataset`]) — The dataset to use for training. We recommend users to use `trl.trainer.ConstantLengthDataset` to create their dataset.
- eval_dataset (Optional[Union[`datasets.Dataset`, Dict[`str`, `datasets.Dataset`]]]) — The dataset to use for evaluation. We recommend users to use `trl.trainer.ConstantLengthDataset` to create their dataset.
- tokenizer (Optional[`transformers.PreTrainedTokenizer`]) — The tokenizer to use for training. If not specified, the tokenizer associated with the model will be used.
- model_init (`Callable[[], transformers.PreTrainedModel]`) — The model initializer to use for training. If None is specified, the default model initializer will be used.
- compute_metrics (`Callable[[transformers.EvalPrediction], Dict]`, *optional*, defaults to None) — The function used to compute metrics during evaluation. It should return a dictionary mapping metric names to metric values. If not specified, only the loss will be computed during evaluation.
- callbacks (`List[transformers.TrainerCallback]`) — The callbacks to use for training.
- optimizers (`Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`) — The optimizer and scheduler to use for training.
- preprocess_logits_for_metrics (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`) — The function used to preprocess the logits before computing the metrics.
- peft_config (`Optional[PeftConfig]`) — The PeftConfig object to use to initialize the PeftModel.
- dataset_text_field (`Optional[str]`) — The name of the text field of the dataset. If passed, the trainer will automatically create a `ConstantLengthDataset` based on the `dataset_text_field` argument.
- formatting_func (`Optional[Callable]`) — The formatting function to be used for creating the `ConstantLengthDataset`.
- max_seq_length (`Optional[int]`) — The maximum sequence length to use for the `ConstantLengthDataset` and for automatically creating the dataset. Defaults to `512`.
- infinite (`Optional[bool]`) — Whether to use an infinite dataset or not. Defaults to `False`.
- num_of_sequences (`Optional[int]`) — The number of sequences to use for the `ConstantLengthDataset`. Defaults to `1024`.
- chars_per_token (`Optional[float]`) — The number of characters per token to use for the `ConstantLengthDataset`. Defaults to `3.6`. You can check how this is computed in the stack-llama example: https://github.com/huggingface/trl/blob/08f550674c553c36c51d1027613c29f14f3676a5/examples/stack_llama/scripts/supervised_finetuning.py#L53.
- packing (`Optional[bool]`) — Used only when `dataset_text_field` is passed. This argument is used by the `ConstantLengthDataset` to pack the sequences of the dataset.
- dataset_num_proc (`Optional[int]`) — The number of workers to use to tokenize the data. Only used when `packing=False`. Defaults to None.
- dataset_batch_size (`int`) — The number of examples to tokenize per batch. If `batch_size <= 0` or `batch_size == None`, tokenize the full dataset as a single batch. Defaults to 1000.
- neftune_noise_alpha (`Optional[float]`) — If not `None`, this will activate NEFTune noise embeddings, which has been shown to drastically improve model performance for instruction fine-tuning. Check out the original paper: https://arxiv.org/abs/2310.05914 and the original code: https://github.com/neelsjain/NEFTune.
- model_init_kwargs (`Optional[Dict]`, *optional*) — Dict of optional kwargs to pass when instantiating the model from a string.
- dataset_kwargs (`Optional[Dict]`, *optional*) — Dict of optional kwargs to pass when creating packed or non-packed datasets.
Class definition of the Supervised Finetuning Trainer (SFT Trainer).
This class is a wrapper around the `transformers.Trainer` class and inherits all of its attributes and methods. The trainer takes care of properly initializing the `PeftModel` in case a user passes a `PeftConfig` object.
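A minimal usage sketch, assuming a dataset with a "text" column (the model and dataset names are placeholders):

```python
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "gpt2",                      # can also be a PreTrainedModel instance
    train_dataset=dataset,
    dataset_text_field="text",   # column containing the raw text
    max_seq_length=512,
    packing=True,                # pack examples into a ConstantLengthDataset
)
trainer.train()
```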
DPOTrainer
class trl.DPOTrainer
< source >( model: Union = None ref_model: Union = None beta: float = 0.1 label_smoothing: float = 0 loss_type: Literal = 'sigmoid' args: TrainingArguments = None data_collator: Optional = None label_pad_token_id: int = -100 padding_value: int = 0 truncation_mode: str = 'keep_end' train_dataset: Optional = None eval_dataset: Union = None tokenizer: Optional = None model_init: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None max_length: Optional = None max_prompt_length: Optional = None max_target_length: Optional = None peft_config: Optional = None is_encoder_decoder: Optional = None disable_dropout: bool = True generate_during_eval: bool = False compute_metrics: Optional = None precompute_ref_log_probs: bool = False model_init_kwargs: Optional = None ref_model_init_kwargs: Optional = None model_adapter_name: str = None ref_adapter_name: str = None )
Parameters
- model (`transformers.PreTrainedModel`) — The model to train, preferably an `AutoModelForCausalLM`.
- ref_model (`PreTrainedModelWrapper`) — Hugging Face transformer model with a causal language modelling head. Used for implicit reward computation and loss. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized.
- beta (`float`, defaults to 0.1) — The beta factor in DPO loss. Higher beta means less divergence from the initial policy. For the IPO loss, beta is the regularization parameter denoted by tau in the paper.
- label_smoothing (`float`, defaults to 0) — The robust DPO label smoothing parameter from the cDPO report, which should be between 0 and 0.5.
- loss_type (`str`, defaults to `"sigmoid"`) — The type of DPO loss to use. Either `"sigmoid"` (the default DPO loss), `"hinge"` (loss from the SLiC paper), `"ipo"` (from the IPO paper), or `"kto"` (from the HALOs report).
- args (`transformers.TrainingArguments`) — The arguments to use for training.
- data_collator (`transformers.DataCollator`) — The data collator to use for training. If None is specified, the default data collator (`DPODataCollatorWithPadding`) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
- label_pad_token_id (`int`, defaults to `-100`) — The label pad token id. This argument is required if you want to use the default data collator.
- padding_value (`int`, defaults to `0`) — The padding value if it is different to the tokenizer's pad_token_id.
- truncation_mode (`str`, defaults to `keep_end`) — The truncation mode to use, either `keep_end` or `keep_start`. This argument is required if you want to use the default data collator.
- train_dataset (`datasets.Dataset`) — The dataset to use for training.
- eval_dataset (`datasets.Dataset`) — The dataset to use for evaluation.
- tokenizer (`transformers.PreTrainedTokenizerBase`) — The tokenizer to use for training. This argument is required if you want to use the default data collator.
- model_init (`Callable[[], transformers.PreTrainedModel]`) — The model initializer to use for training. If None is specified, the default model initializer will be used.
- callbacks (`List[transformers.TrainerCallback]`) — The callbacks to use for training.
- optimizers (`Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`) — The optimizer and scheduler to use for training.
- preprocess_logits_for_metrics (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`) — The function used to preprocess the logits before computing the metrics.
- max_length (`int`, defaults to `None`) — The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
- max_prompt_length (`int`, defaults to `None`) — The maximum length of the prompt. This argument is required if you want to use the default data collator.
- max_target_length (`int`, defaults to `None`) — The maximum length of the target. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
- peft_config (`Dict`, defaults to `None`) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model.
- is_encoder_decoder (`Optional[bool]`, *optional*, defaults to `None`) — If no model is provided, we need to know if the model_init returns an encoder-decoder.
- disable_dropout (`bool`, defaults to `True`) — Whether or not to disable dropout in `model` and `ref_model`.
- generate_during_eval (`bool`, defaults to `False`) — Whether to sample and log generations during the evaluation step.
- compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*) — The function to use to compute the metrics. Must take an `EvalPrediction` and return a dictionary mapping strings to metric values.
- precompute_ref_log_probs (`bool`, defaults to `False`) — Flag to precompute reference model log probabilities for the training and evaluation datasets. This is useful if you want to train without the reference model and reduce the total GPU memory needed.
- model_init_kwargs (`Optional[Dict]`, *optional*) — Dict of optional kwargs to pass when instantiating the model from a string.
- ref_model_init_kwargs (`Optional[Dict]`, *optional*) — Dict of optional kwargs to pass when instantiating the ref model from a string.
- model_adapter_name (`str`, defaults to `None`) — Name of the train target PEFT adapter, when using LoRA with multiple adapters.
- ref_adapter_name (`str`, defaults to `None`) — Name of the reference PEFT adapter, when using LoRA with multiple adapters.
Initialize DPOTrainer.
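A minimal initialization sketch; the model name and the tiny inline dataset with "prompt"/"chosen"/"rejected" columns are placeholders:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["Paris."],
    "rejected": ["London."],
})

dpo_trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,
    args=TrainingArguments(
        output_dir="dpo_model",
        per_device_train_batch_size=1,
        max_steps=10,
        remove_unused_columns=False,  # keep the prompt/chosen/rejected columns
    ),
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    max_length=128,
    max_prompt_length=64,
)
dpo_trainer.train()
```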
The Llama tokenizer does not satisfy `enc(a + b) = enc(a) + enc(b)`.
It does ensure `enc(a + b) = enc(a) + enc(a + b)[len(enc(a)):]`.
Reference:
https://github.com/EleutherAI/lm-evaluation-harness/pull/531#issuecomment-1595586257
Computes log probabilities of the reference model for a single padded batch of a DPO specific dataset.
Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together.
We do this to avoid doing two forward passes, because it’s faster for FSDP.
concatenated_inputs
< source >( batch: Dict is_encoder_decoder: bool = False label_pad_token_id: int = -100 padding_value: int = 0 device: Optional = None )
Concatenate the chosen and rejected inputs into a single tensor.
dpo_loss
< source >( policy_chosen_logps: FloatTensor policy_rejected_logps: FloatTensor reference_chosen_logps: FloatTensor reference_rejected_logps: FloatTensor reference_free: bool = False ) → A tuple of three tensors
Returns
A tuple of three tensors
(losses, chosen_rewards, rejected_rewards). The losses tensor contains the DPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively.
Compute the DPO loss for a batch of policy and reference model log probabilities.
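For the default "sigmoid" loss type, the computation is essentially the sketch below (omitting label smoothing and the `reference_free` option):

```python
import torch.nn.functional as F

def dpo_loss_sketch(policy_chosen_logps, policy_rejected_logps,
                    reference_chosen_logps, reference_rejected_logps, beta=0.1):
    # How much more the policy prefers chosen over rejected...
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    # ...relative to the same preference under the reference model.
    ref_logratios = reference_chosen_logps - reference_rejected_logps
    logits = pi_logratios - ref_logratios
    losses = -F.logsigmoid(beta * logits)
    # Implicit rewards, detached for logging.
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps).detach()
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps).detach()
    return losses, chosen_rewards, rejected_rewards
```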
evaluation_loop
< source >( dataloader: DataLoader description: str prediction_loss_only: Optional = None ignore_keys: Optional = None metric_key_prefix: str = 'eval' )
Overriding built-in evaluation loop to store metrics for each batch.
Prediction/evaluation loop, shared by Trainer.evaluate()
and Trainer.predict()
.
Works both with or without labels.
get_batch_logps
< source >( logits: FloatTensor labels: LongTensor average_log_prob: bool = False label_pad_token_id: int = -100 is_encoder_decoder: bool = False )
Compute the log probabilities of the given labels under the given logits.
Compute the DPO loss and other metrics for the given batch of inputs for train or test.
Generate samples from the model and reference model for the given batch of inputs.
get_eval_dataloader
< source >( eval_dataset: Optional = None )
Parameters
- eval_dataset (`torch.utils.data.Dataset`, *optional*) — If provided, will override `self.eval_dataset`. If it is a Dataset, columns not accepted by the `model.forward()` method are automatically removed. It must implement `__len__`.
Returns the evaluation `~torch.utils.data.DataLoader`.

Subclass of `transformers.Trainer.get_eval_dataloader` to precompute `ref_log_probs`.
Returns the training `~torch.utils.data.DataLoader`.

Subclass of `transformers.Trainer.get_train_dataloader` to precompute `ref_log_probs`.
Log `logs` on the various objects watching training, including stored metrics.
Context manager for handling null reference model (that is, peft adapter manipulation).
Tokenize a single row from a DPO specific dataset.
At this stage, we don’t convert to PyTorch tensors yet; we just handle the truncation in case the prompt + chosen or prompt + rejected responses is/are too long. First we truncate the prompt; if we’re still too long, we truncate the chosen/rejected.
We also create the labels for the chosen/rejected responses, which are of length equal to the sum of the length of the prompt and the chosen/rejected response, with label_pad_token_id for the prompt tokens.
DDPOConfig
class trl.DDPOConfig
< source >( exp_name: str = 'doc-buil' run_name: Optional = '' seed: int = 0 log_with: Optional = None tracker_kwargs: dict = <factory> accelerator_kwargs: dict = <factory> project_kwargs: dict = <factory> tracker_project_name: str = 'trl' logdir: str = 'logs' num_epochs: int = 100 save_freq: int = 1 num_checkpoint_limit: int = 5 mixed_precision: str = 'fp16' allow_tf32: bool = True resume_from: Optional = '' sample_num_steps: int = 50 sample_eta: float = 1.0 sample_guidance_scale: float = 5.0 sample_batch_size: int = 1 sample_num_batches_per_epoch: int = 2 train_batch_size: int = 1 train_use_8bit_adam: bool = False train_learning_rate: float = 0.0003 train_adam_beta1: float = 0.9 train_adam_beta2: float = 0.999 train_adam_weight_decay: float = 0.0001 train_adam_epsilon: float = 1e-08 train_gradient_accumulation_steps: int = 1 train_max_grad_norm: float = 1.0 train_num_inner_epochs: int = 1 train_cfg: bool = True train_adv_clip_max: float = 5 train_clip_range: float = 0.0001 train_timestep_fraction: float = 1.0 per_prompt_stat_tracking: bool = False per_prompt_stat_tracking_buffer_size: int = 16 per_prompt_stat_tracking_min_count: int = 16 async_reward_computation: bool = False max_workers: int = 2 negative_prompts: Optional = '' )
Configuration class for DDPOTrainer
DDPOTrainer
class trl.DDPOTrainer
< source >( config: DDPOConfig reward_function: Callable prompt_function: Callable sd_pipeline: DDPOStableDiffusionPipeline image_samples_hook: Optional = None )
Parameters
- **config** (`DDPOConfig`) — Configuration object for DDPOTrainer. Check the documentation of `DDPOConfig` for more details.
- **reward_function** (Callable[[torch.Tensor, Tuple[str], Tuple[Any]], torch.Tensor]) — Reward function to be used.
- **prompt_function** (Callable[[], Tuple[str, Any]]) — Function to generate prompts to guide the model.
- **sd_pipeline** (`DDPOStableDiffusionPipeline`) — Stable Diffusion pipeline to be used for training.
- **image_samples_hook** (Optional[Callable[[Any, Any, Any], Any]]) — Hook to be called to log images.
The DDPOTrainer uses Denoising Diffusion Policy Optimization to optimise diffusion models. Note that this trainer is heavily inspired by the work at https://github.com/kvablack/ddpo-pytorch. As of now, only Stable Diffusion based pipelines are supported.
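A minimal wiring sketch; the pipeline checkpoint, the constant reward, and the fixed prompt are placeholder assumptions (a real setup would score images with, for example, an aesthetic model):

```python
import torch
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline

def prompt_fn():
    # Return a prompt and any metadata associated with it.
    return "a photo of a cat", {}

def reward_fn(images, prompts, metadata):
    # Placeholder reward: one score per image; replace with a real image scorer.
    return torch.ones(len(images)), {}

pipeline = DefaultDDPOStableDiffusionPipeline("runwayml/stable-diffusion-v1-5")
trainer = DDPOTrainer(DDPOConfig(num_epochs=1), reward_fn, prompt_fn, pipeline)
trainer.train()
```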
calculate_loss
< source >( latents timesteps next_latents log_probs advantages embeds )
Parameters
- latents (torch.Tensor) — The latents sampled from the diffusion model, shape: [batch_size, num_channels_latents, height, width]
- timesteps (torch.Tensor) — The timesteps sampled from the diffusion model, shape: [batch_size]
- next_latents (torch.Tensor) — The next latents sampled from the diffusion model, shape: [batch_size, num_channels_latents, height, width]
- log_probs (torch.Tensor) — The log probabilities of the latents, shape: [batch_size]
- advantages (torch.Tensor) — The advantages of the latents, shape: [batch_size]
- embeds (torch.Tensor) — The embeddings of the prompts, shape: [2*batch_size or batch_size, …]. Note: the "or" is because if `train_cfg` is `True`, the negative prompt embeddings are concatenated to the embeds.
Calculate the loss for a batch of an unpacked sample
create_model_card
< source >( path: str model_name: Optional = 'TRL DDPO Model' )
Creates and saves a model card for a TRL model.
step
< source >( epoch: int global_step: int ) → global_step (int)
Perform a single step of training.
Side Effects:
- Model weights are updated
- Logs the statistics to the accelerator trackers.
- If `self.image_samples_callback` is not None, it will be called with the prompt_image_pairs, global_step, and the accelerator tracker.
Train the model for a given number of epochs.
IterativeSFTTrainer
class trl.IterativeSFTTrainer
< source >( model: PreTrainedModel = None args: TrainingArguments = None tokenizer: PreTrainedTokenizerBase = None optimizers: Tuple = (None, None) data_collator: Optional = None eval_dataset: Union = None max_length: Optional = None truncation_mode: Optional = 'keep_end' preprocess_logits_for_metrics: Optional = None compute_metrics: Optional = None optimize_device_cache: Optional = False )
Parameters
- **model** (`PreTrainedModel`) — Model to be optimized, either an `AutoModelForCausalLM` or an `AutoModelForSeq2SeqLM`. Check the documentation of `PreTrainedModel` for more details.
- **args** (`transformers.TrainingArguments`) — The arguments to use for training.
- **tokenizer** (`PreTrainedTokenizerBase`) — Tokenizer to be used for encoding the data. Check the documentation of `transformers.PreTrainedTokenizer` and `transformers.PreTrainedTokenizerFast` for more details.
- **optimizers** (`Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`) — The optimizer and scheduler to use for training.
- **data_collator** (Union[DataCollatorForLanguageModeling, DataCollatorForSeq2Seq], *optional*) — Data collator to be used for training and passed along to the dataloader.
- **eval_dataset** (`datasets.Dataset`) — The dataset to use for evaluation.
- **max_length** (`int`, defaults to `None`) — The maximum length of the input.
- **truncation_mode** (`str`, defaults to `keep_end`) — The truncation mode to use, either `keep_end` or `keep_start`.
- **preprocess_logits_for_metrics** (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`) — The function used to preprocess the logits before computing the metrics.
- **compute_metrics** (`Callable[[EvalPrediction], Dict]`, *optional*) — The function to use to compute the metrics. Must take an `EvalPrediction` and return a dictionary mapping strings to metric values.
- **optimize_device_cache** (`bool`, *optional*, defaults to `False`) — Optimize CUDA cache for slightly more memory-efficient training.
The IterativeSFTTrainer can be used to finetune models with methods that require intermediate steps (for example, generation or filtering) between optimization steps.
step
< source >( input_ids: Optional = None attention_mask: Optional = None labels: Optional = None texts: Optional = None texts_labels: Optional = None ) → dict[str, Any]
Parameters
- input_ids (List[`torch.LongTensor`]) — List of tensors containing the input_ids (if not provided, texts will be used)
- attention_mask (List[`torch.LongTensor`], *optional*) — List of tensors containing the attention_mask
- labels (List[`torch.FloatTensor`], *optional*) — List of tensors containing the labels (if set to None, will default to input_ids)
- texts (List[`str`], *optional*) — List of strings containing the text input (if not provided, input_ids will directly be used)
- texts_labels (List[`str`], *optional*) — List of strings containing the text labels (if set to None, will default to texts)
Returns
dict[str, Any]
A summary of the training statistics
Run an optimisation step given a list of input_ids, attention_mask, and labels or a list of text and text_labels.
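A sketch of an iterative loop, where texts are produced outside the trainer (for example by generation and filtering) and then passed to `step`; the model name and the inline texts are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import IterativeSFTTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

trainer = IterativeSFTTrainer(
    model=model,
    args=TrainingArguments(output_dir="iterative_sft", per_device_train_batch_size=2),
    tokenizer=tokenizer,
)

for _ in range(3):
    # In a real loop these texts would come from generation plus filtering/ranking.
    texts = ["Example text one.", "Example text two."]
    stats = trainer.step(texts=texts)
```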
set_seed
Helper function for reproducible behavior that sets the seed in `random`, `numpy`, and `torch`.
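For example:

```python
from trl import set_seed

set_seed(42)  # seeds Python's `random`, NumPy, and PyTorch
```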