CPO Trainer
Contrastive Preference Optimization (CPO) was introduced in the paper Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation by Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. At a high level, CPO trains models to avoid generating translations that are adequate but not perfect in machine translation (MT) tasks. However, CPO is a general approximation of the DPO loss and can also be applied to other domains, such as chat.
CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective.
Expected dataset format
The CPO trainer expects a format identical to the DPO trainer, which should include three entries. These entries should be named as follows:
- prompt
- chosen
- rejected

for example:
cpo_dataset_dict = {
    "prompt": [
        "hello",
        "how are you",
        "What is your name?",
        "What is your name?",
        "Which is the best programming language?",
        "Which is the best programming language?",
        "Which is the best programming language?",
    ],
    "chosen": [
        "hi nice to meet you",
        "I am fine",
        "My name is Mary",
        "My name is Mary",
        "Python",
        "Python",
        "Java",
    ],
    "rejected": [
        "leave me alone",
        "I am not fine",
        "Whats it to you?",
        "I dont have a name",
        "Javascript",
        "C++",
        "C++",
    ],
}
where the prompt contains the context inputs, chosen contains the corresponding chosen responses, and rejected contains the corresponding negative (rejected) responses. As can be seen, a prompt can have multiple responses, and this is reflected in the repeated entries in the dictionary’s value arrays.
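For anything beyond a toy example you would typically build this dictionary from an existing preference dataset. As a minimal sketch, the toy dictionary above can be wrapped into a datasets.Dataset so it can be passed to the trainer directly:
from datasets import Dataset

# Wrap the toy preference dictionary into a datasets.Dataset
train_dataset = Dataset.from_dict(cpo_dataset_dict)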
Expected model format
The CPO trainer expects a model of AutoModelForCausalLM, compared to PPO, which expects AutoModelForCausalLMWithValueHead for the value function.
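For example, a causal language model and its tokenizer can be loaded as usual; the checkpoint name below is only a placeholder:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; use the model you want to align
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    # the default data collator needs a pad token
    tokenizer.pad_token = tokenizer.eos_token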
Using the CPOTrainer
For a detailed example have a look at the examples/scripts/cpo.py script. At a high level we need to initialize the CPOTrainer with a model we wish to train. Note that CPOTrainer eliminates the need for a reference model, simplifying the optimization process. The beta refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above.
cpo_config = CPOConfig(
    beta=0.1,
)

cpo_trainer = CPOTrainer(
    model,
    args=cpo_config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
After this one can then call:
cpo_trainer.train()
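Once training is finished, the model can be saved with the standard Trainer API; the output path below is only an example:
cpo_trainer.save_model("cpo-aligned-model")  # also saves the tokenizer passed to the trainer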
Loss functions
Given the preference data, the CPOTrainer uses the sigmoid loss on the normalized likelihood via the logsigmoid to fit a logistic regression.

The RSO authors propose to use a hinge loss on the normalized likelihood from the SLiC paper. The CPOTrainer can be switched to this loss via the loss_type="hinge" argument, and the beta in this case is the reciprocal of the margin.

The IPO authors provide a deeper theoretical understanding of the CPO algorithms, identify an issue with overfitting, and propose an alternative loss which can be used via the loss_type="ipo" argument to the trainer. Note that the beta parameter is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair, so the smaller the beta the larger this gap is. As per the paper, the loss is averaged over the log-likelihoods of the completion (unlike CPO, which only sums them).
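For example, switching between the loss variants is done through the loss_type argument of CPOConfig; the output_dir and beta values below are only illustrative:
# default sigmoid loss
cpo_config = CPOConfig(output_dir="cpo-output", beta=0.1, loss_type="sigmoid")

# SLiC-style hinge loss (beta is the reciprocal of the margin)
cpo_config = CPOConfig(output_dir="cpo-output", beta=0.1, loss_type="hinge")

# IPO loss (beta is the reciprocal of the target log-likelihood gap)
cpo_config = CPOConfig(output_dir="cpo-output", beta=0.1, loss_type="ipo")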
Logging
While training and evaluating we record the following reward metrics:

- rewards/chosen: the mean log probabilities of the policy model for the chosen responses scaled by beta
- rewards/rejected: the mean log probabilities of the policy model for the rejected responses scaled by beta
- rewards/accuracies: mean of how often the chosen rewards are > than the corresponding rejected rewards
- rewards/margins: the mean difference between the chosen and corresponding rejected rewards
- nll_loss: the mean negative log likelihood loss of the policy model for the chosen responses
CPOTrainer
class trl.CPOTrainer
< source >( model: Union = None args: Optional = None data_collator: Optional = None train_dataset: Optional = None eval_dataset: Union = None tokenizer: Optional = None model_init: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None peft_config: Optional = None compute_metrics: Optional = None )
Parameters
- model (transformers.PreTrainedModel) — The model to train, preferably an AutoModelForCausalLM.
- args (CPOConfig) — The CPO config arguments to use for training.
- data_collator (transformers.DataCollator) — The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
- train_dataset (datasets.Dataset) — The dataset to use for training.
- eval_dataset (datasets.Dataset) — The dataset to use for evaluation.
- tokenizer (transformers.PreTrainedTokenizerBase) — The tokenizer to use for training. This argument is required if you want to use the default data collator.
- model_init (Callable[[], transformers.PreTrainedModel]) — The model initializer to use for training. If None is specified, the default model initializer will be used.
- callbacks (List[transformers.TrainerCallback]) — The callbacks to use for training.
- optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
- preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
- peft_config (Dict, defaults to None) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model (see the sketch below).
- compute_metrics (Callable[[EvalPrediction], Dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary string to metric values.
Initialize CPOTrainer.
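If you only want to train a parameter-efficient adapter rather than the full model, a PEFT configuration can be passed via peft_config. A minimal sketch with illustrative LoRA hyperparameters, reusing the names from the usage example above:
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# model, cpo_config, train_dataset and tokenizer as in the usage example above
cpo_trainer = CPOTrainer(
    model,
    args=cpo_config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)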
The Llama tokenizer does not satisfy enc(a + b) = enc(a) + enc(b). It does ensure enc(a + b) = enc(a) + enc(a + b)[len(enc(a)):].
Reference:
https://github.com/EleutherAI/lm-evaluation-harness/pull/531#issuecomment-1595586257
Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together.
We do this to avoid doing two forward passes, because it’s faster for FSDP.
concatenated_inputs
< source >( batch: Dict is_encoder_decoder: bool = False label_pad_token_id: int = -100 padding_value: int = 0 device: Optional = None )
Concatenate the chosen and rejected inputs into a single tensor.
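Conceptually (a simplified sketch, not the actual implementation), the chosen and rejected sequences are padded to a common length and stacked along the batch dimension so that a single forward pass scores both:
import torch
from torch.nn.utils.rnn import pad_sequence

# Dummy token ids for two prompts, each with a chosen and a rejected completion.
chosen_ids = [torch.tensor([1, 2, 3, 4]), torch.tensor([1, 2, 5])]
rejected_ids = [torch.tensor([1, 2, 6, 7, 8]), torch.tensor([1, 2, 9])]

# Pad everything to the longest sequence across both sets, then concatenate along
# the batch dimension: the first half of the batch is "chosen", the second half "rejected".
concatenated = pad_sequence(chosen_ids + rejected_ids, batch_first=True, padding_value=0)
attention_mask = (concatenated != 0).long()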
cpo_loss
< source >( policy_chosen_logps: FloatTensor policy_rejected_logps: FloatTensor ) → A tuple of three tensors
Returns
A tuple of three tensors
(losses, chosen_rewards, rejected_rewards). The losses tensor contains the CPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively.
Compute the CPO loss for a batch of policy model log probabilities.
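As a rough sketch of the default sigmoid variant (ignoring label smoothing, and leaving out the additional nll_loss term on the chosen responses that completes the CPO objective), the returned losses and rewards behave like:
import torch
import torch.nn.functional as F

beta = 0.1
# Dummy per-sequence log probabilities of the policy model, for illustration only.
policy_chosen_logps = torch.tensor([-10.0, -6.5])
policy_rejected_logps = torch.tensor([-14.0, -7.0])

logits = policy_chosen_logps - policy_rejected_logps
losses = -F.logsigmoid(beta * logits)              # per-example sigmoid preference loss
chosen_rewards = beta * policy_chosen_logps.detach()
rejected_rewards = beta * policy_rejected_logps.detach()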
evaluation_loop
< source >( dataloader: DataLoader description: str prediction_loss_only: Optional = None ignore_keys: Optional = None metric_key_prefix: str = 'eval' )
Overriding built-in evaluation loop to store metrics for each batch.
Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().

Works both with or without labels.
get_batch_logps
< source >( logits: FloatTensor labels: LongTensor average_log_prob: bool = False label_pad_token_id: int = -100 is_encoder_decoder: bool = False )
Compute the log probabilities of the given labels under the given logits.
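A simplified sketch of such a computation for a decoder-only model (the function name and exact handling are illustrative, not the trainer’s implementation):
import torch

def batch_logps_sketch(logits, labels, average_log_prob=False, label_pad_token_id=-100):
    # logits: (batch, seq_len, vocab_size), labels: (batch, seq_len)
    # Shift so that the token at position t is predicted from the logits at position t - 1.
    labels = labels[:, 1:].clone()
    logits = logits[:, :-1, :]
    mask = labels != label_pad_token_id
    labels[labels == label_pad_token_id] = 0  # dummy index so gather is valid
    per_token_logps = torch.gather(
        logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)
    ).squeeze(2)
    if average_log_prob:
        return (per_token_logps * mask).sum(-1) / mask.sum(-1)
    return (per_token_logps * mask).sum(-1)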
Compute the CPO loss and other metrics for the given batch of inputs for train or test.
Generate samples from the model and reference model for the given batch of inputs.
Log logs on the various objects watching training, including stored metrics.
Tokenize a single row from a CPO specific dataset.
At this stage, we don’t convert to PyTorch tensors yet; we just handle the truncation in case the prompt + chosen or prompt + rejected responses is/are too long. First we truncate the prompt; if we’re still too long, we truncate the chosen/rejected.
We also create the labels for the chosen/rejected responses, which are of length equal to the sum of the length of the prompt and the chosen/rejected response, with label_pad_token_id for the prompt tokens.
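A rough sketch of the label construction described above, with dummy token ids: the prompt positions are masked with label_pad_token_id so that only the response tokens contribute to the loss:
label_pad_token_id = -100

prompt_ids = [101, 2054, 2003]          # dummy prompt token ids
chosen_ids = [2026, 2171, 2003, 102]    # dummy chosen-response token ids

input_ids = prompt_ids + chosen_ids
labels = [label_pad_token_id] * len(prompt_ids) + chosen_ids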
CPOConfig
class trl.CPOConfig
< source >( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: Union = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: Optional = None per_gpu_eval_batch_size: Optional = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: Optional = None eval_delay: Optional = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: Union = 'linear' lr_scheduler_kwargs: Union = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: Optional = 'passive' log_level_replica: Optional = 'warning' log_on_each_node: bool = True logging_dir: Optional = None logging_strategy: Union = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: Union = 'steps' save_steps: float = 500 save_total_limit: Optional = None save_safetensors: Optional = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: Optional = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: Optional = None local_rank: int = -1 ddp_backend: Optional = None tpu_num_cores: Optional = None tpu_metrics_debug: bool = False debug: Union = '' dataloader_drop_last: bool = False eval_steps: Optional = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: Optional = None past_index: int = -1 run_name: Optional = None disable_tqdm: Optional = None remove_unused_columns: Optional = True label_names: Optional = None load_best_model_at_end: Optional = False metric_for_best_model: Optional = None greater_is_better: Optional = None ignore_data_skip: bool = False fsdp: Union = '' fsdp_min_num_params: int = 0 fsdp_config: Union = None fsdp_transformer_layer_cls_to_wrap: Optional = None accelerator_config: Union = None deepspeed: Union = None label_smoothing_factor: float = 0.0 optim: Union = 'adamw_torch' optim_args: Optional = None adafactor: bool = False group_by_length: bool = False length_column_name: Optional = 'length' report_to: Union = None ddp_find_unused_parameters: Optional = None ddp_bucket_cap_mb: Optional = None ddp_broadcast_buffers: Optional = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: Optional = None hub_model_id: Optional = None hub_strategy: Union = 'every_save' hub_token: Optional = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False gradient_checkpointing_kwargs: Union = None include_inputs_for_metrics: bool = False eval_do_concat_batches: bool = True fp16_backend: str = 'auto' evaluation_strategy: Union = None push_to_hub_model_id: Optional = None push_to_hub_organization: Optional = None push_to_hub_token: Optional = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: Optional = None ray_scope: Optional = 
'last' ddp_timeout: Optional = 1800 torch_compile: bool = False torch_compile_backend: Optional = None torch_compile_mode: Optional = None dispatch_batches: Optional = None split_batches: Optional = None include_tokens_per_second: Optional = False include_num_input_tokens_seen: Optional = False neftune_noise_alpha: Optional = None optim_target_modules: Union = None batch_eval_metrics: bool = False eval_on_start: bool = False max_length: Optional = None max_prompt_length: Optional = None max_completion_length: Optional = None max_target_length: Optional = None beta: float = 0.1 label_smoothing: float = 0 loss_type: Literal = 'sigmoid' disable_dropout: bool = True label_pad_token_id: int = -100 padding_value: int = None truncation_mode: str = 'keep_end' generate_during_eval: bool = False is_encoder_decoder: Optional = None model_init_kwargs: Optional = None dataset_num_proc: Optional = None )
Parameters
- max_length (int, defaults to None) — The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
- max_prompt_length (int, defaults to None) — The maximum length of the prompt. This argument is required if you want to use the default data collator.
- max_target_length (int, defaults to None) — The maximum length of the target. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
- beta (float, defaults to 0.1) — The beta factor in the CPO loss.
- label_smoothing (float, defaults to 0) — The label smoothing factor. This argument is required if you want to use the default data collator.
- loss_type (str, defaults to sigmoid) — The type of loss to use. This argument is required if you want to use the default data collator.
- label_pad_token_id (int, defaults to -100) — The label pad token id. This argument is required if you want to use the default data collator.
- padding_value (int, defaults to None) — The padding value if it is different from the tokenizer’s pad_token_id.
- truncation_mode (str, defaults to keep_end) — The truncation mode to use, either keep_end or keep_start. This argument is required if you want to use the default data collator.
- generate_during_eval (bool, defaults to False) — Whether to sample and log generations during the evaluation step.
- is_encoder_decoder (Optional[bool], optional, defaults to None) — If no model is provided, we need to know if the model_init returns an encoder-decoder.
- disable_dropout (bool, defaults to True) — Whether or not to disable dropout in model.
- model_init_kwargs (Optional[Dict], optional) — Dict of optional kwargs to pass when instantiating the model from a string.
- dataset_num_proc (Optional[int], optional) — The number of workers to use to tokenize the data. Defaults to None.
CPOConfig collects all training arguments related to the CPOTrainer class. Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
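For example, a minimal sketch of parsing a CPOConfig from the command line:
from transformers import HfArgumentParser
from trl import CPOConfig

parser = HfArgumentParser(CPOConfig)
cpo_config = parser.parse_args_into_dataclasses()[0]  # e.g. python train.py --output_dir cpo-output --beta 0.1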