TRL documentation

BCO Trainer

TRL supports Binary Classifier Optimization (BCO). The BCO authors train a binary classifier whose logit serves as a reward, so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0. For a full example, have a look at examples/scripts/bco.py.
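
Concretely, the implicit reward is the scaled log-ratio between the policy and the reference model, and the classifier is trained with a binary cross-entropy objective. As a rough sketch in the notation of the BCO paper, where δ is the reward shift (tracked by the trainer as a running mean of the implicit rewards):

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
$$

$$
\mathcal{L}_{\mathrm{BCO}}(\theta) = -\,\mathbb{E}_{(x, y)\sim\mathcal{D}_{\mathrm{chosen}}}\left[\log \sigma\!\left(r_\theta(x, y) - \delta\right)\right] \;-\; \mathbb{E}_{(x, y)\sim\mathcal{D}_{\mathrm{rejected}}}\left[\log \sigma\!\left(-\left(r_\theta(x, y) - \delta\right)\right)\right]
$$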

Expected dataset type

The BCOTrainer requires an unpaired preference dataset. It supports both conversational and standard dataset formats; when provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
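
For reference, a minimal sketch of a single example in the standard (non-conversational) unpaired preference format; the texts are placeholders, and label marks whether the completion is desirable:

example = {
    "prompt": "What color is the sky?",
    "completion": "It is blue.",
    "label": True,  # True = desirable (thumbs-up), False = undesirable (thumbs-down)
}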

Expected model format

The BCO trainer expects a model of type AutoModelForCausalLM, in contrast to PPO, which expects AutoModelForCausalLMWithValueHead for the value function.
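
For example, the policy and reference model can be loaded as plain causal LMs (the checkpoint name below is only a placeholder):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
model_ref = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)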

Using the BCOTrainer

For a detailed example, have a look at the examples/scripts/bco.py script. At a high level, we need to initialize the BCOTrainer with the model we wish to train and a reference model ref_model, which is used to calculate the implicit rewards of the preferred and rejected responses.

The beta refers to the hyperparameter of the implicit reward, and the dataset contains the three columns described above (prompt, completion, and label). Note that model and ref_model need to have the same architecture (i.e. decoder-only or encoder-decoder).

from trl import BCOConfig, BCOTrainer

training_args = BCOConfig(
    beta=0.1,
)

bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)

After this, one can then call:

bco_trainer.train()
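
Once training finishes, the fine-tuned model can be saved (and optionally pushed to the Hub) with the usual Trainer methods, for example:

bco_trainer.save_model(training_args.output_dir)
# bco_trainer.push_to_hub()  # optional, requires hub-related settings in BCOConfig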

Underlying Distribution Matching (UDM)

In practical scenarios, the thumbs-up and thumbs-down datasets are likely to have divergent underlying distributions of prompts. Consider an LLM deployed for user feedback: if the model excels in writing tasks but underperforms in coding, the thumbs-up dataset will be dominated by writing-related prompts, while the thumbs-down dataset will contain mostly coding-related prompts.
If the prompts in your desired and undesired datasets differ a lot, it is useful to enable UDM.

Choose an embedding model and tokenizer:

from functools import partial

from accelerate import Accelerator
from transformers import AutoModel, AutoTokenizer

embedding_model = AutoModel.from_pretrained(your_model_id)
embedding_tokenizer = AutoTokenizer.from_pretrained(your_model_id)

# customize this function depending on your embedding model
def embed_prompt(input_ids, attention_mask, model):
    outputs = model(input_ids=input_ids, attention_mask=attention_mask)
    return outputs.last_hidden_state.mean(dim=1)

embedding_model = Accelerator().prepare_model(embedding_model)
embedding_func = partial(embed_prompt, model=embedding_model)

Set prompt_sample_size to define how many prompts are selected to train the UDM classifier, and start the training with the provided embedding function:

training_args = BCOConfig(
    beta=0.1,
    prompt_sample_size=512,
)

bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    embedding_func=embedding_func,
    embedding_tokenizer=embedding_tokenizer,
)

bco_trainer.train()

For Mixture of Experts Models: Enabling the auxiliary loss

MoEs are most efficient if the load is roughly equally distributed between the experts.
To ensure that we train MoEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.

This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001).
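
As a rough sketch (the checkpoint name is a placeholder), both options can be passed when loading an MoE model, since from_pretrained forwards unknown keyword arguments to the model config:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",   # placeholder; any MoE checkpoint works the same way
    output_router_logits=True,       # add the router load-balancing loss to the LM loss
    router_aux_loss_coef=0.001,      # weight of the auxiliary loss in the total loss
)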

BCOTrainer

class trl.BCOTrainer

( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] = None ref_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None args: BCOConfig = None train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, typing.Dict[str, datasets.arrow_dataset.Dataset], NoneType] = None processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None data_collator: typing.Optional[transformers.data.data_collator.DataCollator] = None model_init: typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None peft_config: typing.Optional[typing.Dict] = None compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], typing.Dict]] = None model_adapter_name: typing.Optional[str] = None ref_adapter_name: typing.Optional[str] = None embedding_func: typing.Optional[typing.Callable] = None embedding_tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None )

Parameters

  • model (transformers.PreTrainedModel) — The model to train, preferably an AutoModelForCausalLM.
  • ref_model (PreTrainedModelWrapper) — Hugging Face transformer model with a causal language modeling head. Used for implicit reward computation and loss. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized.
  • args (BCOConfig) — The arguments to use for training.
  • train_dataset (datasets.Dataset) — The dataset to use for training.
  • eval_dataset (datasets.Dataset) — The dataset to use for evaluation.
  • processing_class (PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin, optional) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
  • data_collator (transformers.DataCollator, optional, defaults to None) — The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used, which will pad the sequences to the maximum length of the sequences in the batch.
  • model_init (Callable[[], transformers.PreTrainedModel]) — The model initializer to use for training. If None is specified, the default model initializer will be used.
  • callbacks (List[transformers.TrainerCallback]) — The callbacks to use for training.
  • optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
  • preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
  • peft_config (Dict, defaults to None) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model.
  • disable_dropout (bool, defaults to True) — Whether or not to disable dropouts in model and ref_model.
  • compute_metrics (Callable[[EvalPrediction], Dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.
  • model_adapter_name (str, defaults to None) — Name of the train target PEFT adapter, when using LoRA with multiple adapters.
  • ref_adapter_name (str, defaults to None) — Name of the reference PEFT adapter, when using LoRA with multiple adapters.

Initialize BCOTrainer from the BCO paper.

bco_loss

( policy_chosen_logps: FloatTensor policy_rejected_logps: FloatTensor reference_chosen_logps: FloatTensor reference_rejected_logps: FloatTensor chosen_embeddings: typing.Optional[torch.FloatTensor] rejected_embeddings: typing.Optional[torch.FloatTensor] ) A tuple of four tensors

Parameters

  • policy_chosen_logps — Log probabilities of the policy model for the chosen responses. Shape: (num(chosen) in batch_size,)
  • policy_rejected_logps — Log probabilities of the policy model for the rejected responses. Shape: (num(rejected) in batch_size,)
  • reference_chosen_logps — Log probabilities of the reference model for the chosen responses. Shape: (num(chosen) in batch_size,)
  • reference_rejected_logps — Log probabilities of the reference model for the rejected responses. Shape: (num(rejected) in batch_size,)
  • chosen_embeddings — embeddings of desirable prompts
  • rejected_embeddings — embeddings of undesirable prompts

Returns

A tuple of four tensors

(losses, chosen_rewards, rejected_rewards, delta). The losses tensor contains the BCO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively. The delta value contains the moving average of all implicit rewards.

Compute the BCO loss for a batch of policy and reference model log probabilities.

compute_reference_log_probs

( padded_batch: typing.Dict )

Computes log probabilities of the reference model for a single padded batch of a BCO specific dataset.

create_model_card

( model_name: typing.Optional[str] = None dataset_name: typing.Optional[str] = None tags: typing.Union[str, typing.List[str], NoneType] = None )

Parameters

  • model_name (str, optional, defaults to None) — The name of the model.
  • dataset_name (str, optional, defaults to None) — The name of the dataset used for training.
  • tags (str, List[str] or None, optional, defaults to None) — Tags to be associated with the model card.

Creates a draft of a model card using the information available to the Trainer.

evaluation_loop

( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )

Overriding built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().

Works both with or without labels.

generate_from_model_and_ref

( model batch: typing.Dict[str, torch.LongTensor] )

Generate samples from the model and reference model for the given batch of inputs.

get_batch_logps

( logits: FloatTensor labels: LongTensor average_log_prob: bool = False label_pad_token_id: int = -100 is_encoder_decoder: bool = False )

Parameters

  • logits — Logits of the model (unnormalized). Shape: (batch_size, sequence_length, vocab_size)
  • labels — Labels for which to compute the log probabilities. Label tokens with a value of label_pad_token_id are ignored. Shape: (batch_size, sequence_length)
  • average_log_prob — If True, return the average log probability per (non-masked) token. Otherwise, return the sum of the log probabilities of the (non-masked) tokens.

Compute the log probabilities of the given labels under the given logits.
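
For intuition, a minimal standalone sketch of this computation for the decoder-only case (gather the log-probability of each label token, mask padded positions, then sum or average); this is illustrative rather than the exact TRL implementation:

import torch
import torch.nn.functional as F

def batch_logps(logits, labels, average_log_prob=False, label_pad_token_id=-100):
    # shift so that tokens at position t are predicted from logits at position t - 1
    logits = logits[:, :-1, :]
    labels = labels[:, 1:].clone()
    mask = labels != label_pad_token_id
    labels[labels == label_pad_token_id] = 0  # dummy index so gather is valid on padded positions
    per_token_logps = torch.gather(
        F.log_softmax(logits, dim=-1), dim=2, index=labels.unsqueeze(2)
    ).squeeze(2)
    if average_log_prob:
        return (per_token_logps * mask).sum(-1) / mask.sum(-1)
    return (per_token_logps * mask).sum(-1)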

get_batch_loss_metrics

( model batch: typing.Dict[str, typing.Union[typing.List, torch.LongTensor]] )

Compute the BCO loss and other metrics for the given batch of inputs for train or test.

get_eval_dataloader

( eval_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None )

Parameters

  • eval_dataset (torch.utils.data.Dataset, optional) — If provided, will override self.eval_dataset. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.

Returns the evaluation torch.utils.data.DataLoader.

Subclass of transformers.Trainer.get_eval_dataloader to precompute ref_log_probs.

get_train_dataloader

( )

Returns the training torch.utils.data.DataLoader.

Subclass of transformers.Trainer.get_train_dataloader to precompute ref_log_probs.

log

( logs: typing.Dict[str, float] start_time: typing.Optional[float] = None )

Parameters

  • logs (Dict[str, float]) — The values to log.
  • start_time (float or None, optional, defaults to None) — Start time of the training.

Log logs on the various objects watching training, including stored metrics.

null_ref_context

( )

Context manager for handling null reference model (that is, peft adapter manipulation).

BCOConfig

class trl.BCOConfig

( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 torch_empty_cache_steps: typing.Optional[int] = None learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs: typing.Union[dict, str, NoneType] = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' log_level_replica: typing.Optional[str] = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: typing.Optional[bool] = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: typing.Optional[int] = None past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params: int = 0 fsdp_config: typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None accelerator_config: typing.Union[dict, str, NoneType] = None deepspeed: typing.Union[dict, str, NoneType] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters: 
typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False gradient_checkpointing_kwargs: typing.Union[dict, str, NoneType] = None include_inputs_for_metrics: bool = False include_for_metrics: typing.List[str] = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: typing.Optional[int] = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None dispatch_batches: typing.Optional[bool] = None split_batches: typing.Optional[bool] = None include_tokens_per_second: typing.Optional[bool] = False include_num_input_tokens_seen: typing.Optional[bool] = False neftune_noise_alpha: typing.Optional[float] = None optim_target_modules: typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: typing.Optional[bool] = False eval_use_gather_object: typing.Optional[bool] = False average_tokens_across_devices: typing.Optional[bool] = False max_length: typing.Optional[int] = None max_prompt_length: typing.Optional[int] = None max_completion_length: typing.Optional[int] = None beta: float = 0.1 label_pad_token_id: int = -100 padding_value: typing.Optional[int] = None truncation_mode: str = 'keep_end' generate_during_eval: bool = False is_encoder_decoder: typing.Optional[bool] = None precompute_ref_log_probs: bool = False model_init_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None ref_model_init_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None dataset_num_proc: typing.Optional[int] = None prompt_sample_size: int = 1024 min_density_ratio: float = 0.5 max_density_ratio: float = 10.0 )

Parameters

  • max_length (Optional[int], optional, defaults to None) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator.
  • max_prompt_length (Optional[int], optional, defaults to None) — Maximum length of the prompt. This argument is required if you want to use the default data collator.
  • max_completion_length (Optional[int], optional, defaults to None) — Maximum length of the completion. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
  • beta (float, optional, defaults to 0.1) — Parameter controlling the deviation from the reference model. Higher β means less deviation from the reference model.
  • label_pad_token_id (int, optional, defaults to -100) — Label pad token id. This argument is required if you want to use the default data collator.
  • padding_value (Optional[int], optional, defaults to None) — Padding value to use. If None, the padding value of the tokenizer is used.
  • truncation_mode (str, optional, defaults to "keep_end") — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start". This argument is required if you want to use the default data collator.
  • generate_during_eval (bool, optional, defaults to False) — If True, generates and logs completions from both the model and the reference model to W&B during evaluation.
  • is_encoder_decoder (Optional[bool], optional, defaults to None) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify if the model returned by the callable is an encoder-decoder model.
  • precompute_ref_log_probs (bool, optional, defaults to False) — Whether to precompute reference model log probabilities for training and evaluation datasets. This is useful when training without the reference model to reduce the total GPU memory needed.
  • model_init_kwargs (Optional[Dict[str, Any]], optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string.
  • ref_model_init_kwargs (Optional[Dict[str, Any]], optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the reference model from a string.
  • dataset_num_proc (Optional[int], optional, defaults to None) — Number of processes to use for processing the dataset.
  • prompt_sample_size (int, optional, defaults to 1024) — Number of prompts that are fed to density ratio classifier.
  • min_density_ratio (float, optional, defaults to 0.5) — Minimum value of the density ratio. The estimated density ratio is clamped to this value.
  • max_density_ratio (float, optional, defaults to 10.0) — Maximum value of the density ratio. The estimated density ratio is clamped to this value.

Configuration class for the BCOTrainer.

Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
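
A minimal sketch of what that looks like in a training script (the script name and values below are placeholders):

from transformers import HfArgumentParser
from trl import BCOConfig

parser = HfArgumentParser(BCOConfig)
# e.g. python train_bco.py --output_dir bco-model --beta 0.1 --prompt_sample_size 512
training_args = parser.parse_args_into_dataclasses()[0]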
