
Iterative Trainer

Iterative fine-tuning is a training method that enables you to perform custom actions (for example, generation and filtering) between optimization steps. In TRL, we provide an easy-to-use API to fine-tune your models iteratively in just a few lines of code.

Usage

To get started quickly, instantiate a model and a tokenizer.


from transformers import AutoModelForCausalLM, AutoTokenizer

from trl import IterativeSFTTrainer

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

trainer = IterativeSFTTrainer(
    model,
    processing_class=tokenizer,
)

You can pass either a list of tensors or a list of strings to the step function.

Using a list of tensors as input:


inputs = {
    "input_ids": input_ids,
    "attention_mask": attention_mask
}

trainer.step(**inputs)
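
For example, you could build these lists by tokenizing texts yourself. This is a minimal sketch: the example texts are placeholders, and it assumes the tokenizer and trainer created above.


# Placeholder texts; in practice these come from your own data pipeline.
texts = ["The capital of France is Paris.", "TRL provides an iterative trainer."]

encodings = [tokenizer(text, return_tensors="pt") for text in texts]
input_ids = [encoding["input_ids"].squeeze(0) for encoding in encodings]
attention_mask = [encoding["attention_mask"].squeeze(0) for encoding in encodings]

trainer.step(input_ids=input_ids, attention_mask=attention_mask)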

Using a list of strings as input:


inputs = {
    "texts": texts
}

trainer.step(**inputs)

For causal language models, labels will automatically be created from input_ids or from texts. When using sequence-to-sequence models, you will have to provide your own labels or texts_labels.
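
For example, with a sequence-to-sequence model you could pass source texts and target texts explicitly. This is a minimal sketch: the checkpoint name and the example texts are placeholders, not part of the API.


from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

from trl import IterativeSFTTrainer

# Placeholder seq2seq checkpoint and data, shown only to illustrate the call.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

trainer = IterativeSFTTrainer(model, processing_class=tokenizer)

trainer.step(
    texts=["Translate to French: Hello, how are you?"],
    texts_labels=["Bonjour, comment allez-vous ?"],
)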

IterativeSFTTrainer

class trl.IterativeSFTTrainer


( model: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None args: typing.Optional[transformers.training_args.TrainingArguments] = None processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None optimizers: tuple = (None, None) data_collator: typing.Optional[transformers.data.data_collator.DataCollator] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None max_length: typing.Optional[int] = None truncation_mode: typing.Optional[str] = 'keep_end' preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], dict]] = None optimize_device_cache: typing.Optional[bool] = False )

Parameters

  • model (PreTrainedModel) — Model to be optimized, either an AutoModelForCausalLM or an AutoModelForSeq2SeqLM. Check the documentation of PreTrainedModel for more details.
  • args (transformers.TrainingArguments) — The arguments to use for training.
  • processing_class (PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin, optional) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
  • optimizers (tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
  • data_collator (Union[DataCollatorForLanguageModeling, DataCollatorForSeq2Seq], optional) — Data collator to be used for training and passed along the dataloader.
  • eval_dataset (datasets.Dataset) — The dataset to use for evaluation.
  • max_length (int, defaults to None) — The maximum length of the input.
  • truncation_mode (str, defaults to keep_end) — The truncation mode to use, either keep_end or keep_start.
  • preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
  • compute_metrics (Callable[[EvalPrediction], dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.
  • optimize_device_cache (bool, optional, defaults to False) — Optimize CUDA cache for slightly more memory-efficient training.

The IterativeSFTTrainer can be used to fine-tune models with methods that require some steps between optimization steps.
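
As a minimal sketch of such a workflow, the loop below alternates generation, filtering, and optimization. It assumes the model, tokenizer, and trainer created in the Usage section above; the prompts, the generation settings, and the filtering rule are placeholders, not part of TRL.


# Placeholder prompts; in practice these come from your own data pipeline.
prompts = ["The capital of France is", "The largest planet is"]

# Custom action between optimization steps: generate with the current model.
generated_texts = []
for prompt in prompts:
    encoded = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**encoded, max_new_tokens=32)
    generated_texts.append(tokenizer.decode(output[0], skip_special_tokens=True))

# Custom action: filter the generations (placeholder rule: keep non-empty text).
selected = [text for text in generated_texts if text.strip()]

# Optimization step on the selected samples.
if selected:
    trainer.step(texts=selected)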

create_model_card


( model_name: typing.Optional[str] = None dataset_name: typing.Optional[str] = None tags: typing.Union[str, list[str], NoneType] = None )

Parameters

  • model_name (str, optional, defaults to None) — The name of the model.
  • dataset_name (str, optional, defaults to None) — The name of the dataset used for training.
  • tags (str, list[str] or None, optional, defaults to None) — Tags to be associated with the model card.

Creates a draft of a model card using the information available to the Trainer.

step


( input_ids: typing.Optional[list[torch.LongTensor]] = None attention_mask: typing.Optional[list[torch.LongTensor]] = None labels: typing.Optional[list[torch.LongTensor]] = None texts: typing.Optional[list[str]] = None texts_labels: typing.Optional[list[str]] = None ) dict[str, Any]

Parameters

  • input_ids (list[torch.LongTensor]) — List of tensors containing the input_ids (if not provided, texts will be used)
  • attention_mask (list[torch.LongTensor], optional) — List of tensors containing the attention_mask
  • labels (list[torch.LongTensor], optional) — List of tensors containing the labels (if set to None, will default to input_ids)
  • texts (list[str], optional) — List of strings containing the text input (if not provided, input_ids will directly be used)
  • texts_labels (list[str], optional) — List of strings containing the text labels (if set to None, will default to texts)

Returns

dict[str, Any]

A summary of the training statistics

Run an optimization step given a list of input_ids, attention_mask, and labels, or a list of texts and texts_labels.
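
For example (a minimal sketch; the exact keys of the returned dictionary depend on the training configuration):


stats = trainer.step(texts=["An example training sample."])
print(stats)  # summary of the training statistics for this step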
