Trainer¶
The Trainer and TFTrainer classes provide an API for feature-complete training in most standard use cases. They are used in most of the example scripts.
Before instantiating your Trainer/TFTrainer, create a TrainingArguments/TFTrainingArguments to access all the points of customization during training.
The API supports distributed training on multiple GPUs/TPUs, mixed precision through NVIDIA Apex and Native AMP for PyTorch, and tf.keras.mixed_precision for TensorFlow.
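For a typical use case you create a TrainingArguments, pass it to a Trainer together with a model and datasets, and call train(). The sketch below assumes model, train_dataset and eval_dataset have already been prepared for your task (they are placeholders, not part of this API reference):

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # where checkpoints and outputs are written
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=8,   # batch size per GPU/TPU core/CPU
    logging_steps=500,               # log every 500 update steps
)

trainer = Trainer(
    model=model,                     # a PreTrainedModel or torch.nn.Module
    args=training_args,
    train_dataset=train_dataset,     # torch.utils.data.Dataset or datasets.Dataset
    eval_dataset=eval_dataset,
)

trainer.train()
metrics = trainer.evaluate()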
Both Trainer and TFTrainer contain the basic training loop, which supports the above features. To inject custom behavior you can subclass them and override the following methods:
get_train_dataloader/get_train_tfdataset – Creates the training DataLoader (PyTorch) or TF Dataset.
get_eval_dataloader/get_eval_tfdataset – Creates the evaluation DataLoader (PyTorch) or TF Dataset.
get_test_dataloader/get_test_tfdataset – Creates the test DataLoader (PyTorch) or TF Dataset.
log – Logs information on the various objects watching training.
create_optimizer_and_scheduler – Sets up the optimizer and learning rate scheduler if they were not passed at init. Note that you can also subclass or override the create_optimizer and create_scheduler methods separately.
create_optimizer – Sets up the optimizer if it wasn't passed at init.
create_scheduler – Sets up the learning rate scheduler if it wasn't passed at init.
compute_loss – Computes the loss on a batch of training inputs.
training_step – Performs a training step.
prediction_step – Performs an evaluation/test step.
run_model (TensorFlow only) – Basic pass through the model.
evaluate – Runs an evaluation loop and returns metrics.
predict – Returns predictions (with metrics if labels are available) on a test set.
Warning
The Trainer class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it on other models. When using it on your own model, make sure:
your model always returns tuples or subclasses of ModelOutput.
your model can compute the loss if a labels argument is provided, and that loss is returned as the first element of the tuple (if your model returns tuples).
your model can accept multiple label arguments (use label_names in your TrainingArguments to indicate their names to the Trainer), but none of them should be named "label".
Here is an example of how to customize Trainer
using a custom loss function for multi-label
classification:
from torch import nn
from transformers import Trainer
class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = nn.BCEWithLogitsLoss()
        loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                        labels.float().view(-1, self.model.config.num_labels))
        return (loss, outputs) if return_outputs else loss
Another way to customize the training loop behavior for the PyTorch Trainer is to use callbacks, which can inspect the training loop state (for progress reporting, logging to TensorBoard or other ML platforms, etc.) and make decisions (like early stopping).
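For instance, a built-in callback such as EarlyStoppingCallback can be passed at init. This is a sketch only; it assumes the TrainingArguments enable periodic evaluation (evaluation_strategy), load_best_model_at_end=True and a metric_for_best_model:

from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,                    # placeholder model
    args=training_args,             # placeholder TrainingArguments configured as described above
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    # stop training if the monitored metric has not improved for 3 evaluations
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)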
Trainer¶
-
class
transformers.
Trainer
(model: torch.nn.modules.module.Module = None, args: transformers.training_args.TrainingArguments = None, data_collator: Optional[NewType.<locals>.new_type] = None, train_dataset: Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, tokenizer: Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: Callable[transformers.modeling_utils.PreTrainedModel] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, callbacks: Optional[List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None))[source]¶ Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers.
- Parameters
model (
PreTrainedModel
ortorch.nn.Module
, optional) –The model to train, evaluate or use for predictions. If not provided, a
model_init
must be passed.Note
Trainer
is optimized to work with thePreTrainedModel
provided by the library. You can still use your own models defined astorch.nn.Module
as long as they work the same way as the 🤗 Transformers models.args (
TrainingArguments
, optional) – The arguments to tweak for training. Will default to a basic instance ofTrainingArguments
with theoutput_dir
set to a directory named tmp_trainer in the current directory if not provided.data_collator (
DataCollator
, optional) – The function to use to form a batch from a list of elements oftrain_dataset
oreval_dataset
. Will default todefault_data_collator()
if notokenizer
is provided, an instance ofDataCollatorWithPadding()
otherwise.train_dataset (
torch.utils.data.dataset.Dataset
ortorch.utils.data.dataset.IterableDataset
, optional) –The dataset to use for training. If it is an
datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed.Note that if it’s a
torch.utils.data.dataset.IterableDataset
with some randomization and you are training in a distributed fashion, your iterable dataset should either use an internal attribute generator
that is atorch.Generator
for the randomization that must be identical on all processes (and the Trainer will manually set the seed of thisgenerator
at each epoch) or have aset_epoch()
method that internally sets the seed of the RNGs used.eval_dataset (
torch.utils.data.dataset.Dataset
, optional) – The dataset to use for evaluation. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed.tokenizer (
PreTrainedTokenizerBase
, optional) – The tokenizer used to preprocess the data. If provided, will be used to automatically pad the inputs to the maximum length when batching inputs, and it will be saved along with the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.model_init (
Callable[[], PreTrainedModel]
, optional) –A function that instantiates the model to be used. If provided, each call to
train()
will start from a new instance of the model as given by this function.The function may have zero arguments, or a single one containing the optuna/Ray Tune trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities, etc.).
compute_metrics (
Callable[[EvalPrediction], Dict]
, optional) – The function that will be used to compute metrics at evaluation. Must take an EvalPrediction
and return a dictionary mapping strings to metric values.callbacks (List of
TrainerCallback
, optional) –A list of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in here.
If you want to remove one of the default callbacks used, use the
Trainer.remove_callback()
method.optimizers (
Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]
, optional) – A tuple containing the optimizer and the scheduler to use. Will default to an instance ofAdamW
on your model and a scheduler given byget_linear_schedule_with_warmup()
controlled byargs
.
Important attributes:
model – Always points to the core model. If using a transformers model, it will be a
PreTrainedModel
subclass.model_wrapped – Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under
DeepSpeed
, the inner model is wrapped inDeepSpeed
and then again intorch.nn.DistributedDataParallel
. If the inner model hasn’t been wrapped, thenself.model_wrapped
is the same asself.model
.is_model_parallel – Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs).
place_model_on_device – Whether or not to automatically place the model on the device - it will be set to
False
if model parallel or deepspeed is used, or if the defaultTrainingArguments.place_model_on_device
is overridden to returnFalse
.is_in_train – Whether or not a model is currently running
train
(e.g. whenevaluate
is called while intrain
)
-
add_callback
(callback)[source]¶ Add a callback to the current list of
TrainerCallback
.- Parameters
callback (
type
orTrainerCallback
) – ATrainerCallback
class or an instance of aTrainerCallback
. In the first case, will instantiate a member of that class.
-
compute_loss
(model, inputs, return_outputs=False)[source]¶ How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
-
create_optimizer
()[source]¶ Setup the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through
optimizers
, or subclass and override this method in a subclass.
-
create_optimizer_and_scheduler
(num_training_steps: int)[source]¶ Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through
optimizers
, or subclass and override this method (orcreate_optimizer
and/orcreate_scheduler
) in a subclass.
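For example, instead of overriding these methods you can pass your own optimizer and scheduler at init. A sketch, where the learning rate, warmup and total step count are illustrative values:

import torch
from transformers import Trainer, get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=10000
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),  # replaces the default AdamW + linear schedule
)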
-
create_scheduler
(num_training_steps: int)[source]¶ Setup the scheduler. The optimizer of the trainer must have been set up before this method is called.
- Parameters
num_training_steps (int) – The number of training steps to do.
-
evaluate
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init
compute_metrics
argument).You can also subclass and override this method to inject custom behavior.
- Parameters
eval_dataset (
Dataset
, optional) – Pass a dataset if you wish to overrideself.eval_dataset
. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. It must implement the__len__
method.ignore_keys (
List[str]
, optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.metric_key_prefix (
str
, optional, defaults to"eval"
) – An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is “eval” (default)
- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state.
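A small usage sketch (the trainer and the alternative dataset are placeholders):

metrics = trainer.evaluate()                 # evaluates on self.eval_dataset
print(metrics["eval_loss"], metrics.get("epoch"))

# evaluate on another dataset and prefix the metric keys with "test"
test_metrics = trainer.evaluate(eval_dataset=other_dataset, metric_key_prefix="test")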
-
evaluation_loop
(dataloader: torch.utils.data.dataloader.DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → transformers.trainer_utils.EvalLoopOutput[source]¶ Prediction/evaluation loop, shared by
Trainer.evaluate()
andTrainer.predict()
.Works both with or without labels.
-
floating_point_ops
(inputs: Dict[str, Union[torch.Tensor, Any]])[source]¶ For models that inherit from
PreTrainedModel
, uses that method to compute the number of floating point operations for every backward + forward pass. If using another model, either implement such a method in the model or subclass and override this method.- Parameters
inputs (
Dict[str, Union[torch.Tensor, Any]]
) – The inputs and targets of the model.- Returns
The number of floating-point operations.
- Return type
int
-
get_eval_dataloader
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None) → torch.utils.data.dataloader.DataLoader[source]¶ Returns the evaluation
DataLoader
.Subclass and override this method if you want to inject some custom behavior.
- Parameters
eval_dataset (
torch.utils.data.dataset.Dataset
, optional) – If provided, will overrideself.eval_dataset
. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. It must implement__len__
.
-
get_test_dataloader
(test_dataset: torch.utils.data.dataset.Dataset) → torch.utils.data.dataloader.DataLoader[source]¶ Returns the test
DataLoader
.Subclass and override this method if you want to inject some custom behavior.
- Parameters
test_dataset (
torch.utils.data.dataset.Dataset
, optional) – The test dataset to use. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. It must implement__len__
.
-
get_train_dataloader
() → torch.utils.data.dataloader.DataLoader[source]¶ Returns the training
DataLoader
.Will use no sampler if
self.train_dataset
does not implement__len__
, a random sampler (adapted to distributed training if necessary) otherwise.Subclass and override this method if you want to inject some custom behavior.
-
hyperparameter_search
(hp_space: Optional[Callable[optuna.Trial, Dict[str, float]]] = None, compute_objective: Optional[Callable[Dict[str, float], float]] = None, n_trials: int = 20, direction: str = 'minimize', backend: Optional[Union[str, transformers.trainer_utils.HPSearchBackend]] = None, hp_name: Optional[Callable[optuna.Trial, str]] = None, **kwargs) → transformers.trainer_utils.BestRun[source]¶ Launch an hyperparameter search using
optuna
orRay Tune
. The optimized quantity is determined bycompute_objective
, which defaults to a function returning the evaluation loss when no metric is provided, the sum of all metrics otherwise.Warning
To use this method, you need to have provided a
model_init
when initializing yourTrainer
: we need to reinitialize the model at each new run. This is incompatible with theoptimizers
argument, so you need to subclassTrainer
and override the methodcreate_optimizer_and_scheduler()
for custom optimizer/scheduler.- Parameters
hp_space (
Callable[["optuna.Trial"], Dict[str, float]]
, optional) – A function that defines the hyperparameter search space. Will default todefault_hp_space_optuna()
ordefault_hp_space_ray()
depending on your backend.compute_objective (
Callable[[Dict[str, float]], float]
, optional) – A function computing the objective to minimize or maximize from the metrics returned by theevaluate
method. Will default todefault_compute_objective()
.n_trials (
int
, optional, defaults to 20) – The number of trial runs to test. direction (
str
, optional, defaults to"minimize"
) – Whether to optimize greater or lower objective values. Can be "minimize"
or"maximize"
, you should pick"minimize"
when optimizing the validation loss,"maximize"
when optimizing one or several metrics.backend (
str
orHPSearchBackend
, optional) – The backend to use for hyperparameter search. Will default to optuna or Ray Tune, depending on which one is installed. If both are installed, will default to optuna.kwargs –
Additional keyword arguments passed along to
optuna.create_study
orray.tune.run
. For more information see:the documentation of optuna.create_study
the documentation of tune.run
- Returns
All the information about the best run.
- Return type
transformers.trainer_utils.BestRun
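A minimal sketch with the optuna backend; the search space below is illustrative, and a model_init must have been passed when creating the Trainer:

def my_hp_space(trial):
    # illustrative ranges, adjust for your task
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
        "seed": trial.suggest_int("seed", 1, 40),
    }

best_run = trainer.hyperparameter_search(
    hp_space=my_hp_space,
    backend="optuna",
    n_trials=20,
    direction="minimize",   # minimize the evaluation loss
)
print(best_run.hyperparameters)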
-
is_local_process_zero
() → bool[source]¶ Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process.
-
is_world_process_zero
() → bool[source]¶ Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be
True
for one process).
-
log
(logs: Dict[str, float]) → None[source]¶ Log
logs
on the various objects watching training.Subclass and override this method to inject custom behavior.
- Parameters
logs (
Dict[str, float]
) – The values to log.
-
log_metrics
(split, metrics)¶ Log metrics in a specially formatted way
Under distributed environment this is done only for a process with rank 0.
- Parameters
split (
str
) – Mode/split name: one oftrain
,eval
,test
metrics (
Dict[str, float]
) – The metrics returned from train/evaluate/predict
Notes on memory reports:
In order to get memory usage report you need to install
psutil
. You can do that withpip install psutil
.Now when this method is run, you will see a report that will include:
init_mem_cpu_alloc_delta = 1301MB
init_mem_cpu_peaked_delta = 154MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 1345MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 693MB
train_mem_gpu_peaked_delta = 7MB
Understanding the reports:
the first segment, e.g.,
train__
, tells you which stage the metrics are for. Reports starting withinit_
will be added to the first stage that gets run. So that if only evaluation is run, the memory usage for the__init__
will be reported along with theeval_
metrics.the third segment, is either
cpu
orgpu
, tells you whether it’s the general RAM or the gpu0 memory metric.*_alloc_delta
- is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated.*_peaked_delta
- is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative. When you look at the metrics of any stage you add upalloc_delta
+peaked_delta
and you know how much memory was needed to complete that stage.
The reporting happens only for process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of work, but it could be not quite so if model parallel is used and then other GPUs may use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUS. Perhaps in the future these reports will evolve to measure those too.
The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory which is unique to the process and the memory shared with other processes. It is important to note that it does not include swapped out memory, so the reports could be imprecise.
The CPU peak memory is measured using a sampling thread. Due to python’s GIL it may miss some of the peak memory if that thread didn’t get a chance to run when the highest memory was used. Therefore this report can be less than reality. Using
tracemalloc
would have reported the exact peak memory, but it doesn’t report memory allocations outside of python. So if some C++ CUDA extension allocated its own memory it won’t be reported. And therefore it was dropped in favor of the memory sampling approach, which reads the current process memory usage.The GPU allocated and peak memory reporting is done with
torch.cuda.memory_allocated()
andtorch.cuda.max_memory_allocated()
. This metric reports only “deltas” for pytorch-specific allocations, astorch.cuda
memory management system doesn’t track any memory allocated outside of pytorch. For example, the very first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.Note that this tracker doesn’t account for memory allocations outside of
Trainer
’s__init__
,train
,evaluate
andpredict
calls.Because
evaluation
calls may happen duringtrain
, we can’t handle nested invocations becausetorch.cuda.max_memory_allocated
is a single counter, so if it gets reset by a nested eval call,train
’s tracker will report incorrect info. If this pytorch issue gets resolved it will be possible to change this class to be re-entrant. Until then we will only track the outer level oftrain
,evaluate
andpredict
methods. Which means that ifeval
is called duringtrain
, it’s the latter that will account for its memory usage and that of the former.This also means that if any other tool that is used along the
Trainer
callstorch.cuda.reset_peak_memory_stats
, the gpu peak memory stats could be invalid. And theTrainer
will disrupt the normal behavior of any such tools that rely on callingtorch.cuda.reset_peak_memory_stats
themselves.For best performance you may want to consider turning the memory profiling off for production runs.
-
metrics_format
(metrics: Dict[str, float]) → Dict[str, float]¶ Reformat Trainer metrics values to a human-readable format
- Parameters
metrics (
Dict[str, float]
) – The metrics returned from train/evaluate/predict- Returns
The reformatted metrics
- Return type
metrics (
Dict[str, float]
)
-
num_examples
(dataloader: torch.utils.data.dataloader.DataLoader) → int[source]¶ Helper to get number of samples in a
DataLoader
by accessing its dataset.Will raise an exception if the underlying dataset does not implement method
__len__
-
pop_callback
(callback)[source]¶ Remove a callback from the current list of
TrainerCallback
and returns it.If the callback is not found, returns
None
(and no error is raised).- Parameters
callback (
type
orTrainerCallback
) – ATrainerCallback
class or an instance of aTrainerCallback
. In the first case, will pop the first member of that class found in the list of callbacks.- Returns
The callback removed, if found.
- Return type
TrainerCallback
-
predict
(test_dataset: torch.utils.data.dataset.Dataset, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'test') → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in
evaluate()
.- Parameters
test_dataset (
Dataset
) – Dataset to run the predictions on. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. Has to implement the method__len__
ignore_keys (
List[str]
, optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.metric_key_prefix (
str
, optional, defaults to"test"
) – An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “test_bleu” if the prefix is “test” (default)
Note
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (
np.ndarray
): The predictions ontest_dataset
.label_ids (
np.ndarray
, optional): The labels (if the dataset contained some).metrics (
Dict[str, float]
, optional): The potential dictionary of metrics (if the dataset contained labels).
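A short sketch of consuming the output of predict; taking the argmax of the logits is appropriate for a classification head and is only an example:

import numpy as np

output = trainer.predict(test_dataset)            # test_dataset is a placeholder
preds = np.argmax(output.predictions, axis=-1)    # per-example class predictions

if output.metrics is not None:
    print(output.metrics)                         # e.g. test_loss (and any computed metrics)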
-
prediction_loop
(dataloader: torch.utils.data.dataloader.DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → transformers.trainer_utils.PredictionOutput[source]¶ Prediction/evaluation loop, shared by
Trainer.evaluate()
andTrainer.predict()
.Works both with or without labels.
-
prediction_step
(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[List[str]] = None) → Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]][source]¶ Perform an evaluation step on
model
using inputs. Subclass and override to inject custom behavior.
- Parameters
model (
nn.Module
) – The model to evaluate.inputs (
Dict[str, Union[torch.Tensor, Any]]
) –The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument
labels
. Check your model’s documentation for all accepted arguments.prediction_loss_only (
bool
) – Whether or not to return the loss only.ignore_keys (
List[str]
, optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
- Returns
A tuple with the loss, logits and labels (each being optional).
- Return type
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
-
push_to_hub
(repo_name: Optional[str] = None, repo_url: Optional[str] = None, commit_message: Optional[str] = 'add model', organization: Optional[str] = None, private: bool = None, use_auth_token: Optional[Union[bool, str]] = None, **kwargs)[source]¶ Upload self.model to the 🤗 model hub.
- Parameters
repo_name (
str
, optional) – Repository name for your model or tokenizer in the hub. If not specified andrepo_url
is not specified either, will default to the stem ofself.args.output_dir
.repo_url (
str
, optional) – Specify this in case you want to push to an existing repository in the hub. If unspecified, a new repository will be created in your namespace (unless you specify anorganization
) withrepo_name
.commit_message (
str
, optional, defaults to"add model"
) – Message to commit while pushing.organization (
str
, optional) – Organization in which you want to push your model or tokenizer (you must be a member of this organization).private (
bool
, optional) – Whether or not the repository created should be private (requires a paying subscription).use_auth_token (
bool
orstr
, optional) – The token to use as HTTP bearer authorization for remote files. IfTrue
, will use the token generated when runningtransformers-cli login
(stored inhuggingface
). Will default toTrue
ifrepo_url
is not specified.kwargs – Additional keyword arguments passed along to
create_model_card()
.
- Returns
The url of the commit of your model in the given repository.
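A sketch using the parameters documented above (the repository name is a placeholder):

url = trainer.push_to_hub(
    repo_name="my-finetuned-model",         # hypothetical repository name
    commit_message="add fine-tuned model",
)
print(url)                                  # URL of the commit in the repository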
-
remove_callback
(callback)[source]¶ Remove a callback from the current list of
TrainerCallback
.- Parameters
callback (
type
orTrainerCallback
) – ATrainerCallback
class or an instance of aTrainerCallback
. In the first case, will remove the first member of that class found in the list of callbacks.
-
save_metrics
(split, metrics, combined=True)¶ Save metrics into a json file for that split, e.g.
train_results.json
.Under distributed environment this is done only for a process with rank 0.
- Parameters
split (
str
) – Mode/split name: one oftrain
,eval
,test
,all
metrics (
Dict[str, float]
) – The metrics returned from train/evaluate/predictcombined (
bool
, optional, defaults toTrue
) – Creates combined metrics by updatingall_results.json
with metrics of this call
To understand the metrics please read the docstring of
log_metrics()
. The only difference is that raw unformatted numbers are saved in the current method.
-
save_model
(output_dir: Optional[str] = None)[source]¶ Will save the model, so you can reload it using
from_pretrained()
.Will only save from the main process.
-
save_state
()¶ Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model
Under distributed environment this is done only for a process with rank 0.
-
train
(resume_from_checkpoint: Optional[Union[bool, str]] = None, trial: Union[optuna.Trial, Dict[str, Any]] = None, **kwargs)[source]¶ Main training entry point.
- Parameters
resume_from_checkpoint (
str
orbool
, optional) – If astr
, local path to a saved checkpoint as saved by a previous instance ofTrainer
. If abool
and equals True, load the last checkpoint in args.output_dir as saved by a previous instance ofTrainer
. If present, training will resume from the model/optimizer/scheduler states loaded here.trial (
optuna.Trial
orDict[str, Any]
, optional) – The trial run or the hyperparameter dictionary for hyperparameter search.kwargs – Additional keyword arguments used to hide deprecated arguments
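For example, resuming an interrupted run (the explicit checkpoint path is a placeholder):

# resume from the last checkpoint saved in args.output_dir
trainer.train(resume_from_checkpoint=True)

# or resume from a specific checkpoint directory
trainer.train(resume_from_checkpoint="output/checkpoint-500")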
-
training_step
(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) → torch.Tensor[source]¶ Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
- Parameters
model (
nn.Module
) – The model to train.inputs (
Dict[str, Union[torch.Tensor, Any]]
) –The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument
labels
. Check your model’s documentation for all accepted arguments.
- Returns
The tensor with training loss on this batch.
- Return type
torch.Tensor
Seq2SeqTrainer¶
-
class
transformers.
Seq2SeqTrainer
(model: torch.nn.modules.module.Module = None, args: transformers.training_args.TrainingArguments = None, data_collator: Optional[NewType.<locals>.new_type] = None, train_dataset: Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, tokenizer: Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: Callable[transformers.modeling_utils.PreTrainedModel] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, callbacks: Optional[List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None))[source]¶ -
evaluate
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval', max_length: Optional[int] = None, num_beams: Optional[int] = None) → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init
compute_metrics
argument).You can also subclass and override this method to inject custom behavior.
- Parameters
eval_dataset (
Dataset
, optional) – Pass a dataset if you wish to overrideself.eval_dataset
. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. It must implement the__len__
method.ignore_keys (
List[str]
, optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.metric_key_prefix (
str
, optional, defaults to"eval"
) – An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is"eval"
(default)max_length (
int
, optional) – The maximum target length to use when predicting with the generate method.num_beams (
int
, optional) – Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state.
-
predict
(test_dataset: torch.utils.data.dataset.Dataset, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval', max_length: Optional[int] = None, num_beams: Optional[int] = None) → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in
evaluate()
.- Parameters
test_dataset (
Dataset
) – Dataset to run the predictions on. If it is a datasets.Dataset
, columns not accepted by themodel.forward()
method are automatically removed. Has to implement the method__len__
ignore_keys (
List[str]
, optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.metric_key_prefix (
str
, optional, defaults to"eval"
) – An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is"eval"
(default)max_length (
int
, optional) – The maximum target length to use when predicting with the generate method.num_beams (
int
, optional) – Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
Note
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (
np.ndarray
): The predictions ontest_dataset
.label_ids (
np.ndarray
, optional): The labels (if the dataset contained some).metrics (
Dict[str, float]
, optional): The potential dictionary of metrics (if the dataset contained labels).
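A sketch of evaluating and predicting with generation-specific arguments; the trainer and dataset are placeholders, and metrics computed on generated tokens typically require predict_with_generate=True in Seq2SeqTrainingArguments:

metrics = trainer.evaluate(max_length=128, num_beams=4)

output = trainer.predict(
    test_dataset,
    metric_key_prefix="test",
    max_length=128,
    num_beams=4,
)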
-
TFTrainer¶
-
class
transformers.
TFTrainer
(model: transformers.modeling_tf_utils.TFPreTrainedModel, args: transformers.training_args_tf.TFTrainingArguments, train_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None, eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, tb_writer: Optional[tensorflow.python.ops.summary_ops_v2.SummaryWriter] = None, optimizers: Tuple[tensorflow.python.keras.optimizer_v2.optimizer_v2.OptimizerV2, tensorflow.python.keras.optimizer_v2.learning_rate_schedule.LearningRateSchedule] = None, None)[source]¶ TFTrainer is a simple but feature-complete training and eval loop for TensorFlow, optimized for 🤗 Transformers.
- Parameters
model (
TFPreTrainedModel
) – The model to train, evaluate or use for predictions.args (
TFTrainingArguments
) – The arguments to tweak training.train_dataset (
Dataset
, optional) – The dataset to use for training. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.eval_dataset (
Dataset
, optional) – The dataset to use for evaluation. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.compute_metrics (
Callable[[EvalPrediction], Dict]
, optional) – The function that will be used to compute metrics at evaluation. Must take an EvalPrediction
and return a dictionary mapping strings to metric values.tb_writer (
tf.summary.SummaryWriter
, optional) – Object to write to TensorBoard.optimizers (
Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule]
, optional) – A tuple containing the optimizer and the scheduler to use. The optimizer will default to an instance of tf.keras.optimizers.Adam
ifargs.weight_decay_rate
is 0 else an instance ofAdamWeightDecay
. The scheduler will default to an instance oftf.keras.optimizers.schedules.PolynomialDecay
ifargs.num_warmup_steps
is 0 else an instance ofWarmUp
.
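A rough sketch of wiring a TFTrainer with a tf.data dataset of (features, labels) tuples; the model, encodings and labels are placeholders:

import tensorflow as tf
from transformers import TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

# the dataset must yield (features, labels) tuples
train_dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels))

trainer = TFTrainer(
    model=model,                # a TFPreTrainedModel
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()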
-
create_optimizer_and_scheduler
(num_training_steps: int)[source]¶ Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the TFTrainer’s init through
optimizers
, or subclass and override this method.
-
evaluate
(eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init
compute_metrics
argument).- Parameters
eval_dataset (
Dataset
, optional) – Pass a dataset if you wish to overrideself.eval_dataset
. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions.
-
get_eval_tfdataset
(eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns the evaluation
Dataset
.- Parameters
eval_dataset (
Dataset
, optional) – If provided, will override self.eval_dataset. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.
Subclass and override this method if you want to inject some custom behavior.
-
get_test_tfdataset
(test_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2) → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns a test
Dataset
.- Parameters
test_dataset (
Dataset
) – The dataset to use. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.
Subclass and override this method if you want to inject some custom behavior.
-
get_train_tfdataset
() → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns the training
Dataset
.Subclass and override this method if you want to inject some custom behavior.
-
log
(logs: Dict[str, float]) → None[source]¶ Log
logs
on the various objects watching training.Subclass and override this method to inject custom behavior.
- Parameters
logs (
Dict[str, float]
) – The values to log.
-
predict
(test_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2) → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in
evaluate()
.- Parameters
test_dataset (
Dataset
) – Dataset to run the predictions on. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
Returns: NamedTuple A namedtuple with the following keys:
predictions (
np.ndarray
): The predictions ontest_dataset
.label_ids (
np.ndarray
, optional): The labels (if the dataset contained some).metrics (
Dict[str, float]
, optional): The potential dictionary of metrics (if the dataset contained labels).
-
prediction_loop
(dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2, steps: int, num_examples: int, description: str, prediction_loss_only: Optional[bool] = None) → transformers.trainer_utils.PredictionOutput[source]¶ Prediction/evaluation loop, shared by
evaluate()
andpredict()
.Works both with or without labels.
-
prediction_step
(features: tensorflow.python.framework.ops.Tensor, labels: tensorflow.python.framework.ops.Tensor, nb_instances_in_global_batch: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]¶ Compute the prediction on features and update the loss with labels.
Subclass and override to inject some custom behavior.
-
run_model
(features, labels, training)[source]¶ Computes the loss of the given features and labels pair.
Subclass and override this method if you want to inject some custom behavior.
- Parameters
features (
tf.Tensor
) – A batch of input features.labels (
tf.Tensor
) – A batch of labels.training (
bool
) – Whether or not to run the model in training mode.
- Returns
The loss and logits.
- Return type
A tuple of two
tf.Tensor
-
save_model
(output_dir: Optional[str] = None)[source]¶ Will save the model, so you can reload it using
from_pretrained()
.
-
setup_comet
()[source]¶ Setup the optional Comet.ml integration.
- Environment:
- COMET_MODE:
(Optional): str - “OFFLINE”, “ONLINE”, or “DISABLED”
- COMET_PROJECT_NAME:
(Optional): str - Comet.ml project name for experiments
- COMET_OFFLINE_DIRECTORY:
(Optional): str - folder to use for saving offline experiments when COMET_MODE is “OFFLINE”
For a number of configurable items in the environment, see here
-
setup_wandb
()[source]¶ Setup the optional Weights & Biases (wandb) integration.
One can subclass and override this method to customize the setup if needed. Find more information here. You can also override the following environment variables:
- Environment:
- WANDB_PROJECT:
(Optional): str - “huggingface” by default, set this to a custom string to store results in a different project.
- WANDB_DISABLED:
(Optional): boolean - defaults to false, set to “true” to disable wandb entirely.
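For example, these variables can be set from Python before the trainer is created (the values are illustrative):

import os

os.environ["WANDB_PROJECT"] = "my-project"          # report runs to a custom W&B project
# os.environ["WANDB_DISABLED"] = "true"             # or disable wandb reporting entirely
os.environ["COMET_MODE"] = "OFFLINE"                # run Comet.ml in offline mode
os.environ["COMET_OFFLINE_DIRECTORY"] = "./comet"   # where offline experiments are saved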
TrainingArguments¶
-
class
transformers.
TrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: str = '', dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: Optional[str] = None, log_on_each_node: bool = True, mp_parameters: str = '')[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
prediction_loss_only (
bool
, optional, defaults to False) – When performing evaluation and generating predictions, only returns the loss.per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps (
int
, optional, defaults to 1) – Number of update steps to accumulate the gradients for, before performing a backward/update pass.
Warning
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every
gradient_accumulation_steps * xxx_step
training examples.eval_accumulation_steps (
int
, optional) – Number of prediction steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate forAdamW
optimizer.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights inAdamW
optimizer.adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for theAdamW
optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for theAdamW
optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for theAdamW
optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training).max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.lr_scheduler_type (
str
orSchedulerType
, optional, defaults to"linear"
) – The scheduler type to use. See the documentation ofSchedulerType
for all possible values.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of update steps before two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to avoid using CUDA even when it is available.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use themodel_init()
function to instantiate the model if it has some randomly initialized parameters.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.fp16_backend (
str
, optional, defaults to"auto"
) – The backend to use for mixed precision training. Must be one of"auto"
,"amp"
or"apex"
."auto"
will use AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.fp16_full_eval (
bool
, optional, defaults toFalse
) – Whether to use full 16-bit precision evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.local_rank (
int
, optional, defaults to -1) – Rank of the process during distributed training.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional) – Number of update steps between two evaluations ifevaluation_strategy="steps"
. Will default to the same value aslogging_steps
if not set.dataloader_num_workers (
int
, optional, defaults to 0) – Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, theTrainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.run_name (
str
, optional) – A descriptor for the run. Typically used for wandb logging.disable_tqdm (
bool
, optional) – Whether or not to disable the tqdm progress bars and table of metrics produced byNotebookTrainingTracker
in Jupyter Notebooks. Will default toTrue
if the logging level is set to warn or lower (default),False
otherwise.remove_unused_columns (
bool
, optional, defaults toTrue
) –If using
datasets.Dataset
datasets, whether or not to automatically remove the columns unused by the model forward method.(Note that this behavior is not implemented for
TFTrainer
yet.)label_names (
List[str]
, optional) –The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to
["labels"]
except if the model used is one of theXxxForQuestionAnswering
in which case it will default to["start_positions", "end_positions"]
.load_best_model_at_end (
bool
, optional, defaults toFalse
) –Whether or not to load the best model found during training at the end of training.
Note
When set to
True
, the parameterssave_strategy
andsave_steps
will be ignored and the model will be saved after each evaluation.metric_for_best_model (
str
, optional) –Use in conjunction with
load_best_model_at_end
to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix"eval_"
. Will default to"loss"
if unspecified andload_best_model_at_end=True
(to use the evaluation loss).If you set this value,
greater_is_better
will default toTrue
. Don’t forget to set it toFalse
if your metric is better when lower.greater_is_better (
bool
, optional) –Use in conjunction with
load_best_model_at_end
andmetric_for_best_model
to specify if better models should have a greater metric or not. Will default to:True
ifmetric_for_best_model
is set to a value that isn’t"loss"
or"eval_loss"
.False
ifmetric_for_best_model
is not set, or set to"loss"
or"eval_loss"
.
ignore_data_skip (
bool
, optional, defaults toFalse
) – When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set toTrue
, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.sharded_ddp (
bool
,str
or list ofShardedDDPOption
, optional, defaults toFalse
) –Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options along the following:
"simple"
: to use the first instance of sharded DDP released by fairscale (ShardedDDP
) similar to ZeRO-2."zero_dp_2"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-2 mode (withreshard_after_forward=False
)."zero_dp_3"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-3 mode (withreshard_after_forward=True
)."offload"
: to add ZeRO-offload (only compatible with"zero_dp_2"
and"zero_dp_3"
).
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for
False
and["simple"]
forTrue
.deepspeed (
str
ordict
, optional) – Use Deepspeed. This is an experimental feature and its API may evolve in the future. The value is either the location of DeepSpeed json config file (e.g.,ds_config.json
) or an already loaded json file as adict
label_smoothing_factor (
float
, optional, defaults to 0.0) – The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s tolabel_smoothing_factor/num_labels
and1 - label_smoothing_factor + label_smoothing_factor/num_labels
respectively.debug (
str
or list ofDebugOption
, optional, defaults to""
) –Enable one or more debug features. This is an experimental feature.
Possible options are:
"underflow_overflow"
: detects overflow in model’s input/outputs and reports the last frames that led to the event"tpu_metrics_debug"
: print debug metrics on TPU
The options should be separated by whitespaces.
adafactor (
bool
, optional, defaults toFalse
) – Whether or not to use theAdafactor
optimizer instead ofAdamW
.group_by_length (
bool
, optional, defaults toFalse
) – Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.length_column_name (
str
, optional, defaults to"length"
) – Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unlessgroup_by_length
isTrue
and the dataset is an instance ofDataset
.report_to (
str
orList[str]
, optional, defaults to"all"
) – The list of integrations to report the results and logs to. Supported platforms are"azure_ml"
,"comet_ml"
,"mlflow"
,"tensorboard"
and"wandb"
. Use"all"
to report to all integrations installed,"none"
for no integrations.ddp_find_unused_parameters (
bool
, optional) – When using distributed training, the value of the flagfind_unused_parameters
passed toDistributedDataParallel
. Will default toFalse
if gradient checkpointing is used,True
otherwise.dataloader_pin_memory (
bool
, optional, defaults toTrue
) – Whether you want to pin memory in data loaders or not. Will default toTrue
.skip_memory_metrics (
bool
, optional, defaults toTrue
) – Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed.push_to_hub (
bool
, optional, defaults toFalse
) – Whether or not to upload the trained model to the hub after training. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.resume_from_checkpoint (
str
, optional) – The path to a folder with a valid checkpoint for your model. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.log_on_each_node (
bool
, optional, defaults toTrue
) – In multinode distributed training, whether to log once per node, or only on the main node.
-
property
device
¶ The device used by this process.
-
property
eval_batch_size
¶ The actual batch size for evaluation (may differ from
per_gpu_eval_batch_size
in distributed training).
-
property
local_process_index
¶ The index of the local process used.
-
property
n_gpu
¶ The number of GPUs used by this process.
Note
This will only be greater than one when you have multiple GPUs available but are not using distributed training. For distributed training, it will always be 1.
-
property
parallel_mode
¶ The current mode used for parallelism if multiple GPUs/TPU cores are available. One of:
ParallelMode.NOT_PARALLEL
: no parallelism (CPU or one GPU).ParallelMode.NOT_DISTRIBUTED
: several GPUs in one single process (usestorch.nn.DataParallel
).ParallelMode.DISTRIBUTED
: several GPUs, each having its own process (usestorch.nn.DistributedDataParallel
).ParallelMode.TPU
: several TPU cores.
-
property
place_model_on_device
¶ Can be subclassed and overridden for some specific integrations.
-
property
process_index
¶ The index of the current process used.
-
property
should_log
¶ Whether or not the current process should produce logs.
-
to_dict
()[source]¶ Serializes this instance while replacing Enum members by their values (for JSON serialization support).
-
to_sanitized_dict
() → Dict[str, Any][source]¶ Sanitized serialization to use with TensorBoard’s hparams
-
property
train_batch_size
¶ The actual batch size for training (may differ from
per_gpu_train_batch_size
in distributed training).
-
property
world_size
¶ The number of processes used in parallel.
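Here is a minimal sketch of how several of the arguments documented above fit together; the output directory and the concrete hyperparameter values are placeholders chosen for illustration, not recommendations:
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",             # placeholder path
    evaluation_strategy="steps",
    eval_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,           # lower loss is better
    label_smoothing_factor=0.1,
    group_by_length=True,
    report_to=["tensorboard"],
)

# Properties resolve derived values from the raw arguments.
print(training_args.train_batch_size)  # effective per-process training batch size
print(training_args.world_size)        # number of processes used in parallel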
Seq2SeqTrainingArguments¶
-
class
transformers.
Seq2SeqTrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: str = '', dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: Optional[str] = None, log_on_each_node: bool = True, mp_parameters: str = '', sortish_sampler: bool = False, predict_with_generate: bool = False)[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
prediction_loss_only (
bool
, optional, defaults to False) – When performing evaluation and generating predictions, only returns the loss.per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps (
int
, optional, defaults to 1) –Number of update steps to accumulate the gradients for, before performing a backward/update pass.
Warning
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every
gradient_accumulation_steps * xxx_step
training examples.eval_accumulation_steps (
int
, optional) – Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate forAdamW
optimizer.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights inAdamW
optimizer.adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for theAdamW
optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for theAdamW
optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for theAdamW
optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training).max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.lr_scheduler_type (
str
orSchedulerType
, optional, defaults to"linear"
) – The scheduler type to use. See the documentation ofSchedulerType
for all possible values.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of update steps between two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to avoid using CUDA even when it is available.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use themodel_init()
function to instantiate the model if it has some randomly initialized parameters.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.fp16_backend (
str
, optional, defaults to"auto"
) – The backend to use for mixed precision training. Must be one of"auto"
,"amp"
or"apex"
."auto"
will use AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.fp16_full_eval (
bool
, optional, defaults toFalse
) – Whether to use full 16-bit precision evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.local_rank (
int
, optional, defaults to -1) – Rank of the process during distributed training.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional) – Number of update steps between two evaluations ifevaluation_strategy="steps"
. Will default to the same value aslogging_steps
if not set.dataloader_num_workers (
int
, optional, defaults to 0) – Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, theTrainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.run_name (
str
, optional) –A descriptor for the run. Typically used for wandb logging.
disable_tqdm (
bool
, optional) – Whether or not to disable the tqdm progress bars and table of metrics produced byNotebookTrainingTracker
in Jupyter Notebooks. Will default toTrue
if the logging level is set to warn or lower (default),False
otherwise.remove_unused_columns (
bool
, optional, defaults toTrue
) –If using
datasets.Dataset
datasets, whether or not to automatically remove the columns unused by the model forward method.(Note that this behavior is not implemented for
TFTrainer
yet.)label_names (
List[str]
, optional) –The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to
["labels"]
except if the model used is one of theXxxForQuestionAnswering
in which case it will default to["start_positions", "end_positions"]
.load_best_model_at_end (
bool
, optional, defaults toFalse
) –Whether or not to load the best model found during training at the end of training.
Note
When set to
True
, the parameterssave_strategy
andsave_steps
will be ignored and the model will be saved after each evaluation.metric_for_best_model (
str
, optional) –Use in conjunction with
load_best_model_at_end
to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix"eval_"
. Will default to"loss"
if unspecified andload_best_model_at_end=True
(to use the evaluation loss).If you set this value,
greater_is_better
will default toTrue
. Don’t forget to set it toFalse
if your metric is better when lower.greater_is_better (
bool
, optional) –Use in conjunction with
load_best_model_at_end
andmetric_for_best_model
to specify if better models should have a greater metric or not. Will default to:True
ifmetric_for_best_model
is set to a value that isn’t"loss"
or"eval_loss"
.False
ifmetric_for_best_model
is not set, or set to"loss"
or"eval_loss"
.
ignore_data_skip (
bool
, optional, defaults toFalse
) – When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set toTrue
, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.sharded_ddp (
bool
,str
or list ofShardedDDPOption
, optional, defaults toFalse
) –Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options along the following:
"simple"
: to use first instance of sharded DDP released by fairscale (ShardedDDP
) similar to ZeRO-2."zero_dp_2"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-2 mode (withreshard_after_forward=False
)."zero_dp_3"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-3 mode (withreshard_after_forward=True
)."offload"
: to add ZeRO-offload (only compatible with"zero_dp_2"
and"zero_dp_3"
).
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for
False
and["simple"]
forTrue
.deepspeed (
str
ordict
, optional) – Use DeepSpeed. This is an experimental feature and its API may evolve in the future. The value is either the location of the DeepSpeed JSON config file (e.g.,ds_config.json
) or an already loaded JSON file as a dict
.label_smoothing_factor (
float
, optional, defaults to 0.0) – The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s tolabel_smoothing_factor/num_labels
and1 - label_smoothing_factor + label_smoothing_factor/num_labels
respectively.debug (
str
or list ofDebugOption
, optional, defaults to""
) –Enable one or more debug features. This is an experimental feature.
Possible options are:
"underflow_overflow"
: detects overflow in model’s input/outputs and reports the last frames that led to the event"tpu_metrics_debug"
: print debug metrics on TPU
The options should be separated by whitespace.
adafactor (
bool
, optional, defaults toFalse
) – Whether or not to use theAdafactor
optimizer instead ofAdamW
.group_by_length (
bool
, optional, defaults toFalse
) – Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.length_column_name (
str
, optional, defaults to"length"
) – Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unlessgroup_by_length
isTrue
and the dataset is an instance ofDataset
.report_to (
str
orList[str]
, optional, defaults to"all"
) – The list of integrations to report the results and logs to. Supported platforms are"azure_ml"
,"comet_ml"
,"mlflow"
,"tensorboard"
and"wandb"
. Use"all"
to report to all integrations installed,"none"
for no integrations.ddp_find_unused_parameters (
bool
, optional) – When using distributed training, the value of the flagfind_unused_parameters
passed toDistributedDataParallel
. Will default toFalse
if gradient checkpointing is used,True
otherwise.dataloader_pin_memory (
bool
, optional, defaults toTrue
) – Whether you want to pin memory in data loaders or not. Will default toTrue
.skip_memory_metrics (
bool
, optional, defaults toTrue
) – Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed.push_to_hub (
bool
, optional, defaults toFalse
) – Whether or not to upload the trained model to the hub after training. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.resume_from_checkpoint (
str
, optional) – The path to a folder with a valid checkpoint for your model. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.log_on_each_node (
bool
, optional, defaults toTrue
) – In multinode distributed training, whether to log once per node, or only on the main node.
- sortish_sampler (
bool
, optional, defaults toFalse
): Whether to use a sortish sampler or not. Only possible if the underlying datasets are Seq2SeqDataset for now but will become generally available in the near future.
It sorts the inputs according to lengths in order to minimize the padding size, with a bit of randomness for the training set.
- predict_with_generate (
bool
, optional, defaults toFalse
): Whether to use generate to calculate generative metrics (ROUGE, BLEU).
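Here is a minimal sketch showing the two seq2seq-specific flags documented above; the output directory and batch size are placeholder values:
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./seq2seq_output",   # placeholder path
    per_device_train_batch_size=4,
    sortish_sampler=True,            # sort (mostly) by length to reduce padding
    predict_with_generate=True,      # compute generative metrics (ROUGE, BLEU) with generate
)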
TFTrainingArguments¶
-
class
transformers.
TFTrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: str = '', dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: Optional[str] = None, log_on_each_node: bool = True, mp_parameters: str = '', tpu_name: str = None, tpu_zone: str = None, gcp_project: str = None, poly_power: float = 1.0, xla: bool = False)[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps –
(
int
, optional, defaults to 1): Number of update steps to accumulate the gradients for, before performing a backward/update pass.Warning
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every
gradient_accumulation_steps * xxx_step
training examples.learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate for Adam.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero).adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for the Adam optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for the Adam optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for the Adam optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform.max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of update steps between two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to avoid using CUDA even when it is available.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training (through NVIDIA Apex) instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.local_rank (
int
, optional, defaults to -1) – During distributed training, the rank of the process.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).debug (
bool
, optional, defaults toFalse
) – Whether to activate the trace to record computation graphs and profiling information or not.dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional, defaults to 1000) – Number of update steps between two evaluations.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, theTrainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.tpu_name (
str
, optional) – The name of the TPU the process is running on.tpu_zone (
str
, optional) – The zone of the TPU the process is running on. If not specified, we will attempt to automatically detect from metadata.gcp_project (
str
, optional) – Google Cloud Project name for the Cloud TPU-enabled project. If not specified, we will attempt to automatically detect from metadata.run_name (
str
, optional) – A descriptor for the run. Notably used for wandb logging.xla (
bool
, optional) – Whether to activate the XLA compilation or not.
-
property
eval_batch_size
¶ The actual batch size for evaluation (may differ from
per_gpu_eval_batch_size
in distributed training).
-
property
n_gpu
¶ The number of replicas (CPUs, GPUs or TPU cores) used in this training.
-
property
n_replicas
¶ The number of replicas (CPUs, GPUs or TPU cores) used in this training.
-
property
strategy
¶ The strategy used for distributed training.
-
property
train_batch_size
¶ The actual batch size for training (may differ from
per_gpu_train_batch_size
in distributed training).
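Here is a minimal sketch of creating TFTrainingArguments and using the strategy property; the output directory and hyperparameters are placeholders, and the model-building step is left as a stub:
from transformers import TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./tf_output",        # placeholder path
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

# The strategy property returns the tf.distribute.Strategy selected from the arguments;
# the model passed to TFTrainer should usually be created inside its scope.
with training_args.strategy.scope():
    pass  # build the TF model here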
Randomness¶
When resuming from a checkpoint generated by Trainer
all efforts are made to restore the
python, numpy and pytorch RNG states to the same states as they were at the moment of saving that checkpoint,
which should make the “stop and resume” style of training as close as possible to non-stop training.
However, due to various default non-deterministic pytorch settings this might not fully work. If you want full
determinism please refer to Controlling sources of randomness. As explained in that document, some of those settings
that make things deterministic (e.g., torch.backends.cudnn.deterministic
) may slow things down, therefore this
can’t be done by default, but you can enable those yourself if needed.
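For example, here is a minimal sketch of opting into those settings yourself (the seed value is arbitrary):
import torch
from transformers import set_seed

# Fix the python, numpy and pytorch RNG seeds before building the model and the Trainer.
set_seed(42)

# Extra determinism switches mentioned above; they may slow training down, which is why
# they are not enabled by default.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False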
Trainer Integrations¶
The Trainer
has been extended to support libraries that may dramatically improve your training
time and fit much bigger models.
Currently it supports third party solutions, DeepSpeed and FairScale, which implement parts of the paper ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He.
This provided support is new and experimental as of this writing.
CUDA Extension Installation Notes¶
As of this writing, both FairScale and DeepSpeed require compilation of CUDA C++ code before they can be used.
While all installation issues should be dealt with through the corresponding GitHub Issues of FairScale and DeepSpeed, there are a few common issues that one may encounter while building any PyTorch extension that needs to compile CUDA code.
Therefore, if you encounter a CUDA-related build issue while doing one of the following or both:
pip install fairscale
pip install deepspeed
please read the following notes first.
In these notes we give examples for what to do when pytorch
has been built with CUDA 10.2
. If your situation is
different remember to adjust the version number to the one you are after.
Possible problem #1¶
While PyTorch comes with its own CUDA toolkit, to build these two projects you must have an identical version of CUDA installed system-wide.
For example, if you installed pytorch
with cudatoolkit==10.2
in the Python environment, you also need to have
CUDA 10.2
installed system-wide.
The exact location may vary from system to system, but /usr/local/cuda-10.2
is the most common location on many
Unix systems. When CUDA is correctly set up and added to the PATH
environment variable, one can find the
installation location by doing:
which nvcc
If you don’t have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you’re on Ubuntu you may want to search for: ubuntu cuda 10.2 install.
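If you prefer to check from Python, here is a small sketch that compares the CUDA version PyTorch was built with against the system-wide toolkit reported by nvcc (it assumes nvcc is on your PATH):
import subprocess
import torch

# CUDA version PyTorch was compiled against (e.g. "10.2"; None for CPU-only builds)
print("torch built with CUDA:", torch.version.cuda)

# CUDA version of the system-wide toolkit
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)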
Possible problem #2¶
Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you may have:
/usr/local/cuda-10.2
/usr/local/cuda-11.0
Now, in this situation you need to make sure that your PATH
and LD_LIBRARY_PATH
environment variables contain
the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the
last version was installed. If you encounter the problem where the package build fails because it can’t find the right
CUDA version despite you having it installed system-wide, it means that you need to adjust the 2 aforementioned
environment variables.
First, you may look at their contents:
echo $PATH
echo $LD_LIBRARY_PATH
so you get an idea of what is inside.
It’s possible that LD_LIBRARY_PATH
is empty.
PATH
lists the locations where executables can be found and LD_LIBRARY_PATH
is where shared libraries
are looked for. In both cases, earlier entries have priority over later ones. :
is used to separate multiple
entries.
Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths to be listed first by doing:
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
Note that we aren’t overwriting the existing values, but prepending instead.
Of course, adjust the version number and the full path if need be. Check that the directories you assign actually do
exist. The lib64
sub-directory is where the various CUDA .so
objects like libcudart.so
reside. It’s unlikely that your system will have it named differently, but if it does, adjust it to reflect your reality.
Possible problem #3¶
Some older CUDA versions may refuse to build with newer compilers. For example, you may have gcc-9
but it wants
gcc-7
.
There are various ways to go about it.
If you can install the latest CUDA toolkit, it typically should support the newer compiler.
Alternatively, you could install the lower version of the compiler in addition to the one you already have, or you may
already have it but it’s not the default one, so the build system can’t see it. If you have gcc-7
installed but the
build system complains it can’t find it, the following might do the trick:
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
Here, we are making a symlink to gcc-7
from /usr/local/cuda-10.2/bin/gcc
and since
/usr/local/cuda-10.2/bin/
should be in the PATH
environment variable (see the previous problem’s solution), it
should find gcc-7
(and g++7
) and then the build will succeed.
As always make sure to edit the paths in the example to match your situation.
FairScale¶
By integrating FairScale the Trainer
provides support for the following features from the ZeRO paper:
Optimizer State Sharding
Gradient Sharding
Model Parameters Sharding (new and very experimental)
CPU offload (new and very experimental)
You will need at least two GPUs to use this feature.
Installation:
Install the library via pypi:
pip install fairscale
or via transformers
’ extras
:
pip install transformers[fairscale]
(will become available starting from transformers==4.6.0
)
or find more details on the FairScale’s GitHub page.
If you’re still struggling with the build, first make sure to read CUDA Extension Installation Notes.
If that still hasn’t resolved the build issue, here are a few more ideas.
fairscale
seems to have an issue with the build isolation feature recently introduced by pip. If you have a problem
with it, you may want to try one of:
pip install fairscale --no-build-isolation .
or:
git clone https://github.com/facebookresearch/fairscale/
cd fairscale
rm -r dist build
python setup.py bdist_wheel
pip uninstall -y fairscale
pip install dist/fairscale-*.whl
fairscale
also has issues with building against pytorch-nightly, so if you use it you may have to try one of:
pip uninstall -y fairscale; pip install fairscale --pre \
-f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html \
--no-cache --no-build-isolation
or:
pip install -v --disable-pip-version-check . \
-f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html --pre
Of course, adjust the urls to match the cuda version you use.
If after trying everything suggested you still encounter build issues, please proceed with the GitHub Issue of FairScale.
Usage:
To use the first version of Sharded data-parallelism, add --sharded_ddp simple
to the command line arguments, and
make sure you have added the distributed launcher -m torch.distributed.launch
--nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE
if you haven’t been using it already.
For example here is how you could use it for run_translation.py
with 2 GPUs:
python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--fp16 --sharded_ddp simple
Notes:
This feature requires distributed training (so multiple GPUs).
It is not implemented for TPUs.
It works with
--fp16
too, to make things even faster.One of the main benefits of enabling
--sharded_ddp simple
is that it uses a lot less GPU memory, so you should be able to use significantly larger batch sizes using the same hardware (e.g. 3x and even bigger) which should lead to significantly shorter training time.
To use the second version of Sharded data-parallelism, add
--sharded_ddp zero_dp_2
or--sharded_ddp zero_dp_3
to the command line arguments, and make sure you have added the distributed launcher-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE
if you haven’t been using it already.
For example here is how you could use it for run_translation.py
with 2 GPUs:
python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--fp16 --sharded_ddp zero_dp_2
zero_dp_2
is an optimized version of the simple wrapper, while zero_dp_3
fully shards model weights,
gradients and optimizer states.
Both are compatible with adding cpu_offload
to enable ZeRO-offload (activate it like this: --sharded_ddp
"zero_dp_2 cpu_offload"
).
Notes:
This feature requires distributed training (so multiple GPUs).
It is not implemented for TPUs.
It works with
--fp16
too, to make things even faster.The
cpu_offload
additional option requires--fp16
.This is an area of active development, so make sure you have a source install of fairscale to use this feature as some bugs you encounter may have been fixed there already.
Known caveats:
This feature is incompatible with
--predict_with_generate
in the run_translation.py script.Using
--sharded_ddp zero_dp_3
requires wrapping each layer of the model in the special containerFullyShardedDataParallel
of fairscale. It should be used with the optionauto_wrap
if you are not doing this yourself:--sharded_ddp "zero_dp_3 auto_wrap"
.
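The same options can also be set programmatically instead of on the command line. Here is a sketch mirroring the CLI examples above; the output directory is a placeholder, and the script still has to be launched with python -m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE:
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",                # placeholder path
    fp16=True,
    sharded_ddp="zero_dp_2 cpu_offload",  # split on space, mirrors --sharded_ddp "zero_dp_2 cpu_offload"
)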
DeepSpeed¶
Moved to Trainer Deepspeed Integration.
Installation¶
Moved to Installation.
Deployment with multiple GPUs¶
Moved to Deployment with multiple GPUs.
Deployment with one GPU¶
Moved to Deployment with one GPU.
Deployment in Notebooks¶
Moved to Deployment in Notebooks.
Configuration¶
Moved to Configuration.
Passing Configuration¶
Moved to Passing Configuration.
NVMe Support¶
Moved to NVMe Support.
ZeRO-2 vs ZeRO-3 Performance¶
Moved to ZeRO-2 vs ZeRO-3 Performance.
ZeRO-2 Example¶
Moved to ZeRO-2 Example.
ZeRO-3 Example¶
Moved to ZeRO-3 Example.
fp32 Precision¶
Moved to fp32 Precision.
Automatic Mixed Precision¶
Moved to Automatic Mixed Precision.
Batch Size¶
Moved to Batch Size.
Gradient Accumulation¶
Moved to Gradient Accumulation.
Gradient Clipping¶
Moved to Gradient Clipping.
Getting The Model Weights Out¶
Moved to Getting The Model Weights Out.