Trainer¶
The Trainer and TFTrainer classes provide an API for feature-complete training in most standard use cases. It is used in most of the example scripts.
Before instantiating your Trainer/TFTrainer, create a TrainingArguments/TFTrainingArguments to access all the points of customization during training.
The API supports distributed training on multiple GPUs/TPUs, mixed precision through NVIDIA Apex and Native AMP for PyTorch and tf.keras.mixed_precision for TensorFlow.
Both Trainer and TFTrainer contain the basic training loop which supports the above features. To inject custom behavior you can subclass them and override the following methods:
get_train_dataloader/get_train_tfdataset – Creates the training DataLoader (PyTorch) or TF Dataset.
get_eval_dataloader/get_eval_tfdataset – Creates the evaluation DataLoader (PyTorch) or TF Dataset.
get_test_dataloader/get_test_tfdataset – Creates the test DataLoader (PyTorch) or TF Dataset.
log – Logs information on the various objects watching training.
create_optimizer_and_scheduler – Sets up the optimizer and learning rate scheduler if they were not passed at init. Note that you can also subclass or override the create_optimizer and create_scheduler methods separately.
create_optimizer – Sets up the optimizer if it wasn’t passed at init.
create_scheduler – Sets up the learning rate scheduler if it wasn’t passed at init.
compute_loss - Computes the loss on a batch of training inputs.
training_step – Performs a training step.
prediction_step – Performs an evaluation/test step.
run_model (TensorFlow only) – Basic pass through the model.
evaluate – Runs an evaluation loop and returns metrics.
predict – Returns predictions (with metrics if labels are available) on a test set.
Warning
The Trainer class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it on other models. When using it on your own model, make sure:
your model always returns tuples or subclasses of ModelOutput.
your model can compute the loss if a labels argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples).
your model can accept multiple label arguments (use label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named "label".
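As an illustration, here is a minimal sketch (not taken from the library) of a custom torch.nn.Module that satisfies these constraints: it accepts a labels argument, computes its own loss, and returns that loss as the first element of a tuple. The MyRegressionModel name and the "inputs"/"labels" keys are assumptions made for this example; your data collator would need to produce batches with matching keys:

import torch
import torch.nn as nn


class MyRegressionModel(nn.Module):
    """Hypothetical custom model usable with Trainer."""

    def __init__(self, input_dim: int):
        super().__init__()
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, inputs=None, labels=None):
        # One prediction per example.
        logits = self.linear(inputs).squeeze(-1)
        if labels is not None:
            # Compute the loss inside the model and return it first.
            loss = nn.functional.mse_loss(logits, labels.float())
            return (loss, logits)
        return (logits,)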
Here is an example of how to customize Trainer using a custom loss function for multi-label classification:

import torch
from transformers import Trainer


class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = torch.nn.BCEWithLogitsLoss()
        loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                        labels.float().view(-1, self.model.config.num_labels))
        return (loss, outputs) if return_outputs else loss
Another way to customize the training loop behavior for the PyTorch Trainer is to use callbacks that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms…) and take decisions (like early stopping).
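For instance, a hedged sketch of plugging in the library's EarlyStoppingCallback (model, args, train_dataset and eval_dataset are assumed to be defined elsewhere; args should set load_best_model_at_end=True, an evaluation strategy and metric_for_best_model so there is a metric to watch):

from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    # Stop if the watched metric has not improved for 3 evaluations.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()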
Trainer¶
-
class
transformers.
Trainer
(model: torch.nn.modules.module.Module = None, args: transformers.training_args.TrainingArguments = None, data_collator: Optional[NewType.<locals>.new_type] = None, train_dataset: Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, tokenizer: Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: Callable[transformers.modeling_utils.PreTrainedModel] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, callbacks: Optional[List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None))[source]¶ Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers.
- Parameters
model (PreTrainedModel or torch.nn.Module, optional) – The model to train, evaluate or use for predictions. If not provided, a model_init must be passed.
Note
Trainer is optimized to work with the PreTrainedModel provided by the library. You can still use your own models defined as torch.nn.Module as long as they work the same way as the 🤗 Transformers models.
args (TrainingArguments, optional) – The arguments to tweak for training. Will default to a basic instance of TrainingArguments with the output_dir set to a directory named tmp_trainer in the current directory if not provided.
data_collator (DataCollator, optional) – The function to use to form a batch from a list of elements of train_dataset or eval_dataset. Will default to default_data_collator() if no tokenizer is provided, an instance of DataCollatorWithPadding() otherwise.
train_dataset (torch.utils.data.dataset.Dataset, optional) – The dataset to use for training. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed.
eval_dataset (torch.utils.data.dataset.Dataset, optional) – The dataset to use for evaluation. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed.
tokenizer (PreTrainedTokenizerBase, optional) – The tokenizer used to preprocess the data. If provided, it will be used to automatically pad the inputs to the maximum length when batching inputs, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
model_init (Callable[[], PreTrainedModel], optional) – A function that instantiates the model to be used. If provided, each call to train() will start from a new instance of the model as given by this function. The function may have zero argument, or a single one containing the optuna/Ray Tune trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities etc.).
compute_metrics (Callable[[EvalPrediction], Dict], optional) – The function that will be used to compute metrics at evaluation. Must take an EvalPrediction and return a dictionary mapping metric names to metric values.
callbacks (List of TrainerCallback, optional) – A list of callbacks to customize the training loop. Will add those to the list of default callbacks detailed here. If you want to remove one of the default callbacks used, use the Trainer.remove_callback() method.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR], optional) – A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup() controlled by args.
Important attributes:
model – Always points to the core model. If using a transformers model, it will be a PreTrainedModel subclass.
model_wrapped – Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under DeepSpeed, the inner model is wrapped in DeepSpeed and then again in torch.nn.DistributedDataParallel. If the inner model hasn’t been wrapped, then self.model_wrapped is the same as self.model.
is_model_parallel – Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs).
place_model_on_device – Whether or not to automatically place the model on the device - it will be set to False if model parallel or deepspeed is used, or if the default TrainingArguments.place_model_on_device is overridden to return False.
is_in_train – Whether or not a model is currently running train (e.g. when evaluate is called while in train).
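For reference, a minimal end-to-end sketch (the checkpoint name is a public model, but train_dataset and eval_dataset are placeholders for tokenized datasets you are assumed to have prepared):

import numpy as np
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction with .predictions and .label_ids
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
args = TrainingArguments(output_dir="tmp_trainer", evaluation_strategy="epoch")

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,   # assumed: a tokenized training dataset
    eval_dataset=eval_dataset,     # assumed: a tokenized evaluation dataset
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()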
-
add_callback
(callback)[source]¶ Add a callback to the current list of TrainerCallback.
- Parameters
callback (type or TrainerCallback) – A TrainerCallback class or an instance of a TrainerCallback. In the first case, will instantiate a member of that class.
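For example, a sketch of a small custom callback registered through this method (the PrintLossCallback name is made up for illustration; trainer is an existing Trainer instance):

from transformers import TrainerCallback

class PrintLossCallback(TrainerCallback):
    # Called every time the Trainer logs something (e.g. the training loss).
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and "loss" in logs:
            print(f"step {state.global_step}: loss = {logs['loss']:.4f}")

# Pass either the class (it will be instantiated) or an instance.
trainer.add_callback(PrintLossCallback)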
-
compute_loss
(model, inputs, return_outputs=False)[source]¶ How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
-
create_optimizer
()[source]¶ Setup the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method.
-
create_optimizer_and_scheduler
(num_training_steps: int)[source]¶ Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method (or create_optimizer and/or create_scheduler) in a subclass.
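As a sketch of the optimizers alternative mentioned above (model, args and train_dataset are assumed to exist; the learning rate and step counts are arbitrary):

import torch
from transformers import Trainer, get_linear_schedule_with_warmup

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    # Bypasses create_optimizer_and_scheduler entirely.
    optimizers=(optimizer, scheduler),
)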
-
create_scheduler
(num_training_steps: int)[source]¶ Setup the scheduler. The optimizer of the trainer must have been set up before this method is called.
- Parameters
num_training_steps (int) – The number of training steps to do.
-
evaluate
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument). You can also subclass and override this method to inject custom behavior.
- Parameters
eval_dataset (Dataset, optional) – Pass a dataset if you wish to override self.eval_dataset. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") – An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "eval_bleu" if the prefix is "eval" (default).
- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state.
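A quick usage sketch (trainer is assumed to be an already constructed Trainer, and test_dataset a tokenized dataset):

metrics = trainer.evaluate()
print(metrics["eval_loss"])

# Evaluate another dataset under a different metric prefix.
test_metrics = trainer.evaluate(eval_dataset=test_dataset, metric_key_prefix="test")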
-
floating_point_ops
(inputs: Dict[str, Union[torch.Tensor, Any]])[source]¶ For models that inherit from PreTrainedModel, uses that method to compute the number of floating point operations for every backward + forward pass. If using another model, either implement such a method in the model or subclass and override this method.
- Parameters
inputs (Dict[str, Union[torch.Tensor, Any]]) – The inputs and targets of the model.
- Returns
The number of floating-point operations.
- Return type
int
-
get_eval_dataloader
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None) → torch.utils.data.dataloader.DataLoader[source]¶ Returns the evaluation DataLoader.
Subclass and override this method if you want to inject some custom behavior.
- Parameters
eval_dataset (torch.utils.data.dataset.Dataset, optional) – If provided, will override self.eval_dataset. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
-
get_test_dataloader
(test_dataset: torch.utils.data.dataset.Dataset) → torch.utils.data.dataloader.DataLoader[source]¶ Returns the test DataLoader.
Subclass and override this method if you want to inject some custom behavior.
- Parameters
test_dataset (torch.utils.data.dataset.Dataset, optional) – The test dataset to use. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
-
get_train_dataloader
() → torch.utils.data.dataloader.DataLoader[source]¶ Returns the training DataLoader.
Will use no sampler if self.train_dataset does not implement __len__, a random sampler (adapted to distributed training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
-
hyperparameter_search
(hp_space: Optional[Callable[optuna.Trial, Dict[str, float]]] = None, compute_objective: Optional[Callable[Dict[str, float], float]] = None, n_trials: int = 20, direction: str = 'minimize', backend: Optional[Union[str, transformers.trainer_utils.HPSearchBackend]] = None, hp_name: Optional[Callable[optuna.Trial, str]] = None, **kwargs) → transformers.trainer_utils.BestRun[source]¶ Launch a hyperparameter search using optuna or Ray Tune. The optimized quantity is determined by compute_objective, which defaults to a function returning the evaluation loss when no metric is provided, the sum of all metrics otherwise.
Warning
To use this method, you need to have provided a model_init when initializing your Trainer: we need to reinitialize the model at each new run. This is incompatible with the optimizers argument, so you need to subclass Trainer and override the method create_optimizer_and_scheduler() for custom optimizer/scheduler.
- Parameters
hp_space (Callable[["optuna.Trial"], Dict[str, float]], optional) – A function that defines the hyperparameter search space. Will default to default_hp_space_optuna() or default_hp_space_ray() depending on your backend.
compute_objective (Callable[[Dict[str, float]], float], optional) – A function computing the objective to minimize or maximize from the metrics returned by the evaluate method. Will default to default_compute_objective().
n_trials (int, optional, defaults to 20) – The number of trial runs to test.
direction (str, optional, defaults to "minimize") – Whether to optimize a greater or lower objective. Can be "minimize" or "maximize"; you should pick "minimize" when optimizing the validation loss, "maximize" when optimizing one or several metrics.
backend (str or HPSearchBackend, optional) – The backend to use for hyperparameter search. Will default to optuna or Ray Tune, depending on which one is installed. If both are installed, will default to optuna.
kwargs – Additional keyword arguments passed along to optuna.create_study or ray.tune.run. For more information see:
the documentation of optuna.create_study
the documentation of tune.run
- Returns
All the information about the best run.
- Return type
transformers.trainer_utils.BestRun
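A hedged sketch of a search with the optuna backend (model_init, args and the datasets are assumed to exist; the search space and trial count are arbitrary):

def hp_space(trial):
    # Keys must be names of TrainingArguments fields.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }

trainer = Trainer(
    model_init=model_init,          # reinstantiates the model for every trial
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
best_run = trainer.hyperparameter_search(hp_space=hp_space, n_trials=10, direction="minimize")
print(best_run.hyperparameters)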
-
is_local_process_zero
() → bool[source]¶ Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process.
-
is_world_process_zero
() → bool[source]¶ Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process).
-
log
(logs: Dict[str, float]) → None[source]¶ Log logs on the various objects watching training.
Subclass and override this method to inject custom behavior.
- Parameters
logs (Dict[str, float]) – The values to log.
-
log_metrics
(split, metrics)¶ Log metrics in a specially formatted way.
Under distributed environment this is done only for a process with rank 0.
- Parameters
split (str) – Mode/split name: one of train, eval, test
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict
Notes on memory reports:
In order to get memory usage report you need to install psutil. You can do that with pip install psutil.
Now when this method is run, you will see a report that will include:

init_mem_cpu_alloc_delta = 1301MB
init_mem_cpu_peaked_delta = 154MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 1345MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 693MB
train_mem_gpu_peaked_delta = 7MB

Understanding the reports:
the first segment, e.g., train__, tells you which stage the metrics are for. Reports starting with init_ will be added to the first stage that gets run. So that if only evaluation is run, the memory usage for the __init__ will be reported along with the eval_ metrics.
the third segment, either cpu or gpu, tells you whether it’s the general RAM or the gpu0 memory metric.
*_alloc_delta - is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated.
*_peaked_delta - is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative. When you look at the metrics of any stage you add up alloc_delta + peaked_delta and you know how much memory was needed to complete that stage.
The reporting happens only for the process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of the work, but it may not be quite so if model parallel is used, since other GPUs may then use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUs. Perhaps in the future these reports will evolve to measure those too.
The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory unique to the process and the memory shared with other processes. It is important to note that it does not include swapped out memory, so the reports could be imprecise.
The CPU peak memory is measured using a sampling thread. Due to python’s GIL it may miss some of the peak memory if that thread didn’t get a chance to run when the highest memory was used. Therefore this report can be less than reality. Using tracemalloc would have reported the exact peak memory, but it doesn’t report memory allocations outside of python. So if some C++ CUDA extension allocated its own memory it won’t be reported. And therefore it was dropped in favor of the memory sampling approach, which reads the current process memory usage.
The GPU allocated and peak memory reporting is done with torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated(). This metric reports only “deltas” for pytorch-specific allocations, as the torch.cuda memory management system doesn’t track any memory allocated outside of pytorch. For example, the very first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.
Note that this tracker doesn’t account for memory allocations outside of Trainer’s __init__, train, evaluate and predict calls.
Because evaluation calls may happen during train, we can’t handle nested invocations because torch.cuda.max_memory_allocated is a single counter, so if it gets reset by a nested eval call, train’s tracker will report incorrect info. If this pytorch issue gets resolved it will be possible to change this class to be re-entrant. Until then we will only track the outer level of the train, evaluate and predict methods. Which means that if eval is called during train, it’s the latter that will account for its memory usage and that of the former.
This also means that if any other tool that is used along with the Trainer calls torch.cuda.reset_peak_memory_stats, the gpu peak memory stats could be invalid. And the Trainer will disrupt the normal behavior of any such tools that rely on calling torch.cuda.reset_peak_memory_stats themselves.
For best performance you may want to consider turning the memory profiling off for production runs.
-
metrics_format
(metrics: Dict[str, float]) → Dict[str, float]¶ Reformat Trainer metrics values to a human-readable format.
- Parameters
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict
- Returns
The reformatted metrics
- Return type
metrics (Dict[str, float])
-
num_examples
(dataloader: torch.utils.data.dataloader.DataLoader) → int[source]¶ Helper to get the number of samples in a DataLoader by accessing its dataset.
Will raise an exception if the underlying dataset does not implement the __len__ method.
-
pop_callback
(callback)[source]¶ Remove a callback from the current list of TrainerCallback and returns it.
If the callback is not found, returns None (and no error is raised).
- Parameters
callback (type or TrainerCallback) – A TrainerCallback class or an instance of a TrainerCallback. In the first case, will pop the first member of that class found in the list of callbacks.
- Returns
The callback removed, if found.
- Return type
TrainerCallback
-
predict
(test_dataset: torch.utils.data.dataset.Dataset, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'test') → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
- Parameters
test_dataset (Dataset) – Dataset to run the predictions on. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") – An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "test_bleu" if the prefix is "test" (default).
Note
If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
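A short usage sketch (test_dataset is assumed to be a tokenized dataset, with or without labels):

import numpy as np

output = trainer.predict(test_dataset)
preds = np.argmax(output.predictions, axis=-1)   # predicted class indices for classification
if output.metrics is not None:
    print(output.metrics)                        # e.g. test_loss, plus any computed metrics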
-
prediction_loop
(dataloader: torch.utils.data.dataloader.DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → transformers.trainer_utils.PredictionOutput[source]¶ Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
Works both with or without labels.
-
prediction_step
(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[List[str]] = None) → Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]][source]¶ Perform an evaluation step on model using inputs.
Subclass and override to inject custom behavior.
- Parameters
model (nn.Module) – The model to evaluate.
inputs (Dict[str, Union[torch.Tensor, Any]]) – The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
prediction_loss_only (bool) – Whether or not to return the loss only.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
- Returns
A tuple with the loss, logits and labels (each being optional).
- Return type
Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]
-
remove_callback
(callback)[source]¶ Remove a callback from the current list of TrainerCallback.
- Parameters
callback (type or TrainerCallback) – A TrainerCallback class or an instance of a TrainerCallback. In the first case, will remove the first member of that class found in the list of callbacks.
-
save_metrics
(split, metrics, combined=True)¶ Save metrics into a json file for that split, e.g. train_results.json.
Under distributed environment this is done only for a process with rank 0.
- Parameters
split (str) – Mode/split name: one of train, eval, test, all
metrics (Dict[str, float]) – The metrics returned from train/evaluate/predict
combined (bool, optional, defaults to True) – Creates combined metrics by updating all_results.json with metrics of this call
To understand the metrics please read the docstring of log_metrics(). The only difference is that raw unformatted numbers are saved in the current method.
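As a sketch of the pattern used in the example scripts (trainer is assumed to be already built):

metrics = trainer.evaluate()
trainer.log_metrics("eval", metrics)   # pretty-prints the metrics
trainer.save_metrics("eval", metrics)  # writes eval_results.json and updates all_results.json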
-
save_model
(output_dir: Optional[str] = None)[source]¶ Will save the model, so you can reload it using from_pretrained().
Will only save from the main process.
-
save_state
()¶ Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model
Under distributed environment this is done only for a process with rank 0.
-
train
(resume_from_checkpoint: Optional[Union[str, bool]] = None, trial: Union[optuna.Trial, Dict[str, Any]] = None, **kwargs)[source]¶ Main training entry point.
- Parameters
resume_from_checkpoint (str or bool, optional) – If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (optuna.Trial or Dict[str, Any], optional) – The trial run or the hyperparameter dictionary for hyperparameter search.
kwargs – Additional keyword arguments used to hide deprecated arguments.
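A brief sketch of starting and resuming training (the checkpoint path is a placeholder):

# Train from scratch.
trainer.train()

# Resume from the most recent checkpoint found in args.output_dir ...
trainer.train(resume_from_checkpoint=True)

# ... or from an explicit checkpoint directory.
trainer.train(resume_from_checkpoint="output/checkpoint-500")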
-
training_step
(model: torch.nn.modules.module.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) → torch.Tensor[source]¶ Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
- Parameters
model (nn.Module) – The model to train.
inputs (Dict[str, Union[torch.Tensor, Any]]) – The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
- Returns
The tensor with training loss on this batch.
- Return type
torch.Tensor
Seq2SeqTrainer¶
-
class
transformers.
Seq2SeqTrainer
(model: torch.nn.modules.module.Module = None, args: transformers.training_args.TrainingArguments = None, data_collator: Optional[NewType.<locals>.new_type] = None, train_dataset: Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, tokenizer: Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: Callable[transformers.modeling_utils.PreTrainedModel] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, callbacks: Optional[List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None))[source]¶ -
evaluate
(eval_dataset: Optional[torch.utils.data.dataset.Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval', max_length: Optional[int] = None, num_beams: Optional[int] = None) → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument). You can also subclass and override this method to inject custom behavior.
- Parameters
eval_dataset (Dataset, optional) – Pass a dataset if you wish to override self.eval_dataset. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") – An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "eval_bleu" if the prefix is "eval" (default).
max_length (int, optional) – The maximum target length to use when predicting with the generate method.
num_beams (int, optional) – Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state.
-
predict
(test_dataset: torch.utils.data.dataset.Dataset, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval', max_length: Optional[int] = None, num_beams: Optional[int] = None) → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
- Parameters
test_dataset (Dataset) – Dataset to run the predictions on. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__.
ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") – An optional prefix to be used as the metrics key prefix. For example the metric "bleu" will be named "eval_bleu" if the prefix is "eval" (default).
max_length (int, optional) – The maximum target length to use when predicting with the generate method.
num_beams (int, optional) – Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
Note
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
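A hedged usage sketch for the generation-specific arguments (the trainer is assumed to wrap a seq2seq model, with predict_with_generate=True set in its Seq2SeqTrainingArguments so that predictions are generated token ids):

metrics = trainer.evaluate(max_length=128, num_beams=4)

output = trainer.predict(test_dataset, max_length=128, num_beams=4)
# With predict_with_generate, output.predictions holds generated token ids.
decoded = tokenizer.batch_decode(output.predictions, skip_special_tokens=True)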
-
TFTrainer¶
-
class
transformers.
TFTrainer
(model: transformers.modeling_tf_utils.TFPreTrainedModel, args: transformers.training_args_tf.TFTrainingArguments, train_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None, eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None, compute_metrics: Optional[Callable[transformers.trainer_utils.EvalPrediction, Dict]] = None, tb_writer: Optional[tensorflow.python.ops.summary_ops_v2.SummaryWriter] = None, optimizers: Tuple[tensorflow.python.keras.optimizer_v2.optimizer_v2.OptimizerV2, tensorflow.python.keras.optimizer_v2.learning_rate_schedule.LearningRateSchedule] = None, None)[source]¶ TFTrainer is a simple but feature-complete training and eval loop for TensorFlow, optimized for 🤗 Transformers.
- Parameters
model (TFPreTrainedModel) – The model to train, evaluate or use for predictions.
args (TFTrainingArguments) – The arguments to tweak training.
train_dataset (Dataset, optional) – The dataset to use for training. The dataset should yield tuples of (features, labels) where features is a dict of input features and labels is the labels. If labels is a tensor, the loss is calculated by the model by calling model(features, labels=labels). If labels is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by calling model(features, **labels).
eval_dataset (Dataset, optional) – The dataset to use for evaluation. The dataset should yield tuples of (features, labels) where features is a dict of input features and labels is the labels. If labels is a tensor, the loss is calculated by the model by calling model(features, labels=labels). If labels is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by calling model(features, **labels).
compute_metrics (Callable[[EvalPrediction], Dict], optional) – The function that will be used to compute metrics at evaluation. Must take an EvalPrediction and return a dictionary mapping metric names to metric values.
tb_writer (tf.summary.SummaryWriter, optional) – Object to write to TensorBoard.
optimizers (Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule], optional) – A tuple containing the optimizer and the scheduler to use. The optimizer defaults to an instance of tf.keras.optimizers.Adam if args.weight_decay_rate is 0, else an instance of AdamWeightDecay. The scheduler will default to an instance of tf.keras.optimizers.schedules.PolynomialDecay if args.num_warmup_steps is 0, else an instance of WarmUp.
-
create_optimizer_and_scheduler
(num_training_steps: int)[source]¶ Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the TFTrainer’s init through
optimizers
, or subclass and override this method.
-
evaluate
(eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) → Dict[str, float][source]¶ Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init
compute_metrics
argument).- Parameters
eval_dataset (
Dataset
, optional) – Pass a dataset if you wish to overrideself.eval_dataset
. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.- Returns
A dictionary containing the evaluation loss and the potential metrics computed from the predictions.
-
get_eval_tfdataset
(eval_dataset: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns the evaluation
Dataset
.- Parameters
eval_dataset (
Dataset
, optional) – If provided, will override self.eval_dataset. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.
Subclass and override this method if you want to inject some custom behavior.
-
get_test_tfdataset
(test_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2) → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns a test
Dataset
.- Parameters
test_dataset (
Dataset
) – The dataset to use. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
.
Subclass and override this method if you want to inject some custom behavior.
-
get_train_tfdataset
() → tensorflow.python.data.ops.dataset_ops.DatasetV2[source]¶ Returns the training
Dataset
.Subclass and override this method if you want to inject some custom behavior.
-
log
(logs: Dict[str, float]) → None[source]¶ Log
logs
on the various objects watching training.Subclass and override this method to inject custom behavior.
- Parameters
logs (
Dict[str, float]
) – The values to log.
-
predict
(test_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2) → transformers.trainer_utils.PredictionOutput[source]¶ Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in
evaluate()
.- Parameters
test_dataset (
Dataset
) – Dataset to run the predictions on. The dataset should yield tuples of(features, labels)
wherefeatures
is a dict of input features andlabels
is the labels. Iflabels
is a tensor, the loss is calculated by the model by callingmodel(features, labels=labels)
. Iflabels
is a dict, such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by callingmodel(features, **labels)
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
-
prediction_loop
(dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2, steps: int, num_examples: int, description: str, prediction_loss_only: Optional[bool] = None) → transformers.trainer_utils.PredictionOutput[source]¶ Prediction/evaluation loop, shared by
evaluate()
andpredict()
.Works both with or without labels.
-
prediction_step
(features: tensorflow.python.framework.ops.Tensor, labels: tensorflow.python.framework.ops.Tensor, nb_instances_in_global_batch: tensorflow.python.framework.ops.Tensor) → tensorflow.python.framework.ops.Tensor[source]¶ Compute the prediction on features and update the loss with labels.
Subclass and override to inject some custom behavior.
-
run_model
(features, labels, training)[source]¶ Computes the loss of the given features and labels pair.
Subclass and override this method if you want to inject some custom behavior.
- Parameters
features (
tf.Tensor
) – A batch of input features.labels (
tf.Tensor
) – A batch of labels.training (
bool
) – Whether or not to run the model in training mode.
- Returns
The loss and logits.
- Return type
A tuple of two tf.Tensor
-
save_model
(output_dir: Optional[str] = None)[source]¶ Will save the model, so you can reload it using
from_pretrained()
.
-
setup_comet
()[source]¶ Setup the optional Comet.ml integration.
- Environment:
- COMET_MODE: (Optional): str - “OFFLINE”, “ONLINE”, or “DISABLED”
- COMET_PROJECT_NAME: (Optional): str - Comet.ml project name for experiments
- COMET_OFFLINE_DIRECTORY: (Optional): str - folder to use for saving offline experiments when COMET_MODE is “OFFLINE”
For a number of configurable items in the environment, see here
-
setup_wandb
()[source]¶ Setup the optional Weights & Biases (wandb) integration.
One can subclass and override this method to customize the setup if needed. Find more information here. You can also override the following environment variables:
- Environment:
- WANDB_PROJECT: (Optional): str - “huggingface” by default, set this to a custom string to store results in a different project.
- WANDB_DISABLED: (Optional): boolean - defaults to false, set to “true” to disable wandb entirely.
TrainingArguments¶
-
class
transformers.
TrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = None, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: bool = False, dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = False, mp_parameters: str = '')[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
prediction_loss_only (
bool
, optional, defaults to False) – When performing evaluation and generating predictions, only returns the loss.per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps (
int
, optional, defaults to 1) –Number of updates steps to accumulate the gradients for, before performing a backward/update pass.
Warning
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every
gradient_accumulation_steps * xxx_step
training examples.eval_accumulation_steps (
int
, optional) – Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate forAdamW
optimizer.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights inAdamW
optimizer.adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for theAdamW
optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for theAdamW
optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for theAdamW
optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training).max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.lr_scheduler_type (
str
orSchedulerType
, optional, defaults to"linear"
) – The scheduler type to use. See the documentation ofSchedulerType
for all possible values.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of updates steps before two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to not use CUDA even when it is available or not.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use themodel_init()
function to instantiate the model if it has some randomly initialized parameters.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.fp16_backend (
str
, optional, defaults to"auto"
) – The backend to use for mixed precision training. Must be one of"auto"
,"amp"
or"apex"
."auto"
will use AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.fp16_full_eval (
bool
, optional, defaults toFalse
) – Whether to use full 16-bit precision evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.local_rank (
int
, optional, defaults to -1) – Rank of the process during distributed training.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).debug (
bool
, optional, defaults toFalse
) – When training on TPU, whether to print debug metrics or not.dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional) – Number of update steps between two evaluations ifevaluation_strategy="steps"
. Will default to the same value aslogging_steps
if not set.dataloader_num_workers (
int
, optional, defaults to 0) – Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or :doc`XLNet <../model_doc/xlnet>` can make use of the past hidden states for their predictions. If this argument is set to a positive int, theTrainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.run_name (
str
, optional) – A descriptor for the run. Typically used for wandb logging.disable_tqdm (
bool
, optional) – Whether or not to disable the tqdm progress bars and table of metrics produced byNotebookTrainingTracker
in Jupyter Notebooks. Will default toTrue
if the logging level is set to warn or lower (default),False
otherwise.remove_unused_columns (
bool
, optional, defaults toTrue
) –If using
datasets.Dataset
datasets, whether or not to automatically remove the columns unused by the model forward method.(Note that this behavior is not implemented for
TFTrainer
yet.)label_names (
List[str]
, optional) –The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to
["labels"]
except if the model used is one of theXxxForQuestionAnswering
in which case it will default to["start_positions", "end_positions"]
.load_best_model_at_end (
bool
, optional, defaults toFalse
) –Whether or not to load the best model found during training at the end of training.
Note
When set to
True
, the parameterssave_strategy
andsave_steps
will be ignored and the model will be saved after each evaluation.metric_for_best_model (
str
, optional) –Use in conjunction with
load_best_model_at_end
to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix"eval_"
. Will default to"loss"
if unspecified andload_best_model_at_end=True
(to use the evaluation loss).If you set this value,
greater_is_better
will default toTrue
. Don’t forget to set it toFalse
if your metric is better when lower.greater_is_better (
bool
, optional) –Use in conjunction with
load_best_model_at_end
andmetric_for_best_model
to specify if better models should have a greater metric or not. Will default to:True
ifmetric_for_best_model
is set to a value that isn’t"loss"
or"eval_loss"
.False
ifmetric_for_best_model
is not set, or set to"loss"
or"eval_loss"
.
ignore_data_skip (
bool
, optional, defaults toFalse
) – When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set toTrue
, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.sharded_ddp (
bool
,str
or list ofShardedDDPOption
, optional, defaults toFalse
) –Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options along the following:
"simple"
: to use first instance of sharded DDP released by fairscale (ShardedDDP
) similar to ZeRO-2."zero_dp_2"
: to use the second instance of sharded DPP released by fairscale (FullyShardedDDP
) in Zero-2 mode (withreshard_after_forward=False
)."zero_dp_3"
: to use the second instance of sharded DPP released by fairscale (FullyShardedDDP
) in Zero-3 mode (withreshard_after_forward=True
)."offload"
: to add ZeRO-offload (only compatible with"zero_dp_2"
and"zero_dp_3"
).
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for
False
and["simple"]
forTrue
.deepspeed (
str
ordict
, optional) – Use Deepspeed. This is an experimental feature and its API may evolve in the future. The value is either the location of DeepSpeed json config file (e.g.,ds_config.json
) or an already loaded json file as adict
”label_smoothing_factor (
float
, optional, defaults to 0.0) – The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s tolabel_smoothing_factor/num_labels
and1 - label_smoothing_factor + label_smoothing_factor/num_labels
respectively.adafactor (
bool
, optional, defaults toFalse
) – Whether or not to use theAdafactor
optimizer instead ofAdamW
.group_by_length (
bool
, optional, defaults toFalse
) – Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.length_column_name (
str
, optional, defaults to"length"
) – Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unlessgroup_by_length
isTrue
and the dataset is an instance ofDataset
.report_to (
str
orList[str]
, optional, defaults to"all"
) – The list of integrations to report the results and logs to. Supported platforms are"azure_ml"
,"comet_ml"
,"mlflow"
,"tensorboard"
and"wandb"
. Use"all"
to report to all integrations installed,"none"
for no integrations.ddp_find_unused_parameters (
bool
, optional) – When using distributed training, the value of the flagfind_unused_parameters
passed toDistributedDataParallel
. Will default toFalse
if gradient checkpointing is used,True
otherwise.dataloader_pin_memory (
bool
, optional, defaults toTrue
)) – Whether you want to pin memory in data loaders or not. Will default toTrue
.skip_memory_metrics (
bool
, optional, defaults toFalse
)) – Whether to skip adding of memory profiler reports to metrics. Defaults toFalse
.
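A brief construction sketch (the directory name and hyperparameter values are arbitrary placeholders):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_model",             # where checkpoints and predictions are written
    evaluation_strategy="steps",
    eval_steps=500,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=3,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)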
-
property
device
¶ The device used by this process.
-
property
eval_batch_size
¶ The actual batch size for evaluation (may differ from
per_gpu_eval_batch_size
in distributed training).
-
property
n_gpu
¶ The number of GPUs used by this process.
Note
This will only be greater than one when you have multiple GPUs available but are not using distributed training. For distributed training, it will always be 1.
-
property
parallel_mode
¶ The current mode used for parallelism if multiple GPUs/TPU cores are available. One of:
ParallelMode.NOT_PARALLEL
: no parallelism (CPU or one GPU).ParallelMode.NOT_DISTRIBUTED
: several GPUs in one single process (usestorch.nn.DataParallel
).ParallelMode.DISTRIBUTED
: several GPUs, each having its own process (usestorch.nn.DistributedDataParallel
).ParallelMode.TPU
: several TPU cores.
-
property
place_model_on_device
¶ Can be subclassed and overridden for some specific integrations.
-
property
process_index
¶ The index of the current process used in parallel.
-
to_dict
()[source]¶ Serializes this instance while replacing Enum by their values (for JSON serialization support).
-
to_sanitized_dict
() → Dict[str, Any][source]¶ Sanitized serialization to use with TensorBoard’s hparams
-
property
train_batch_size
¶ The actual batch size for training (may differ from
per_gpu_train_batch_size
in distributed training).
-
property
world_size
¶ The number of processes used in parallel.
Seq2SeqTrainingArguments¶
-
class
transformers.
Seq2SeqTrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = None, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: bool = False, dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = False, mp_parameters: str = '', sortish_sampler: bool = False, predict_with_generate: bool = False)[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
prediction_loss_only (
bool
, optional, defaults to False) – When performing evaluation and generating predictions, only returns the loss.per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps (
int
, optional, defaults to 1) –Number of updates steps to accumulate the gradients for, before performing a backward/update pass.
Warning
When using gradient accumulation, one step is counted as one step with a backward pass. Therefore, logging, evaluation and saving will be conducted every
gradient_accumulation_steps * xxx_step
training examples.eval_accumulation_steps (
int
, optional) – Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate forAdamW
optimizer.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights inAdamW
optimizer.adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for theAdamW
optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for theAdamW
optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for theAdamW
optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training).max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.lr_scheduler_type (
str
orSchedulerType
, optional, defaults to"linear"
) – The scheduler type to use. See the documentation ofSchedulerType
for all possible values.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of updates steps before two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to avoid using CUDA even when it is available.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use themodel_init()
function to instantiate the model if it has some randomly initialized parameters.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.fp16_backend (
str
, optional, defaults to"auto"
) – The backend to use for mixed precision training. Must be one of"auto"
,"amp"
or"apex"
."auto"
will use AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.fp16_full_eval (
bool
, optional, defaults toFalse
) – Whether to use full 16-bit precision evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.local_rank (
int
, optional, defaults to -1) – Rank of the process during distributed training.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).debug (
bool
, optional, defaults toFalse
) – When training on TPU, whether to print debug metrics or not.dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional) – Number of update steps between two evaluations ifevaluation_strategy="steps"
. Will default to the same value aslogging_steps
if not set.dataloader_num_workers (
int
, optional, defaults to 0) – Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.run_name (
str
, optional) –A descriptor for the run. Typically used for wandb logging.
disable_tqdm (
bool
, optional) – Whether or not to disable the tqdm progress bars and table of metrics produced byNotebookTrainingTracker
in Jupyter Notebooks. Will default toTrue
if the logging level is set to warn or lower (default),False
otherwise.remove_unused_columns (
bool
, optional, defaults toTrue
) –If using
datasets.Dataset
datasets, whether or not to automatically remove the columns unused by the model forward method.(Note that this behavior is not implemented for
TFTrainer
yet.)label_names (
List[str]
, optional) –The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to
["labels"]
except if the model used is one of theXxxForQuestionAnswering
in which case it will default to["start_positions", "end_positions"]
.load_best_model_at_end (
bool
, optional, defaults toFalse
) –Whether or not to load the best model found during training at the end of training.
Note
When set to
True
, the parameterssave_strategy
andsave_steps
will be ignored and the model will be saved after each evaluation.metric_for_best_model (
str
, optional) –Use in conjunction with
load_best_model_at_end
to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix"eval_"
. Will default to"loss"
if unspecified andload_best_model_at_end=True
(to use the evaluation loss).If you set this value,
greater_is_better
will default toTrue
. Don’t forget to set it toFalse
if your metric is better when lower.greater_is_better (
bool
, optional) –Use in conjunction with
load_best_model_at_end
andmetric_for_best_model
to specify if better models should have a greater metric or not. Will default to:True
ifmetric_for_best_model
is set to a value that isn’t"loss"
or"eval_loss"
.False
ifmetric_for_best_model
is not set, or set to"loss"
or"eval_loss"
.
ignore_data_skip (
bool
, optional, defaults toFalse
) – When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set toTrue
, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.sharded_ddp (
bool
,str
or list ofShardedDDPOption
, optional, defaults toFalse
) –Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options among the following:
"simple"
: to use first instance of sharded DDP released by fairscale (ShardedDDP
) similar to ZeRO-2."zero_dp_2"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-2 mode (withreshard_after_forward=False
)."zero_dp_3"
: to use the second instance of sharded DDP released by fairscale (FullyShardedDDP
) in Zero-3 mode (withreshard_after_forward=True
)."offload"
: to add ZeRO-offload (only compatible with"zero_dp_2"
and"zero_dp_3"
).
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for
False
and["simple"]
forTrue
.deepspeed (
str
ordict
, optional) – Use Deepspeed. This is an experimental feature and its API may evolve in the future. The value is either the location of DeepSpeed json config file (e.g.,ds_config.json
) or an already loaded json file as adict
.label_smoothing_factor (
float
, optional, defaults to 0.0) – The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s tolabel_smoothing_factor/num_labels
and1 - label_smoothing_factor + label_smoothing_factor/num_labels
respectively.adafactor (
bool
, optional, defaults toFalse
) – Whether or not to use theAdafactor
optimizer instead ofAdamW
.group_by_length (
bool
, optional, defaults toFalse
) – Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.length_column_name (
str
, optional, defaults to"length"
) – Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unlessgroup_by_length
isTrue
and the dataset is an instance ofDataset
.report_to (
str
orList[str]
, optional, defaults to"all"
) – The list of integrations to report the results and logs to. Supported platforms are"azure_ml"
,"comet_ml"
,"mlflow"
,"tensorboard"
and"wandb"
. Use"all"
to report to all integrations installed,"none"
for no integrations.ddp_find_unused_parameters (
bool
, optional) – When using distributed training, the value of the flagfind_unused_parameters
passed toDistributedDataParallel
. Will default toFalse
if gradient checkpointing is used,True
otherwise.dataloader_pin_memory (
bool
, optional, defaults toTrue
)) – Whether you want to pin memory in data loaders or not. Will default toTrue
.skip_memory_metrics (
bool
, optional, defaults toFalse
)) – Whether to skip adding of memory profiler reports to metrics. Defaults toFalse
.
- sortish_sampler (
bool
, optional, defaults toFalse
): Whether to use a sortish sampler or not. Only possible if the underlying datasets are Seq2SeqDataset for now but will become generally available in the near future.
It sorts the inputs according to lengths in order to minimize the padding size, with a bit of randomness for the training set.
- predict_with_generate (
bool
, optional, defaults toFalse
): Whether to use generate to calculate generative metrics (ROUGE, BLEU); a usage sketch follows below.
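Here is a minimal usage sketch for these two arguments together with Seq2SeqTrainer (the model name and datasets are placeholders, mirroring the translation examples later in this page):
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

training_args = Seq2SeqTrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=1,
    predict_with_generate=True,  # compute generative metrics (ROUGE, BLEU) via generate()
    sortish_sampler=True,        # sort-ish batching to minimize padding
)

# train_dataset, eval_dataset and compute_metrics are placeholders you provide.
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    # train_dataset=train_dataset,
    # eval_dataset=eval_dataset,
    # compute_metrics=compute_metrics,
)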
TFTrainingArguments¶
-
class
transformers.
TFTrainingArguments
(output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = None, do_predict: bool = False, evaluation_strategy: transformers.trainer_utils.IntervalStrategy = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: Optional[int] = None, per_gpu_eval_batch_size: Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: Optional[int] = None, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: transformers.trainer_utils.SchedulerType = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, logging_dir: Optional[str] = <factory>, logging_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', logging_first_step: bool = False, logging_steps: int = 500, save_strategy: transformers.trainer_utils.IntervalStrategy = 'steps', save_steps: int = 500, save_total_limit: Optional[int] = None, no_cuda: bool = False, seed: int = 42, fp16: bool = False, fp16_opt_level: str = 'O1', fp16_backend: str = 'auto', fp16_full_eval: bool = False, local_rank: int = -1, tpu_num_cores: Optional[int] = None, tpu_metrics_debug: bool = False, debug: bool = False, dataloader_drop_last: bool = False, eval_steps: int = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: Optional[str] = None, disable_tqdm: Optional[bool] = None, remove_unused_columns: Optional[bool] = True, label_names: Optional[List[str]] = None, load_best_model_at_end: Optional[bool] = False, metric_for_best_model: Optional[str] = None, greater_is_better: Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: str = '', deepspeed: Optional[str] = None, label_smoothing_factor: float = 0.0, adafactor: bool = False, group_by_length: bool = False, length_column_name: Optional[str] = 'length', report_to: Optional[List[str]] = None, ddp_find_unused_parameters: Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = False, mp_parameters: str = '', tpu_name: str = None, tpu_zone: str = None, gcp_project: str = None, poly_power: float = 1.0, xla: bool = False)[source]¶ TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using
HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.- Parameters
output_dir (
str
) – The output directory where the model predictions and checkpoints will be written.overwrite_output_dir (
bool
, optional, defaults toFalse
) – IfTrue
, overwrite the content of the output directory. Use this to continue training ifoutput_dir
points to a checkpoint directory.do_train (
bool
, optional, defaults toFalse
) – Whether to run training or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_eval (
bool
, optional) – Whether to run evaluation on the validation set or not. Will be set toTrue
ifevaluation_strategy
is different from"no"
. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.do_predict (
bool
, optional, defaults toFalse
) – Whether to run predictions on the test set or not. This argument is not directly used byTrainer
, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.evaluation_strategy (
str
orIntervalStrategy
, optional, defaults to"no"
) –The evaluation strategy to adopt during training. Possible values are:
"no"
: No evaluation is done during training."steps"
: Evaluation is done (and logged) everyeval_steps
."epoch"
: Evaluation is done at the end of each epoch.
per_device_train_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for training.per_device_eval_batch_size (
int
, optional, defaults to 8) – The batch size per GPU/TPU core/CPU for evaluation.gradient_accumulation_steps –
(
int
, optional, defaults to 1): Number of updates steps to accumulate the gradients for, before performing a backward/update pass.Warning
When using gradient accumulation, one step is counted as one step with a backward pass. Therefore, logging, evaluation and saving will be conducted every
gradient_accumulation_steps * xxx_step
training examples.learning_rate (
float
, optional, defaults to 5e-5) – The initial learning rate for Adam.weight_decay (
float
, optional, defaults to 0) – The weight decay to apply (if not zero).adam_beta1 (
float
, optional, defaults to 0.9) – The beta1 hyperparameter for the Adam optimizer.adam_beta2 (
float
, optional, defaults to 0.999) – The beta2 hyperparameter for the Adam optimizer.adam_epsilon (
float
, optional, defaults to 1e-8) – The epsilon hyperparameter for the Adam optimizer.max_grad_norm (
float
, optional, defaults to 1.0) – Maximum gradient norm (for gradient clipping).num_train_epochs (
float
, optional, defaults to 3.0) – Total number of training epochs to perform.max_steps (
int
, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overridesnum_train_epochs
.warmup_ratio (
float
, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 tolearning_rate
.warmup_steps (
int
, optional, defaults to 0) – Number of steps used for a linear warmup from 0 tolearning_rate
. Overrides any effect ofwarmup_ratio
.logging_dir (
str
, optional) – TensorBoard log directory. Will default to runs/**CURRENT_DATETIME_HOSTNAME**.logging_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The logging strategy to adopt during training. Possible values are:
"no"
: No logging is done during training."epoch"
: Logging is done at the end of each epoch."steps"
: Logging is done everylogging_steps
.
logging_first_step (
bool
, optional, defaults toFalse
) – Whether to log and evaluate the firstglobal_step
or not.logging_steps (
int
, optional, defaults to 500) – Number of update steps between two logs iflogging_strategy="steps"
.save_strategy (
str
orIntervalStrategy
, optional, defaults to"steps"
) –The checkpoint save strategy to adopt during training. Possible values are:
"no"
: No save is done during training."epoch"
: Save is done at the end of each epoch."steps"
: Save is done everysave_steps
.
save_steps (
int
, optional, defaults to 500) – Number of updates steps before two checkpoint saves ifsave_strategy="steps"
.save_total_limit (
int
, optional) – If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints inoutput_dir
.no_cuda (
bool
, optional, defaults toFalse
) – Whether to avoid using CUDA even when it is available.seed (
int
, optional, defaults to 42) – Random seed that will be set at the beginning of training.fp16 (
bool
, optional, defaults toFalse
) – Whether to use 16-bit (mixed) precision training (through NVIDIA Apex) instead of 32-bit training.fp16_opt_level (
str
, optional, defaults to ‘O1’) – Forfp16
training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.local_rank (
int
, optional, defaults to -1) – During distributed training, the rank of the process.tpu_num_cores (
int
, optional) – When training on TPU, the number of TPU cores (automatically passed by launcher script).debug (
bool
, optional, defaults toFalse
) – Whether to activate the trace to record computation graphs and profiling information or not.dataloader_drop_last (
bool
, optional, defaults toFalse
) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.eval_steps (
int
, optional, defaults to 1000) – Number of update steps before two evaluations.past_index (
int
, optional, defaults to -1) – Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer
will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argumentmems
.tpu_name (
str
, optional) – The name of the TPU the process is running on.tpu_zone (
str
, optional) – The zone of the TPU the process is running on. If not specified, we will attempt to automatically detect from metadata.gcp_project (
str
, optional) – Google Cloud Project name for the Cloud TPU-enabled project. If not specified, we will attempt to automatically detect from metadata.run_name (
str
, optional) – A descriptor for the run. Notably used for wandb logging.xla (
bool
, optional) – Whether to activate the XLA compilation or not.
-
property
eval_batch_size
¶ The actual batch size for evaluation (may differ from
per_gpu_eval_batch_size
in distributed training).
-
property
n_gpu
¶ The number of replicas (CPUs, GPUs or TPU cores) used in this training.
-
property
n_replicas
¶ The number of replicas (CPUs, GPUs or TPU cores) used in this training.
-
property
strategy
¶ The strategy used for distributed training.
-
property
train_batch_size
¶ The actual batch size for training (may differ from
per_gpu_train_batch_size
in distributed training).
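As an illustration only (a minimal sketch, not part of the reference above), the strategy property is typically used to open a distribution scope when building the model for TFTrainer:
from transformers import TFAutoModelForSequenceClassification, TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

# Build the model under the distribution strategy the arguments detected
# (single device, MirroredStrategy or TPUStrategy).
with training_args.strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# train_dataset is a placeholder tf.data.Dataset you provide.
# trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)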
Trainer Integrations¶
The Trainer
has been extended to support libraries that may dramatically improve your training
time and fit much bigger models.
Currently it supports third party solutions, DeepSpeed and FairScale, which implement parts of the paper ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He.
This provided support is new and experimental as of this writing.
Installation Notes¶
As of this writing, both FairScale and Deepspeed require compilation of CUDA C++ code before they can be used.
While all installation issues should be dealt with through the corresponding GitHub Issues of FairScale and Deepspeed, there are a few common issues that one may encounter while building any PyTorch extension that needs to build CUDA extensions.
Therefore, if you encounter a CUDA-related build issue while doing one of the following or both:
pip install fairscale
pip install deepspeed
please, read the following notes first.
In these notes we give examples for what to do when pytorch
has been built with CUDA 10.2
. If your situation is
different remember to adjust the version number to the one you are after.
Possible problem #1:
While PyTorch comes with its own CUDA toolkit, to build these two projects you must have an identical version of CUDA installed system-wide.
For example, if you installed pytorch
with cudatoolkit==10.2
in the Python environment, you also need to have
CUDA 10.2
installed system-wide.
The exact location may vary from system to system, but /usr/local/cuda-10.2
is the most common location on many
Unix systems. When CUDA is correctly set up and added to the PATH
environment variable, one can find the
installation location by doing:
which nvcc
If you don’t have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you’re on Ubuntu you may want to search for: ubuntu cuda 10.2 install.
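As a quick sanity check (a small sketch, not part of the original notes), you can ask PyTorch which CUDA version it was built with, so you know which system-wide toolkit you need to match:
import torch

# The CUDA version PyTorch was compiled against, e.g. "10.2".
print("PyTorch built with CUDA:", torch.version.cuda)
print("CUDA available at runtime:", torch.cuda.is_available())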
Possible problem #2:
Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you may have:
/usr/local/cuda-10.2
/usr/local/cuda-11.0
Now, in this situation you need to make sure that your PATH
and LD_LIBRARY_PATH
environment variables contain
the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the
last version was installed. If you encounter a problem where the package build fails because it can’t find the right
CUDA version despite having it installed system-wide, it means that you need to adjust the two aforementioned
environment variables.
First, you may look at their contents:
echo $PATH
echo $LD_LIBRARY_PATH
so you get an idea of what is inside.
It’s possible that LD_LIBRARY_PATH
is empty.
PATH
lists the locations where executables can be found, and LD_LIBRARY_PATH
is where shared libraries
are looked for. In both cases, earlier entries have priority over later ones. :
is used to separate multiple
entries.
Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths to be listed first by doing:
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
Note that we aren’t overwriting the existing values, but prepending instead.
Of course, adjust the version number and the full path if need be. Check that the directories you assign actually do
exist. The lib64
sub-directory is where the various CUDA .so
objects, like libcudart.so
, reside. It’s unlikely
that your system will have it named differently, but if it does, adjust it to reflect your reality.
Possible problem #3:
Some older CUDA versions may refuse to build with newer compilers. For example, you may have gcc-9
but it wants
gcc-7
.
There are various ways to go about it.
If you can install the latest CUDA toolkit it typically should support the newer compiler.
Alternatively, you could install the lower version of the compiler in addition to the one you already have, or you may
already have it but it’s not the default one, so the build system can’t see it. If you have gcc-7
installed but the
build system complains it can’t find it, the following might do the trick:
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
Here, we are making a symlink to gcc-7
from /usr/local/cuda-10.2/bin/gcc
and since
/usr/local/cuda-10.2/bin/
should be in the PATH
environment variable (see the previous problem’s solution), it
should find gcc-7
(and g++-7
) and then the build will succeed.
As always make sure to edit the paths in the example to match your situation.
If still unsuccessful:
If after addressing these you still encounter build issues, please, proceed with the GitHub Issue of FairScale and Deepspeed, depending on the project you have the problem with.
FairScale¶
By integrating FairScale the Trainer
provides support for the following features from the ZeRO paper:
Optimizer State Sharding
Gradient Sharding
Model Parameters Sharding (new and very experimental)
CPU offload (new and very experimental)
You will need at least two GPUs to use this feature.
To deploy this feature:
Install the library via pypi:
pip install fairscale
or find more details on FairScale’s GitHub page.
To use the first version of Sharded data-parallelism, add
--sharded_ddp simple
to the command line arguments, and make sure you have added the distributed launcher-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE
if you haven’t been using it already.
For example here is how you could use it for run_translation.py
with 2 GPUs:
python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--fp16 --sharded_ddp simple
Notes:
This feature requires distributed training (so multiple GPUs).
It is not implemented for TPUs.
It works with
--fp16
too, to make things even faster.One of the main benefits of enabling
--sharded_ddp simple
is that it uses a lot less GPU memory, so you should be able to use significantly larger batch sizes using the same hardware (e.g. 3x and even bigger) which should lead to significantly shorter training time.
To use the second version of Sharded data-parallelism, add
--sharded_ddp zero_dp_2
or --sharded_ddp zero_dp_3 to the command line arguments, and make sure you have added the distributed launcher -m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE
if you haven’t been using it already.
For example here is how you could use it for run_translation.py
with 2 GPUs:
python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--fp16 --sharded_ddp zero_dp_2
zero_dp_2
is an optimized version of the simple wrapper, while zero_dp_3
fully shards model weights,
gradients and optimizer states.
Both are compatible with adding cpu_offload
to enable ZeRO-offload (activate it like this: --sharded_ddp
"zero_dp_2 cpu_offload"
).
Notes:
This feature requires distributed training (so multiple GPUs).
It is not implemented for TPUs.
It works with
--fp16
too, to make things even faster.The
cpu_offload
additional option requires--fp16
.This is an area of active development, so make sure you have a source install of fairscale to use this feature as some bugs you encounter may have been fixed there already.
Known caveats:
This feature is incompatible with
--predict_with_generate
in the run_translation.py script.Using
--sharded_ddp zero_dp_3
requires wrapping each layer of the model in the special container FullyShardedDataParallel
of fairscale. It should be used with the optionauto_wrap
if you are not doing this yourself:--sharded_ddp "zero_dp_3 auto_wrap"
.
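Equivalently (a hedged sketch, assuming you build the arguments from Python rather than the command line), the same options can be passed programmatically; the script still has to be launched with torch.distributed.launch on multiple GPUs:
from transformers import TrainingArguments

# Equivalent of --fp16 --sharded_ddp "zero_dp_3 auto_wrap" on the command line;
# the string is split on spaces into the individual sharding options.
training_args = TrainingArguments(
    output_dir="output_dir",
    fp16=True,
    sharded_ddp="zero_dp_3 auto_wrap",
)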
DeepSpeed¶
DeepSpeed implements everything described in the ZeRO paper, except ZeRO’s stage 3, “Parameter Partitioning (Pos+g+p)”. Currently it provides full support for:
Optimizer State Partitioning (ZeRO stage 1)
Gradient Partitioning (ZeRO stage 2)
Custom fp16 handling
A range of fast Cuda-extension-based Optimizers
ZeRO-Offload
ZeRO-Offload has its own dedicated paper: ZeRO-Offload: Democratizing Billion-Scale Model Training.
DeepSpeed is currently used only for training, as all the currently available features are of no use for inference.
Installation¶
Install the library via pypi:
pip install deepspeed
or find more details on DeepSpeed’s GitHub page.
Deployment with multiple GPUs¶
To deploy this feature with multiple GPUs adjust the Trainer
command line arguments as
follows:
replace
python -m torch.distributed.launch
withdeepspeed
.add a new argument
--deepspeed ds_config.json
, whereds_config.json
is the DeepSpeed configuration file as documented here. The file naming is up to you.
Therefore, if your original command line looked as follows:
python -m torch.distributed.launch --nproc_per_node=2 your_program.py <normal cl args>
Now it should be:
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
Unlike torch.distributed.launch
, where you have to specify how many GPUs to use with --nproc_per_node
, with the
deepspeed
launcher you don’t have to use the corresponding --num_gpus
if you want all of your GPUs used. The
full details on how to configure various nodes and GPUs can be found here.
In fact, you can continue using -m torch.distributed.launch
with DeepSpeed as long as you don’t need to use
deepspeed
launcher-specific arguments. Typically if you don’t need a multi-node setup you’re not required to use
the deepspeed
launcher. But since in the DeepSpeed documentation it’ll be used everywhere, for consistency we will
use it here as well.
Here is an example of running run_translation.py
under DeepSpeed deploying all available GPUs:
deepspeed examples/seq2seq/run_translation.py \
--deepspeed examples/tests/deepspeed/ds_config.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
Note that in the DeepSpeed documentation you are likely to see --deepspeed --deepspeed_config ds_config.json
- i.e.
two DeepSpeed-related arguments, but for the sake of simplicity, and since there are already so many arguments to deal
with, we combined the two into a single argument.
For some practical usage examples, please, see this post.
Deployment with one GPU¶
To deploy DeepSpeed with one GPU adjust the Trainer
command line arguments as follows:
deepspeed --num_gpus=1 examples/seq2seq/run_translation.py \
--deepspeed examples/tests/deepspeed/ds_config.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
This is almost the same as with multiple GPUs, but here we tell DeepSpeed explicitly to use just one GPU. By default, DeepSpeed deploys all GPUs it can see. If you have only 1 GPU to start with, then you don’t need this argument. The following documentation discusses the launcher options.
Why would you want to use DeepSpeed with just one GPU?
It has a ZeRO-offload feature which can delegate some computations and memory to the host’s CPU and RAM, and thus leave more GPU resources for the model’s needs - e.g. a larger batch size, or fitting a very big model which normally won’t fit.
It provides a smart GPU memory management system that minimizes memory fragmentation, which again allows you to fit bigger models and data batches.
While we are going to discuss the configuration in detail next, the key to getting a huge improvement on a single GPU with DeepSpeed is to have at least the following configuration in the configuration file:
{
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": true
}
}
which enables cpu_offload
and some other important features. You may experiment with the buffer sizes; you will
find more details in the discussion below.
For a practical usage example of this type of deployment, please, see this post.
Notes:
if you need to run on a specific GPU, which is different from GPU 0, you can’t use
CUDA_VISIBLE_DEVICES
to limit the visible scope of available GPUs. Instead, you have to use the following syntax:deepspeed --include localhost:1 examples/seq2seq/run_translation.py ...
In this example, we tell DeepSpeed to use GPU 1 (the second GPU).
Deployment in Notebooks¶
The problem with running notebook cells as a script is that there is no normal deepspeed
launcher to rely on, so
under certain setups we have to emulate it.
Here is how you’d have to adjust your training code in the notebook to use DeepSpeed.
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9994' # modify if RuntimeError: Address already in use
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config.json")
trainer = Trainer(...)
trainer.train()
Note: … stands for the normal arguments that you’d pass to the functions.
If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with:
%%bash
cat <<'EOT' > ds_config.json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
EOT
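Alternatively (a minimal sketch, not from the original guide), you could build the same configuration as a Python dict and write it out with the json module from a regular notebook cell:
import json

ds_config = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1,
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "allgather_bucket_size": 2e8,
        "overlap_comm": True,
        "reduce_scatter": True,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": True,
        "cpu_offload": True,
    },
    "steps_per_print": 2000,
    "wall_clock_breakdown": False,
}

# Write the file next to the notebook so it can be passed as deepspeed="ds_config.json".
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=4)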
That said, if the script is not in the notebook cells, you can launch deepspeed
normally via shell from a cell
with:
!deepspeed examples/seq2seq/run_translation.py ...
or with bash magic, where you can write multi-line code for the shell to run:
%%bash
cd /somewhere
deepspeed examples/seq2seq/run_translation.py ...
Configuration¶
For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the following documentation.
You can find dozens of DeepSpeed configuration examples that address various practical needs in the DeepSpeedExamples repo:
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
Continuing the code from above, let’s say you’re looking to configure the Lamb optimizer. So you can search through the
example .json
files with:
grep -i Lamb $(find . -name '*json')
Some more examples are to be found in the main repo as well.
When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide.
To get an idea of what a DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features,
enables FP16, uses AdamW
optimizer and WarmupLR
scheduler:
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [ 0.8, 0.999 ],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
}
}
When you execute the program, DeepSpeed will log the configuration it received from the Trainer
to the console, so you can see exactly what final configuration was passed to it.
Passing Configuration¶
As discussed in this document normally the DeepSpeed configuration is passed as a path to a json file, but if you’re
not using the command line interface to configure the training, and instead instantiate the
Trainer
via TrainingArguments
then for the deepspeed
argument you can
pass a nested dict
. This allows you to create the configuration on the fly and doesn’t require you to write it to
the file system before passing it to TrainingArguments
.
To summarize you can do:
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
or:
ds_config_dict=dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
ZeRO¶
The zero_optimization
section of the configuration file is the most important part (docs), since that is where you define
which ZeRO stages you want to enable and how to configure them.
{
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true,
"cpu_offload": true
}
}
Notes:
enabling
cpu_offload
should reduce GPU RAM usage (it requires"stage": 2
)"overlap_comm": true
trades off increased GPU RAM usage to lower all-reduce latency.overlap_comm
uses 4.5x theallgather_bucket_size
andreduce_bucket_size
values. So if they are set to 5e8, this requires a 9GB footprint (5e8 x 2Bytes x 2 x 4.5
). Therefore, if you have a GPU with 8GB or less RAM, to avoid getting OOM-errors you will need to reduce those parameters to about 2e8
, which would require 3.6GB (see the small calculation sketch after these notes). You will want to do the same on larger-capacity GPUs as well, if you’re starting to hit OOM. When reducing these buffers you’re trading communication speed for more available GPU RAM. The smaller the buffer size, the slower the communication, and the more GPU RAM will be available to other tasks. So if a bigger batch size is important, a slightly slower training time could be a good trade.
This section has to be configured exclusively via DeepSpeed configuration - the Trainer
provides
no equivalent command line arguments.
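For the 9GB figure quoted in the notes above, here is the arithmetic as a tiny Python sketch (the 2-bytes, x2 and 4.5x factors are taken directly from the note):
# bucket_size elements x 2 bytes (fp16) x 2 x 4.5, as stated in the note above.
def overlap_comm_footprint_gb(bucket_size: float) -> float:
    return bucket_size * 2 * 2 * 4.5 / 1e9

print(overlap_comm_footprint_gb(5e8))  # 9.0  -> ~9GB footprint
print(overlap_comm_footprint_gb(2e8))  # 3.6  -> ~3.6GB footprint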
Optimizer and Scheduler¶
As long as you don’t enable cpu_offload
you can mix and match DeepSpeed and HuggingFace schedulers and optimizers,
with the exception of using the combination of HuggingFace scheduler and DeepSpeed optimizer:
Combos       | HF Scheduler | DS Scheduler
HF Optimizer | Yes          | Yes
DS Optimizer | No           | Yes
If cpu_offload
is enabled you must use both DeepSpeed scheduler and DeepSpeed optimizer.
Optimizer¶
DeepSpeed’s main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are
thus recommended to be used. DeepSpeed can, however, also import other optimizers from torch
. The full documentation is here.
If you don’t configure the optimizer
entry in the configuration file, the Trainer
will
automatically set it to AdamW
and will use the supplied values or the defaults for the following command line
arguments: --learning_rate
, --adam_beta1
, --adam_beta2
, --adam_epsilon
and --weight_decay
.
Here is an example of the pre-configured optimizer
entry for AdamW
:
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
Note that the command line arguments will override the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get overridden are:
lr
with the value of--learning_rate
betas
with the value of--adam_beta1 --adam_beta2
eps
with the value of--adam_epsilon
weight_decay
with the value of--weight_decay
Therefore please remember to tune the shared hyperparameters on the command line.
If you want to use another optimizer which is not listed above, you will have to add "zero_allow_untested_optimizer":
true
to the top level configuration.
If you want to use one of the officially supported optimizers, configure them explicitly in the configuration file, and
make sure to adjust the values, e.g. if you use Adam you will want weight_decay
around 0.01
.
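As a hedged sketch (assuming the dict form of the deepspeed argument described in “Passing Configuration” above), omitting the optimizer entry lets the Trainer configure AdamW from the usual arguments:
from transformers import TrainingArguments

# No "optimizer" entry in the DeepSpeed config, so the Trainer sets up AdamW
# from learning_rate / adam_beta1 / adam_beta2 / adam_epsilon / weight_decay.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

training_args = TrainingArguments(
    output_dir="output_dir",
    learning_rate=3e-5,
    weight_decay=0.01,
    fp16=True,
    deepspeed=ds_config,
)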
Scheduler¶
DeepSpeed supports LRRangeTest, OneCycle, WarmupLR and WarmupDecayLR LR schedulers. The full documentation is here.
Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed:
WarmupLR
via--lr_scheduler_type constant_with_warmup
WarmupDecayLR
via--lr_scheduler_type linear
. This is also the default value for--lr_scheduler_type
, therefore, if you don’t configure the scheduler this is the scheduler that will get configured by default.
If you don’t configure the scheduler
entry in the configuration file, the Trainer
will use
the values of --lr_scheduler_type
, --learning_rate
and --warmup_steps
to configure a 🤗 Transformers version
of it.
Here is an example of the pre-configured scheduler
entry for WarmupLR
:
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
Note that the command line arguments will override the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get overridden are:
warmup_max_lr
with the value of--learning_rate
warmup_num_steps
with the value of--warmup_steps
total_num_steps
with either the value of--max_steps
or if it is not provided, derived automatically at run time based on the environment and the size of the dataset and other command line arguments (needed forWarmupDecayLR
).
Therefore please remember to tune the shared hyperparameters on the command line.
For example, for WarmupDecayLR
, you can use the following entry:
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": 10,
"last_batch_iteration": -1,
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
and warmup_max_lr
, warmup_num_steps
and total_num_steps
will be corrected at loading time.
Automatic Mixed Precision¶
You can work with FP16 in one of the following ways:
If you want to use an equivalent of the Pytorch native amp, you can either configure the fp16
entry in the
configuration file, or use the following command line arguments: --fp16 --fp16_backend amp
.
Here is an example of the fp16
configuration:
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
}
}
If you want to use NVIDIA’s apex instead, you can either configure the amp
entry in the configuration file, or
use the following command line arguments: --fp16 --fp16_backend apex --fp16_opt_level O1
.
Here is an example of the amp
configuration:
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
Gradient Accumulation¶
While normally DeepSpeed gets gradient accumulation configured with:
{
"gradient_accumulation_steps": 3,
}
in this case, to enable gradient accumulation, pass the command line --gradient_accumulation_steps argument as normal and it will get injected into the DeepSpeed configuration.
If you try to add it directly to the configuration file, you will receive an error from the Trainer - this is because this setting is needed by the Trainer too, and so this approach ensures that there is a single way of setting this value and thus avoids potential subtle errors.
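A minimal sketch of the same rule from Python (the config file path is a placeholder): set gradient_accumulation_steps on the Trainer side only, and leave it out of the DeepSpeed file.
from transformers import TrainingArguments

# gradient_accumulation_steps is injected into the DeepSpeed configuration by
# the Trainer; do not also set it inside ds_config.json.
training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=3,
    deepspeed="ds_config.json",
)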
Gradient Clipping¶
If you don’t configure the gradient_clipping
entry in the configuration file, the Trainer
will use the value of the --max_grad_norm
command line argument to set it.
Here is an example of the gradient_clipping
configuration:
{
"gradient_clipping": 1.0,
}
Notes¶
DeepSpeed works with the PyTorch
Trainer
but not TFTrainer
.While DeepSpeed has a pip installable PyPI package, it is highly recommended that it gets installed from source to best match your hardware and also if you need to enable certain features, like 1-bit Adam, which aren’t available in the pypi distribution.
You don’t have to use the
Trainer
to use DeepSpeed with 🤗 Transformers - you can use any model with your own trainer, and you will have to adapt the latter according to the DeepSpeed integration instructions.
Main DeepSpeed Resources¶
Papers:
ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
ZeRO-Offload: Democratizing Billion-Scale Model Training
Finally, please remember that the HuggingFace Trainer
only integrates DeepSpeed, therefore if you
have any problems or questions with regard to DeepSpeed usage, please file an issue with the DeepSpeed GitHub.