Accelerator

The Accelerator is the main class provided by 🤗 Accelerate. It serves as the main entry point for the API. To quickly adapt your script to work on any kind of setup with 🤗 Accelerate, just follow the steps below (a minimal sketch combining them is shown after the list):

  1. Initialize an Accelerator object (that we will call accelerator in the rest of this page) as early as possible in your script.

  2. Pass along your model(s), optimizer(s), and dataloader(s) to the prepare() method.

  3. (Optional but best practice) Remove all the .cuda() or .to(device) calls in your code and let the accelerator handle device placement for you.

  4. Replace the loss.backward() in your code with accelerator.backward(loss).

  5. (Optional, when using distributed evaluation) Gather your predictions and labels with gather() before storing them or using them for metric computation.
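Putting these steps together, here is a minimal sketch of an adapted training loop. The model, optimizer, dataloader, and loss_function below are placeholders for objects your script already defines:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# model, optimizer and dataloader come from your existing script
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch            # already placed on the right device
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)         # replaces loss.backward()
    optimizer.step()
```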

This is all that is needed in most cases. For more advanced cases, or for a nicer experience, here are the functions you should search for and replace with the corresponding methods of your accelerator:

class accelerate.Accelerator(device_placement: bool = True, split_batches: bool = False, fp16: bool = None, cpu: bool = False, deepspeed_plugin: accelerate.utils.DeepSpeedPlugin = None, rng_types: Optional[List[Union[str, accelerate.utils.RNGType]]] = None, dispatch_batches: Optional[bool] = None, kwargs_handlers: Optional[List[accelerate.kwargs_handlers.KwargsHandler]] = None)[source]

Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training.

Parameters
  • device_placement (bool, optional, defaults to True) – Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model, etc…).

  • split_batches (bool, optional, defaults to False) – Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If True the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the num_processes you are using. If False, actual batch size used will be the one set in your script multiplied by the number of processes.

  • fp16 (bool, optional) – Whether or not to use mixed precision training. Will default to the value in the environment variable USE_FP16, which will use the default value in the accelerate config of the current system or the flag passed with the accelerate.launch command.

  • cpu (bool, optional) – Whether or not to force the script to execute on CPU. Will ignore any available GPU if set to True and force the execution on one process only.

  • deepspeed_plugin (DeepSpeedPlugin, optional) – Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured directly using accelerate config.

  • rng_types (list of str or RNGType) –

    The list of random number generators to synchronize at the beginning of each iteration in your prepared dataloaders. Should be one or several of:

    • "torch": the base torch random number generator

    • "cuda": the CUDA random number generator (GPU only)

    • "xla": the XLA random number generator (TPU only)

    • "generator": the torch.Generator of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type.

    Will default to ["torch"] for PyTorch versions <=1.5.1 and ["generator"] for PyTorch versions >= 1.6.

  • dispatch_batches (bool, optional) – If set to True, the dataloader prepared by the Accelerator is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True for DataLoader whose underlying dataset is an IterableDataset, False otherwise.

  • kwargs_handlers (list of KwargsHandler, optional) – A list of KwargsHandler to customize how the objects related to distributed training or mixed precision are created. See Kwargs Handlers for more information.

Attributes

  • device (torch.device) – The device to use.

  • state (AcceleratorState) – The distributed setup state.
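As an illustration, an accelerator can be created with a handful of the arguments above; the values below are arbitrary examples, not required settings:

```python
from accelerate import Accelerator

# Keep the batch size set in the script as the effective batch size, and
# synchronize both the base torch RNG and the sampler generator each iteration.
accelerator = Accelerator(
    split_batches=True,
    rng_types=["torch", "generator"],
)

print(accelerator.device)  # the torch.device assigned to this process
print(accelerator.state)   # the distributed setup state
```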

autocast()[source]

Will apply automatic mixed precision inside the block under this context manager, if it is enabled. Nothing different will happen otherwise.
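For example, a forward pass and loss computation that should benefit from mixed precision when it is enabled could be wrapped like this (loss_function is a placeholder from your own script):

```python
with accelerator.autocast():
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
accelerator.backward(loss)
```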

backward(loss, **kwargs)[source]

Use accelerator.backward(loss) in lieu of loss.backward().

clip_grad_norm_(parameters, max_norm, norm_type=2)[source]

Should be used in place of torch.nn.utils.clip_grad_norm_().
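A typical place for it is right after the backward pass and before the optimizer step; the max_norm value below is just an example:

```python
accelerator.backward(loss)
# Clip the gradients of the prepared model to an example maximum norm of 1.0
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```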

clip_grad_value_(parameters, clip_value)[source]

Should be used in place of torch.nn.utils.clip_grad_value_().

free_memory()[source]

Will release all references to the internal objects stored and call the garbage collector. You should call this method between two trainings with different models/optimizers.
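A sketch of how this could be used between two runs, assuming the script then rebuilds its objects and calls prepare() again:

```python
# First training finished: drop the references held by the accelerator
accelerator.free_memory()
del model, optimizer, dataloader

# ... create a new model/optimizer/dataloader here and call prepare() again
```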

gather(tensor)[source]

Gather the values in tensor across all processes and concatenate them on the first dimension. Useful to regroup the predictions from all processes when doing evaluation.

Note

This gather happens in all processes.

Parameters

tensor (torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor) – The tensors to gather across all processes.

Returns

The gathered tensor(s). Note that the first dimension of the result is num_processes multiplied by the first dimension of the input tensors.

Return type

torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor
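In a distributed evaluation loop, this typically looks like the sketch below (eval_dataloader comes from prepare(), and the argmax is just one example of turning logits into predictions):

```python
import torch

model.eval()
all_predictions, all_labels = [], []
for inputs, labels in eval_dataloader:
    with torch.no_grad():
        logits = model(inputs)
    predictions = logits.argmax(dim=-1)
    # Collect the predictions and labels computed by every process
    all_predictions.append(accelerator.gather(predictions))
    all_labels.append(accelerator.gather(labels))
```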

property is_local_main_process

True for one process per server.

property is_main_process

True for one process only.
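These properties are handy for restricting side effects such as logging or writing files to a single process. In the sketch below, log_to_tracker and write_local_report are hypothetical helpers:

```python
if accelerator.is_main_process:
    # Runs on exactly one process in the whole setup
    log_to_tracker(metrics)  # hypothetical helper

if accelerator.is_local_main_process:
    # Runs on exactly one process per server
    write_local_report(metrics)  # hypothetical helper
```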

local_main_process_first()[source]

Lets the local main process go first inside a with block.

The other processes will enter the with block after the local main process exits.

main_process_first()[source]

Lets the main process go first inside a with block.

The other processes will enter the with block after the main process exits.
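A common use is letting the main process download or preprocess a dataset so the other processes can read it from the cache afterwards. In this sketch, load_and_cache_dataset is a hypothetical helper:

```python
with accelerator.main_process_first():
    # The main process does the real work first; the others enter afterwards
    # and find the result already cached on disk.
    dataset = load_and_cache_dataset()  # hypothetical helper
```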

property optimizer_step_was_skipped

Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which case the learning rate should not be changed.
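For example, when using a learning-rate scheduler together with mixed precision, you can guard the scheduler step as follows:

```python
optimizer.step()
if not accelerator.optimizer_step_was_skipped:
    lr_scheduler.step()
```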

pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False)[source]

Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.

Parameters
  • tensor (nested list/tuple/dictionary of torch.Tensor) – The data to gather.

  • dim (int, optional, defaults to 0) – The dimension on which to pad.

  • pad_index (int, optional, defaults to 0) – The value with which to pad.

  • pad_first (bool, optional, defaults to False) – Whether to pad at the beginning or the end.
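A common pattern is padding variable-length predictions before gathering them; the pad_index value below is just an example:

```python
# Sequence lengths can differ between processes, so pad before gathering
predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)
labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)
predictions = accelerator.gather(predictions)
labels = accelerator.gather(labels)
```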

prepare(*args)[source]

Prepare all objects passed in args for distributed training and mixed precision, then return them in the same order.

Accepts the following types of objects:

  • torch.utils.data.DataLoader: PyTorch Dataloader

  • torch.nn.Module: PyTorch Module

  • torch.optim.Optimizer: PyTorch Optimizer
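For example, with a model, an optimizer, and two dataloaders from your own script, the prepared objects come back in the order they were passed:

```python
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
```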

print(*args, **kwargs)[source]

Use in place of print() to only print once per server.

save(obj, f)[source]

Save the object passed to disk once per machine. Use in place of torch.save.

Parameters
  • obj – The object to save.

  • f (str or os.PathLike) – Where to save the content of obj.

unscale_gradients(optimizer=None)[source]

Unscale the gradients in mixed precision training with AMP. This is a no-op in all other settings.

Parameters

optimizer (torch.optim.Optimizer or List[torch.optim.Optimizer], optional) – The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers that were passed to prepare().
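One situation where this is useful is working on the gradients manually with the raw torch utilities between the backward pass and the optimizer step. A sketch, with an arbitrary max_norm:

```python
import torch

accelerator.backward(loss)
# Bring the gradients back to their true scale before touching them
accelerator.unscale_gradients()
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```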

unwrap_model(model)[source]

Unwraps the model from the additional layer possibly added by prepare(). Useful before saving the model.

Parameters

model (torch.nn.Module) – The model to unwrap.
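A typical pattern is unwrapping the prepared model before saving its weights; the filename below is arbitrary:

```python
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), "model_weights.pt")
```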

wait_for_everyone()[source]

Will stop the execution of the current process until every other process has reached that point (so this does nothing when the script is only run in one process). Useful to do before saving a model.
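For example, to make sure every process has finished its work before the model is written to disk (the filename is arbitrary):

```python
accelerator.wait_for_everyone()
# Every process has reached this point, so it is safe to save now
accelerator.save(accelerator.unwrap_model(model).state_dict(), "final_weights.pt")
```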