The Accelerator is the main class provided by 🤗 Accelerate. It serves as the main entry point for the API.
To quickly adapt your script to work on any kind of setup with 🤗 Accelerate, just:
1. Initialize an Accelerator object (that we will call accelerator throughout this page) as early as possible in your script.
2. Pass your dataloader(s), model(s), optimizer(s), and scheduler(s) to the prepare() method.
3. Remove all the .cuda() or .to(device) calls from your code and let the accelerator handle the device placement for you. Step three is optional, but considered a best practice.
4. Replace loss.backward() in your code with accelerator.backward(loss).
5. Gather your predictions and labels before storing them or using them for metric computation, using gather(). Step five is mandatory when using distributed evaluation.
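Putting these steps together, a minimal training loop adapted for 🤗 Accelerate might look like the following sketch (model, optimizer, training_dataloader, scheduler, and loss_function stand in for your own objects):

from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for input, label in training_dataloader:
    optimizer.zero_grad()
    predictions = model(input)  # no .cuda() or .to(device) needed
    loss = loss_function(predictions, label)
    accelerator.backward(loss)  # instead of loss.backward()
    optimizer.step()
    scheduler.step()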
In most cases this is all that is needed. The next section lists a few more advanced use cases and nice features you should search for and replace with the corresponding methods of your accelerator:
print statements should be replaced by accelerator.print() to be printed once per process:
- print("My thing I want to print!")
+ accelerator.print("My thing I want to print!")
For statements that should be executed once per server, use is_local_main_process:
if accelerator.is_local_main_process:
do_thing_once_per_server()
A function can be wrapped using the on_local_main_process() function to achieve the same behavior on a function’s execution:
@accelerator.on_local_main_process
def do_my_thing():
"Something done once per server"
do_thing_once_per_server()
For statements that should only ever be executed once, use is_main_process:
if accelerator.is_main_process:
do_thing_once()
A function can be wrapped using the on_main_process() function to achieve the same behavior on a function’s execution:
@accelerator.on_main_process
def do_my_thing():
"Something done once per server"
do_thing_once()
If a function should be run on a specific overall or local process index, there are similar decorators to achieve this:
@accelerator.on_local_process(local_process_idx=0)
def do_my_thing():
"Something done on process index 0 on each server"
do_thing_on_index_zero_on_each_server()
@accelerator.on_process(process_index=0)
def do_my_thing():
"Something done on process index 0"
do_thing_on_index_zero()
Use wait_for_everyone() to make sure all processes have reached that point before continuing (useful before saving a model, for instance).
Use unwrap_model() before saving to remove all special model wrappers added during the distributed process.
model = MyModel()
model = accelerator.prepare(model)
# Unwrap
model = accelerator.unwrap_model(model)
Use save() instead of torch.save
:
state_dict = model.state_dict()
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save(state_dict, "my_state.pkl")
Use clip_grad_norm_() instead of torch.nn.utils.clip_grad_norm_ and clip_grad_value_() instead of torch.nn.utils.clip_grad_value_.
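For instance, clipping inside the training loop might look like this sketch (max_grad_norm and clip_value are placeholder values of your choice):

accelerator.backward(loss)
if accelerator.sync_gradients:
    accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
    # or, to clip by value instead:
    # accelerator.clip_grad_value_(model.parameters(), clip_value)
optimizer.step()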
To perform gradient accumulation, use accumulate() and specify the number of gradient_accumulation_steps. This will also automatically ensure the gradients are synced or unsynced when on multi-device training, check if the step should actually be performed, and auto-scale the loss:
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)
  for (input, label) in training_dataloader:
+     with accelerator.accumulate(model):
          predictions = model(input)
          loss = loss_function(predictions, label)
          accelerator.backward(loss)
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
(
  device_placement: bool = True,
  split_batches: bool = False,
  mixed_precision: typing.Union[accelerate.utils.dataclasses.PrecisionType, str] = None,
  gradient_accumulation_steps: int = 1,
  cpu: bool = False,
  deepspeed_plugin: DeepSpeedPlugin = None,
  fsdp_plugin: FullyShardedDataParallelPlugin = None,
  megatron_lm_plugin: MegatronLMPlugin = None,
  rng_types: typing.Union[typing.List[typing.Union[str, accelerate.utils.dataclasses.RNGType]], NoneType] = None,
  log_with: typing.Union[typing.List[typing.Union[str, accelerate.utils.dataclasses.LoggerType, accelerate.tracking.GeneralTracker]], NoneType] = None,
  project_dir: typing.Union[str, os.PathLike, NoneType] = None,
  project_config: typing.Optional[accelerate.utils.dataclasses.ProjectConfiguration] = None,
  logging_dir: typing.Union[str, os.PathLike, NoneType] = None,
  dispatch_batches: typing.Optional[bool] = None,
  even_batches: bool = True,
  step_scheduler_with_optimizer: bool = True,
  kwargs_handlers: typing.Optional[typing.List[accelerate.utils.dataclasses.KwargsHandler]] = None,
  dynamo_backend: typing.Union[accelerate.utils.dataclasses.DynamoBackend, str] = None
)
Parameters

- device_placement (bool, optional, defaults to True) — Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model, etc.).
- split_batches (bool, optional, defaults to False) — Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If True, the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the num_processes you are using. If False, the actual batch size used will be the one set in your script multiplied by the number of processes.
- mixed_precision (str, optional) — Whether or not to use mixed precision training (fp16 or bfloat16). Choose from 'no', 'fp16', 'bf16'. Will default to the value in the environment variable ACCELERATE_MIXED_PRECISION, which will use the default value in the accelerate config of the current system or the flag passed with the accelerate.launch command. 'fp16' requires PyTorch 1.6 or higher. 'bf16' requires PyTorch 1.10 or higher.
- gradient_accumulation_steps (int, optional, defaults to 1) — The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with Accelerator.accumulate.
- cpu (bool, optional) — Whether or not to force the script to execute on CPU. Will ignore GPU available if set to True and force the execution on one process only.
- deepspeed_plugin (DeepSpeedPlugin, optional) — Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured directly using accelerate config.
- fsdp_plugin (FullyShardedDataParallelPlugin, optional) — Tweak your FSDP related args using this argument. This argument is optional and can be configured directly using accelerate config.
- megatron_lm_plugin (MegatronLMPlugin, optional) — Tweak your MegatronLM related args using this argument. This argument is optional and can be configured directly using accelerate config.
- rng_types (list of str or RNGType) — The list of random number generators to synchronize at the beginning of each iteration in your prepared dataloaders. Should be one or several of:
  - "torch": the base torch random number generator
  - "cuda": the CUDA random number generator (GPU only)
  - "xla": the XLA random number generator (TPU only)
  - "generator": the torch.Generator of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type.
  Will default to ["torch"] for PyTorch versions <= 1.5.1 and ["generator"] for PyTorch versions >= 1.6.
- log_with (list of str, LoggerType or GeneralTracker, optional) — A list of loggers to be set up for experiment tracking. Should be one or several of: "all", "tensorboard", "wandb", "comet_ml". If "all" is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of GeneralTracker for custom trackers, and can be combined with "all".
- project_config (ProjectConfiguration, optional) — A configuration for how saving the state can be handled.
- project_dir (str, os.PathLike, optional) — A path to a directory for storing data such as logs of locally-compatible loggers and potentially saved checkpoints.
- dispatch_batches (bool, optional) — If set to True, the dataloader prepared by the Accelerator is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True for a DataLoader whose underlying dataset is an IterableDataset, False otherwise.
- even_batches (bool, optional, defaults to True) — If set to True, in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers.
- step_scheduler_with_optimizer (bool, optional, defaults to True) — Set True if the learning rate scheduler is stepped at the same time as the optimizer, False if only done under certain circumstances (at the end of each epoch, for instance).
- kwargs_handlers (List[KwargsHandler], optional) — A list of KwargsHandler to customize how the objects related to distributed training or mixed precision are created. See kwargs for more information.
- dynamo_backend (str or DynamoBackend, optional, defaults to "no") — Set to one of the possible dynamo backends to optimize your training with torch dynamo.
Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training.
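As a quick illustration (a sketch; the argument values are arbitrary), an Accelerator configured for bf16 mixed precision with gradient accumulation could be created like this:

>>> from accelerate import Accelerator

>>> accelerator = Accelerator(mixed_precision="bf16", gradient_accumulation_steps=2)
>>> device = accelerator.device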
Available attributes:
- device (torch.device) — The device to use.
- local_process_index (int) — The process index on the current machine.
- mixed_precision (str) — The configured mixed precision mode.
- num_processes (int) — The total number of processes used for training.
- optimizer_step_was_skipped (bool) — Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which case the learning rate should not be changed.
- process_index (int) — The overall index of the current process among all processes.
- sync_gradients (bool) — Whether the gradients are currently being synced across all processes.
- use_distributed (bool) — Whether the current configuration is for distributed training.

( model )
A context manager that will lightly wrap around and perform gradient accumulation automatically.
Example:
>>> from accelerate import Accelerator
>>> accelerator = Accelerator(gradient_accumulation_steps=2)
>>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)
>>> for input, output in dataloader:
...     with accelerator.accumulate(model):
...         outputs = model(input)
...         loss = loss_func(outputs)
...         accelerator.backward(loss)
...         optimizer.step()
...         scheduler.step()
...         optimizer.zero_grad()
Will apply automatic mixed precision inside the block under this context manager, if it is enabled. Nothing different will happen otherwise.
Scales the gradients in accordance with Accelerator.gradient_accumulation_steps and calls the correct backward() based on the configuration.

Should be used in lieu of loss.backward().
Alias for Accelerator.free_memory; releases all references to the internal objects stored and calls the garbage collector. You should call this method between two trainings with different models/optimizers.
( parameters, max_norm, norm_type = 2 ) → torch.Tensor

Returns

torch.Tensor — Total norm of the parameter gradients (viewed as a single vector).
Should be used in place of torch.nn.utils.clip_grad_norm_.
Example:
>>> from accelerate import Accelerator
>>> accelerator = Accelerator(gradient_accumulation_steps=2)
>>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)
>>> for (input, target) in dataloader:
... optimizer.zero_grad()
... output = model(input)
... loss = loss_func(output, target)
... accelerator.backward(loss)
... if accelerator.sync_gradients:
... accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
... optimizer.step()
Should be used in place of torch.nn.utils.clip_grad_value_.
Example:
>>> from accelerate import Accelerator
>>> accelerator = Accelerator(gradient_accumulation_steps=2)
>>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler)
>>> for (input, target) in dataloader:
... optimizer.zero_grad()
... output = model(input)
... loss = loss_func(output, target)
... accelerator.backward(loss)
... if accelerator.sync_gradients:
... accelerator.clip_grad_value_(model.parameters(), clip_value)
... optimizer.step()
Runs any special end training behaviors, such as stopping trackers on the main process only. Should always be called at the end of your script if using experiment tracking.
Will release all references to the internal objects stored and call the garbage collector. You should call this method between two trainings with different models/optimizers.
( tensor ) → torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor

Parameters

- tensor (torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor) — The tensors to gather across all processes.

Returns

torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor — The gathered tensor(s). Note that the first dimension of the result is num_processes multiplied by the first dimension of the input tensors.
Gather the values in tensor across all processes and concatenate them on the first dimension. Useful to regroup the predictions from all processes when doing evaluation.
Note: This gather happens in all processes.
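A minimal sketch of gathering a per-process tensor (assuming the script is launched with several processes):

>>> import torch

>>> process_tensor = torch.tensor([accelerator.process_index], device=accelerator.device)
>>> gathered_tensor = accelerator.gather(process_tensor)
>>> # with 4 processes, gathered_tensor is tensor([0, 1, 2, 3])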
( tensor )
Gathers tensor and potentially drops duplicates in the last batch if on a distributed system. Should be used for gathering the inputs and targets for metric calculation.
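A sketch of how this is typically used during evaluation (metric here stands for any object with an add_batch-style API, such as one from 🤗 Evaluate; the dataloader and model are placeholders):

>>> for inputs, targets in validation_dataloader:
...     predictions = model(inputs).argmax(dim=-1)
...     all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
...     metric.add_batch(predictions=all_predictions, references=all_targets)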
( model, unwrap = True )

Parameters

- model (torch.nn.Module) — A PyTorch model sent through Accelerator.prepare().
- unwrap (bool, optional, defaults to True) — Whether to return the original underlying state_dict of model or to return the wrapped state_dict.

Returns the state dictionary of a model sent through Accelerator.prepare() in full precision.
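For example, retrieving a full-precision state dict for saving (a sketch; the filename is arbitrary):

>>> model = accelerator.prepare(model)
>>> state_dict = accelerator.get_state_dict(model)
>>> accelerator.save(state_dict, "my_model.bin")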
( name: str )
Returns a tracker from self.trackers based on name on the main process only.
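For example, assuming a wandb tracker was set up via log_with, the underlying tracker can be fetched like so (a sketch; the project name is only illustrative):

>>> accelerator = Accelerator(log_with="wandb")
>>> accelerator.init_trackers("my_project")
>>> wandb_tracker = accelerator.get_tracker("wandb")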
( project_name: str, config: typing.Optional[dict] = None, init_kwargs: typing.Optional[dict] = {} )

Parameters

- project_name (str) — The name of the project. All trackers will save their data based on this.
- config (dict, optional) — Optional starting configuration to be logged.
- init_kwargs (dict, optional) — A nested dictionary of kwargs to be passed to a specific tracker's __init__ function, keyed by the tracker's name.

Initializes a run for all trackers stored in self.log_with, potentially with starting configurations.
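A sketch of initializing trackers with a starting configuration and tracker-specific init kwargs (the project name, config values, and wandb kwargs shown are only illustrative):

>>> accelerator = Accelerator(log_with="wandb")
>>> accelerator.init_trackers(
...     project_name="my_project",
...     config={"learning_rate": 1e-4, "batch_size": 32},
...     init_kwargs={"wandb": {"tags": ["baseline"]}},
... )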
( joinables, even_batches = None )

Parameters

- joinables (List[torch.distributed.algorithms.Joinable]) — A list of models or optimizers that subclass torch.distributed.algorithms.Joinable. Most commonly, a PyTorch Module that was prepared with Accelerator.prepare for DistributedDataParallel training.
- even_batches (bool, optional) — If set, this will override the value of even_batches set in the Accelerator. If it is not provided, the default Accelerator value will be used.

A context manager that facilitates distributed training or evaluation on uneven inputs, which acts as a wrapper around torch.distributed.algorithms.join. This is useful when the total batch size does not evenly divide the length of the dataset.

join_uneven_inputs is only supported for Distributed Data Parallel training on multiple GPUs. For any other configuration, this method will have no effect.

Overriding even_batches will not affect iterable-style data loaders.
Example:
>>> from accelerate import Accelerator
>>> accelerator = Accelerator(even_batches=True)
>>> ddp_model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
>>> with accelerator.join_uneven_inputs([ddp_model], even_batches=False):
... for input, output in dataloader:
... outputs = model(input)
... loss = loss_func(outputs)
... loss.backward()
... optimizer.step()
... optimizer.zero_grad()
( input_dir: str )
Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects.
Should only be used in conjunction with Accelerator.save_state().
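A sketch of a save/restore round trip within the same training setup (the checkpoint directory name is arbitrary):

>>> accelerator = Accelerator()
>>> model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
>>> accelerator.save_state("my_checkpoint")
>>> # ... later, with the same objects prepared ...
>>> accelerator.load_state("my_checkpoint")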
Lets the local main process go inside a with block. The other processes will enter the with block after the local main process exits.
( values: dict, step: typing.Optional[int] = None, log_kwargs: typing.Optional[dict] = {} )

Parameters

- values (dict) — Values should be a dictionary-like object containing only types int, float, or str.
- step (int, optional) — The run step. If included, the log will be affiliated with this step.
- log_kwargs (dict, optional) — A nested dictionary of kwargs to be passed to a specific tracker's log function, keyed by the tracker's name.

Logs values to all stored trackers in self.trackers on the main process only.
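For example, logging training metrics at a given step (the values shown are placeholders):

>>> accelerator.init_trackers("my_project")
>>> accelerator.log({"train_loss": 0.25, "epoch": 1}, step=100)
>>> accelerator.end_training()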
Lets the main process go first inside a with block.
The other processes will enter the with block after the main process exits.
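A sketch of the typical pattern, for instance when a dataset download or preprocessing step should run on the main process before the other processes use the cached result (load_and_preprocess_dataset is a hypothetical helper):

>>> with accelerator.main_process_first():
...     dataset = load_and_preprocess_dataset()  # hypothetical helper; runs on the main process first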
( model )
A context manager to disable gradient synchronizations across DDP processes by calling torch.nn.parallel.DistributedDataParallel.no_sync.

If model is not in DDP, this context manager does nothing.
Example:
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer)
>>> input_a = next(iter(dataloader))
>>> input_b = next(iter(dataloader))
>>> with accelerator.no_sync(model):
... outputs = model(input_a)
... loss = loss_func(outputs)
... accelerator.backward(loss)
... # No synchronization across processes, only accumulate gradients
>>> outputs = model(input_b)
>>> accelerator.backward(loss)
>>> # Synchronization across all processes
>>> optimizer.step()
>>> optimizer.zero_grad()
A decorator that will run the decorated function on the last process only.
A decorator that will run the decorated function on the local main process only.
A decorator that will run the decorated function on a given local process index only.
A decorator that will run the decorated function on the main process only.
A decorator that will run the decorated function on a given process index only.
( tensor, dim = 0, pad_index = 0, pad_first = False )

Parameters

- tensor (torch.Tensor) — The data to gather.
- dim (int, optional, defaults to 0) — The dimension on which to pad.
- pad_index (int, optional, defaults to 0) — The value with which to pad.
- pad_first (bool, optional, defaults to False) — Whether to pad at the beginning or the end.
Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.
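For example, when each process produces sequences of different lengths, padding before gathering might look like this sketch (predictions is a placeholder tensor):

>>> # e.g. process 0 holds a tensor of shape (4, 10), process 1 holds (4, 12)
>>> padded = accelerator.pad_across_processes(predictions, dim=1, pad_index=0)
>>> gathered = accelerator.gather(padded)  # now safe: all tensors share the same shape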
( *args, device_placement = None )

Parameters

- *args — Any of the following objects:
  - torch.utils.data.DataLoader: PyTorch Dataloader
  - torch.nn.Module: PyTorch Module
  - torch.optim.Optimizer: PyTorch Optimizer
  - torch.optim.lr_scheduler.LRScheduler: PyTorch LR Scheduler
- device_placement (List[bool], optional) — Used to customize whether automatic device placement should be performed for each object passed. Needs to be a list of the same length as args.

Prepare all objects passed in args for distributed training and mixed precision, then return them in the same order.

You don't need to prepare a model if you only use it for inference without any kind of mixed precision.
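A typical call (a sketch, assuming model, optimizer, train_dataloader, and scheduler are already defined) returns the objects in the same order they were passed in:

>>> model, optimizer, train_dataloader, scheduler = accelerator.prepare(
...     model, optimizer, train_dataloader, scheduler
... )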
( data_loader: DataLoader device_placement = None )
Prepares a PyTorch DataLoader for training in any distributed setup. It is recommended to use Accelerator.prepare() instead.
( model: Module device_placement = None )
Prepares a PyTorch model for training in any distributed setup. It is recommended to use Accelerator.prepare() instead.
( optimizer: Optimizer device_placement = None )
Prepares a PyTorch Optimizer for training in any distributed setup. It is recommended to use Accelerator.prepare() instead.
( scheduler: _LRScheduler )
Prepares a PyTorch Scheduler for training in any distributed setup. It is recommended to use Accelerator.prepare() instead.
Use in place of print() to only print once per server.
( tensor, reduction = 'sum' ) → torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor

Parameters

- tensor (torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor) — The tensors to reduce across all processes.
- reduction (str, optional, defaults to "sum") — A reduction type, can be one of 'sum', 'mean', or 'none'. If 'none', will not perform any operation.

Returns

torch.Tensor, or a nested tuple/list/dictionary of torch.Tensor — The reduced tensor(s).
Reduce the values in tensor across all processes based on reduction.
Note: All processes get the reduced value.
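For example, averaging a per-process loss value across all processes (a sketch; compute_loss is a hypothetical helper returning a scalar tensor):

>>> loss = compute_loss()  # hypothetical per-process scalar tensor
>>> mean_loss = accelerator.reduce(loss, reduction="mean")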
Makes note of objects and will save or load them during save_state or load_state.

These should be utilized when the state is being loaded or saved in the same script. It is not designed to be used in different scripts.

Every object must have a load_state_dict and state_dict function to be stored.
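A sketch of registering a custom object so it is included in save_state/load_state (MyEMAWrapper is a hypothetical object exposing state_dict() and load_state_dict(); the checkpoint directory name is arbitrary):

>>> accelerator = Accelerator()
>>> ema_model = MyEMAWrapper(model)  # hypothetical object with state_dict()/load_state_dict()
>>> accelerator.register_for_checkpointing(ema_model)
>>> accelerator.save_state("my_checkpoint")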
Save the object passed to disk once per machine. Use in place of torch.save.
( output_dir: str = None )
Saves the current states of the model, optimizer, scaler, RNG generators, and registered objects to a folder.
If a ProjectConfiguration was passed to the Accelerator object with automatic_checkpoint_naming enabled, then checkpoints will be saved to self.project_dir/checkpoints. If the number of current saves is greater than total_limit, then the oldest save is deleted. Each checkpoint is saved in a separate folder named checkpoint_<iteration>.

Otherwise they are just saved to output_dir.
Should only be used when wanting to save a checkpoint during training and restoring the state in the same environment.
( optimizer = None )
Parameters
- optimizer (torch.optim.Optimizer or List[torch.optim.Optimizer], optional) — The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers that were passed to prepare().
Unscale the gradients in mixed precision training with AMP. This is a noop in all other settings.
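A sketch of manually unscaling before a custom gradient operation when training with AMP (clip_grad_norm_/clip_grad_value_ already unscale for you; the model, inputs, and loss_fn names are placeholders):

>>> model, optimizer = accelerator.prepare(model, optimizer)
>>> outputs = model(inputs)
>>> loss = loss_fn(outputs, labels)
>>> accelerator.backward(loss)
>>> accelerator.unscale_gradients(optimizer=optimizer)
>>> # gradients are now unscaled and can be inspected or modified directly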
( model, keep_fp32_wrapper: bool = False )

Unwraps the model from the additional layer possibly added by prepare(). Useful before saving the model.
Will stop the execution of the current process until every other process has reached that point (so this does nothing when the script is only run in one process). Useful to do before saving a model.