Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of distributed environment.
Values:
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of supported experiment tracker.
Values:
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of precision used on floating point values.
Values:
These include data operations that mimic the same torch ops but can be used on distributed processes.
( tensor, from_process: int = 0 )
Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.
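A short sketch of typical usage, assuming this entry documents accelerate.utils.broadcast and that the script is started with accelerate launch:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import broadcast

accelerator = Accelerator()
# Each process starts with its own value...
stats = {"threshold": torch.rand(1, device=accelerator.device)}
# ...and ends up with the value held on process 0.
stats = broadcast(stats, from_process=0)
```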
( data, dim = 0 )
Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.
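A minimal sketch, assuming this entry documents accelerate.utils.concatenate; no inter-process communication is involved, so it runs on a single process:

```python
import torch
from accelerate.utils import concatenate

# Two batches with the same nested structure and tensor shapes...
batches = [
    {"input_ids": torch.ones(2, 8, dtype=torch.long), "labels": torch.zeros(2)},
    {"input_ids": torch.ones(2, 8, dtype=torch.long), "labels": torch.ones(2)},
]
# ...become a single batch, concatenated along dim 0.
merged = concatenate(batches, dim=0)
print(merged["input_ids"].shape)  # torch.Size([4, 8])
```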
( tensor )
Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.
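For illustration, assuming this entry documents accelerate.utils.gather and the script is started with accelerate launch:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import gather

accelerator = Accelerator()
# Each process computes its own predictions...
preds = torch.arange(4, device=accelerator.device) + accelerator.process_index
# ...and receives the concatenation of the predictions from every process.
all_preds = gather(preds)
```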
( tensor, dim = 0, pad_index = 0, pad_first = False )
Parameters
tensor (torch.Tensor) — The data to gather.
dim (int, optional, defaults to 0) — The dimension on which to pad.
pad_index (int, optional, defaults to 0) — The value with which to pad.
pad_first (bool, optional, defaults to False) — Whether to pad at the beginning or the end.
Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.
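A short sketch combining padding with a gather, assuming this entry documents accelerate.utils.pad_across_processes:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import gather, pad_across_processes

accelerator = Accelerator()
# The sequence length may differ from process to process...
seq_len = 8 + accelerator.process_index
tokens = torch.ones(4, seq_len, dtype=torch.long, device=accelerator.device)
# ...so pad every tensor to the largest length before gathering.
tokens = pad_across_processes(tokens, dim=1, pad_index=0)
all_tokens = gather(tokens)
```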
( tensor, reduction = 'mean' )
Recursively reduce the tensors in a nested list/tuple/dictionary of tensors across all processes, applying a given operation (the mean by default).
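As an illustration, assuming this entry documents accelerate.utils.reduce:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import reduce

accelerator = Accelerator()
loss = torch.tensor(0.5, device=accelerator.device)
# Average the value across all processes (use reduction="sum" to sum instead).
mean_loss = reduce(loss, reduction="mean")
```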
( tensor, device, non_blocking = False )
Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.
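A minimal sketch, assuming this entry documents accelerate.utils.send_to_device:

```python
import torch
from accelerate.utils import send_to_device

batch = {"input_ids": torch.ones(2, 8, dtype=torch.long), "attention_mask": torch.ones(2, 8)}
# Every tensor in the nested structure is moved to the target device.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
batch = send_to_device(batch, device)
```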
These functionalities check the state of the current working environment, including information about the operating system itself, what it can support, and whether particular dependencies are installed.
Checks if bf16 is supported, optionally ignoring the TPU.
( operation: str, version: str )
Compares the current PyTorch version to a given reference with an operation.
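A short example, assuming this entry documents accelerate.utils.is_torch_version:

```python
from accelerate.utils import is_torch_version

# Guard a code path behind a minimum PyTorch version.
if is_torch_version(">=", "1.12.0"):
    pass  # use the newer API
else:
    pass  # fall back to the older one
```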
Checks if torch_xla is installed and potentially if a TPU is in the environment.
( mixed_precision = 'no', save_location: str = default_json_config_file, dynamo_backend = 'no' )
Parameters
mixed_precision (str, optional, defaults to "no") — Mixed precision to use. Should be one of "no", "fp16", or "bf16".
save_location (str, optional, defaults to default_json_config_file) — Optional custom save location. Should be passed to --config_file when using accelerate launch. The default location is inside the huggingface cache folder (~/.cache/huggingface), but it can be overridden by setting the HF_HOME environment variable, followed by accelerate/default_config.yaml.
Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine.
When setting up 🤗 Accelerate for the first time, rather than running accelerate config, [~utils.write_basic_config] can be used as an alternative for quick configuration.
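For example, in a setup script (a minimal sketch using accelerate.utils.write_basic_config):

```python
from accelerate.utils import write_basic_config

# Write a default single-machine config with fp16 mixed precision,
# instead of answering the interactive `accelerate config` questionnaire.
write_basic_config(mixed_precision="fp16")
```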
( max_memory: typing.Union[typing.Dict[typing.Union[int, str], typing.Union[int, str]], NoneType] = None )
Get the maximum memory available per device if nothing is passed; otherwise, convert the string values of the passed dictionary to integers.
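A small sketch, assuming this entry documents accelerate.utils.get_max_memory:

```python
from accelerate.utils import get_max_memory

# With no argument: the available memory per GPU plus the CPU, in bytes,
# e.g. {0: 21045837824, "cpu": 67429130240}.
print(get_max_memory())

# String values such as "10GiB" are converted to integers (assumes a GPU at index 0).
print(get_max_memory({0: "10GiB", "cpu": "30GiB"}))
```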
( function: callable = None, starting_batch_size: int = 128 )
A basic decorator that will try to execute function. If it fails from exceptions related to out-of-memory or CUDNN, the batch size is cut in half and passed to function again. function must take in a batch_size parameter as its first argument.
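A minimal sketch, assuming this entry documents accelerate.utils.find_executable_batch_size:

```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def training_loop(batch_size):
    # Build the dataloaders/model for this batch size and train.
    # On a CUDA out-of-memory (or CUDNN) error, the decorator frees memory,
    # halves batch_size and calls this function again.
    ...

training_loop()  # called without arguments; batch_size is injected by the decorator
```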
These utilities relate to interacting with PyTorch models.
( model, keep_fp32_wrapper: bool = False ) → torch.nn.Module
Extract a model from its distributed containers.
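For illustration, assuming this entry documents accelerate.utils.extract_model_from_parallel:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import extract_model_from_parallel

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 2))  # may wrap the model (e.g. in DDP)
unwrapped = extract_model_from_parallel(model)      # the plain torch.nn.Linear again
torch.save(unwrapped.state_dict(), "model.bin")
```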
( modules: typing.List[typing.Tuple[str, torch.nn.modules.module.Module]], module_sizes: typing.Dict[str, int], no_split_module_classes: typing.List[str] ) → Tuple[int, List[str]]
Parameters
modules (List[Tuple[str, torch.nn.Module]]) — The list of named modules where we want to determine the maximum layer size.
module_sizes (Dict[str, int]) — A dictionary mapping each layer name to its size (as generated by compute_module_sizes).
no_split_module_classes (List[str]) — A list of class names for layers we don't want to be split.
Returns
Tuple[int, List[str]] — The maximum size of a layer with the list of layer names realizing that maximum size.
Utility function that will scan a list of named modules and return the maximum size used by one full layer. The definition of a layer being:
- a module with no direct children (just parameters and buffers)
- a module whose class name is in the list no_split_module_classes
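A minimal sketch, assuming this entry documents accelerate.utils.get_max_layer_size and that compute_module_sizes is importable from the same module:

```python
import torch.nn as nn
from accelerate.utils import compute_module_sizes, get_max_layer_size

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
sizes = compute_module_sizes(model)  # maps each module/parameter name to its size
named = [(name, module) for name, module in model.named_modules() if name != ""]
max_size, layer_names = get_max_layer_size(named, sizes, no_split_module_classes=[])
```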
( save_dir: typing.Union[str, os.PathLike], state_dict: typing.Dict[str, torch.Tensor] )
Offload a state dict in a given folder.
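A short sketch, assuming this entry documents accelerate.utils.offload_state_dict:

```python
import torch
from accelerate.utils import offload_state_dict

state_dict = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
# Each tensor is written to disk inside "offload_dir" so the weights
# no longer need to be kept in RAM.
offload_state_dict("offload_dir", state_dict)
```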
These include general utilities that should be used when working in parallel.
( model, keep_fp32_wrapper: bool = False ) → torch.nn.Module
Extract a model from its distributed containers.
Save the data to disk. Use in place of torch.save().
Introduces a blocking point in the script, making sure all processes have reached this point before continuing.
Make sure all processes will reach this instruction, otherwise one of your processes will hang forever.
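For example, assuming this entry documents accelerate.utils.wait_for_everyone:

```python
from accelerate import Accelerator
from accelerate.utils import wait_for_everyone

accelerator = Accelerator()
if accelerator.is_main_process:
    ...  # e.g. download or preprocess the dataset only once
wait_for_everyone()  # every process blocks here until all of them have arrived
```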
These utilities relate to setting and synchronizing of all the random states.
( seed: int, device_specific: bool = False )
Helper function for reproducible behavior to set the seed in random, numpy, and torch.
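A minimal sketch, assuming this entry documents accelerate.utils.set_seed:

```python
from accelerate.utils import set_seed

# Seeds Python's `random`, numpy and torch (plus CUDA/TPU when present).
# With device_specific=True the seed is offset by the process index so that
# each process draws different random numbers.
set_seed(42)
```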
( rng_type: typing.Optional[accelerate.utils.dataclasses.RNGType] = None, generator: typing.Optional[torch._C.Generator] = None )
( rng_types: typing.List[typing.Union[str, accelerate.utils.dataclasses.RNGType]], generator: typing.Optional[torch._C.Generator] = None )