Accelerate documentation

Helpful Utilities

Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.

Constants

Constants used throughout 🤗 Accelerate for reference

The following are constants used when utilizing Accelerator.save_state()

  • utils.MODEL_NAME: "pytorch_model"
  • utils.OPTIMIZER_NAME: "optimizer"
  • utils.RNG_STATE_NAME: "random_states"
  • utils.SCALER_NAME: "scaler.pt"
  • utils.SCHEDULER_NAME: "scheduler"

The following are constants used when utilizing Accelerator.save_model()

  • utils.WEIGHTS_NAME: "pytorch_model.bin"
  • utils.SAFE_WEIGHTS_NAME: "model.safetensors"
  • utils.WEIGHTS_INDEX_NAME: "pytorch_model.bin.index.json"
  • utils.SAFE_WEIGHTS_INDEX_NAME: "model.safetensors.index.json"
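
For example, these constants can be used to check which weights file Accelerator.save_model() produced in a given directory. A minimal sketch; the "my_checkpoint" directory name is illustrative:

import os

from accelerate.utils import SAFE_WEIGHTS_NAME, WEIGHTS_NAME

save_dir = "my_checkpoint"  # directory previously passed to accelerator.save_model()
saved_as_safetensors = os.path.isfile(os.path.join(save_dir, SAFE_WEIGHTS_NAME))
saved_as_pickle = os.path.isfile(os.path.join(save_dir, WEIGHTS_NAME))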

Data Classes

These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.

Standalone

These are standalone dataclasses used for checks, such as the type of distributed system being used.

class accelerate.utils.ComputeEnvironment

( value names = None module = None qualname = None type = None start = 1 )

Represents a type of compute environment.

Values:

  • LOCAL_MACHINE — private/custom cluster hardware.
  • AMAZON_SAGEMAKER — Amazon SageMaker as compute environment.

class accelerate.DistributedType

( value names = None module = None qualname = None type = None start = 1 )

Represents a type of distributed environment.

Values:

  • NO — Not a distributed environment, just a single process.
  • MULTI_CPU — Distributed on multiple CPU nodes.
  • MULTI_GPU — Distributed on multiple GPUs.
  • MULTI_NPU — Distributed on multiple NPUs.
  • MULTI_XPU — Distributed on multiple XPUs.
  • DEEPSPEED — Using DeepSpeed.
  • TPU — Distributed on TPUs.
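
For example, DistributedType can be compared directly against Accelerator.distributed_type to branch on the current setup. A minimal sketch:

from accelerate import Accelerator, DistributedType

accelerator = Accelerator()
if accelerator.distributed_type == DistributedType.MULTI_GPU:
    # only runs when launched on multiple GPUs (e.g. via `accelerate launch`)
    print("Training with DistributedDataParallel")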

class accelerate.utils.DynamoBackend

( value names = None module = None qualname = None type = None start = 1 )

Represents a dynamo backend (see https://github.com/pytorch/torchdynamo).

Values:

  • NO — Do not use torch dynamo.
  • EAGER — Uses PyTorch to run the extracted GraphModule. This is quite useful in debugging TorchDynamo issues.
  • AOT_EAGER — Uses AotAutograd with no compiler, i.e., just using PyTorch eager for the AotAutograd's extracted forward and backward graphs. This is useful for debugging, and unlikely to give speedups.
  • INDUCTOR — Uses the TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton kernels.
  • AOT_TS_NVFUSER — nvFuser with AotAutograd/TorchScript.
  • NVPRIMS_NVFUSER — nvFuser with PrimTorch.
  • CUDAGRAPHS — cudagraphs with AotAutograd.
  • OFI — Uses TorchScript optimize_for_inference. Inference only.
  • FX2TRT — Uses Nvidia TensorRT for inference optimizations. Inference only.
  • ONNXRT — Uses ONNXRT for inference on CPU/GPU. Inference only.
  • TENSORRT — Uses ONNXRT to run TensorRT for inference optimizations.
  • IPEX — Uses IPEX for inference on CPU. Inference only.
  • TVM — Uses Apache TVM for inference optimizations.

class accelerate.utils.LoggerType

( value names = None module = None qualname = None type = None start = 1 )

Represents a type of supported experiment tracker.

Values:

  • ALL — all available trackers in the environment that are supported
  • TENSORBOARD — TensorBoard as an experiment tracker
  • WANDB — wandb as an experiment tracker
  • COMETML — comet_ml as an experiment tracker
  • DVCLIVE — dvclive as an experiment tracker
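
For example, the lower-cased values of this enum are what the log_with argument of Accelerator accepts. A minimal sketch, assuming tensorboard is installed; the project name and directory are illustrative:

from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="runs")
accelerator.init_trackers("my_experiment")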

class accelerate.utils.PrecisionType

( value names = None module = None qualname = None type = None start = 1 )

Represents a type of precision used on floating point values.

Values:

  • NO — using full precision (FP32)
  • FP16 — using half precision
  • BF16 — using brain floating point precision
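
For example, the values of this enum ("no", "fp16", "bf16") are what the mixed_precision argument of Accelerator accepts. A minimal sketch:

from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")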

class accelerate.utils.RNGType

( value names = None module = None qualname = None type = None start = 1 )

An enumeration of the random number generator (RNG) types that 🤗 Accelerate can synchronize across processes (used by synchronize_rng_states()).

class accelerate.utils.SageMakerDistributedType

( value names = None module = None qualname = None type = None start = 1 )

Represents a type of distributed environment.

Values:

  • NO — Not a distributed environment, just a single process.
  • DATA_PARALLEL — using sagemaker distributed data parallelism.
  • MODEL_PARALLEL — using sagemaker distributed model parallelism.

Kwargs

These are configurable arguments for specific interactions throughout the PyTorch ecosystem that Accelerate handles under the hood.

class accelerate.AutocastKwargs

( enabled: bool = True cache_enabled: bool = None )

Use this object in your Accelerator to customize how torch.autocast behaves. Please refer to the documentation of this context manager for more information on each argument.

Example:

from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])

class accelerate.DistributedDataParallelKwargs

( dim: int = 0 broadcast_buffers: bool = True bucket_cap_mb: int = 25 find_unused_parameters: bool = False check_reduction: bool = False gradient_as_bucket_view: bool = False static_graph: bool = False )

Use this object in your Accelerator to customize how your model is wrapped in a torch.nn.parallel.DistributedDataParallel. Please refer to the documentation of this wrapper for more information on each argument.

gradient_as_bucket_view is only available in PyTorch 1.7.0 and later versions.

static_graph is only available in PyTorch 1.11.0 and later versions.

Example:

from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])

class accelerate.utils.FP8RecipeKwargs

( backend: Literal = 'MSAMP' opt_level: Literal = 'O2' margin: int = 0 interval: int = 1 fp8_format: Literal = 'E4M3' amax_history_len: int = 1 amax_compute_algo: Literal = 'most_recent' override_linear_precision: Tuple = (False, False, False) )

Parameters

  • backend (str, optional, defaults to "msamp") — Which FP8 engine to use. Must be one of "msamp" (MS-AMP) or "te" (TransformerEngine).
  • margin (int, optional, defaults to 0) — The margin to use for the gradient scaling.
  • interval (int, optional, defaults to 1) — The interval to use for how often the scaling factor is recomputed.
  • fp8_format (str, optional, defaults to "E4M3") — The format to use for the FP8 recipe. Must be one of E4M3 or HYBRID.
  • amax_history_len (int, optional, defaults to 1024) — The length of the history to use for the scaling factor computation.
  • amax_compute_algo (str, optional, defaults to "most_recent") — The algorithm to use for the scaling factor computation. Must be one of max or most_recent.
  • override_linear_precision (tuple of three bool, optional, defaults to (False, False, False)) — Whether or not to execute fprop, dgrad, and wgrad GEMMs in higher precision.
  • optimization_level (str, optional, defaults to "O2") — What level of 8-bit collective communication should be used with MS-AMP. Must be one of O1, O2, or O3. In general:
    • O1: Weight gradients and all_reduce communications are done in FP8, reducing GPU memory usage and communication bandwidth.
    • O2: First-order optimizer states are in 8-bit, and second-order states are in FP16. Only available when using Adam or AdamW. This maintains accuracy and can potentially save the most memory.
    • O3: Specifically for DeepSpeed, implements capabilities so weights and master weights of models are stored in FP8. If fp8 is selected and DeepSpeed is enabled, this will be used by default. (Not currently available.)

Use this object in your Accelerator to customize the initialization of the recipe for FP8 mixed precision training with transformer-engine or ms-amp.

For more information on transformer-engine args, please refer to the API documentation.

For more information on the ms-amp args, please refer to the Optimization Level documentation.

from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs = FP8RecipeKwargs(backend="te", fp8_format="HYBRID")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[kwargs])

To use MS-AMP as an engine, pass backend="msamp" and the optimization_level:

kwargs = FP8RecipeKwargs(backend="msamp", optimization_level="O2")

class accelerate.GradScalerKwargs

( init_scale: float = 65536.0 growth_factor: float = 2.0 backoff_factor: float = 0.5 growth_interval: int = 2000 enabled: bool = True )

Use this object in your Accelerator to customize the behavior of mixed precision, specifically how the torch.cuda.amp.GradScaler used is created. Please refer to the documentation of this scaler for more information on each argument.

GradScaler is only available in PyTorch 1.5.0 and later versions.

Example:

from accelerate import Accelerator
from accelerate.utils import GradScalerKwargs

kwargs = GradScalerKwargs(backoff_factor=0.25)
accelerator = Accelerator(kwargs_handlers=[kwargs])

class accelerate.InitProcessGroupKwargs

( backend: Optional = 'nccl' init_method: Optional = None timeout: timedelta = datetime.timedelta(seconds=1800) )

Use this object in your Accelerator to customize the initialization of the distributed processes. Please refer to the documentation of this method for more information on each argument.

from datetime import timedelta
from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds=800))
accelerator = Accelerator(kwargs_handlers=[kwargs])

Plugins

These are plugins that can be passed to the Accelerator object. While they are defined elsewhere in the documentation, for convenience all of them are available to see here:

class accelerate.DeepSpeedPlugin

( hf_ds_config: Any = None gradient_accumulation_steps: int = None gradient_clipping: float = None zero_stage: int = None is_train_batch_min: str = True offload_optimizer_device: bool = None offload_param_device: bool = None offload_optimizer_nvme_path: str = None offload_param_nvme_path: str = None zero3_init_flag: bool = None zero3_save_16bit_model: bool = None )

This plugin is used to integrate DeepSpeed.
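
Example (a minimal sketch; the ZeRO stage and gradient accumulation values are illustrative, and DeepSpeed must be installed):

from accelerate import Accelerator, DeepSpeedPlugin

deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(mixed_precision="fp16", deepspeed_plugin=deepspeed_plugin)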

deepspeed_config_process

( prefix = '' mismatches = None config = None must_match = True **kwargs )

Process the DeepSpeed config with the values from the kwargs.

class accelerate.FullyShardedDataParallelPlugin

( sharding_strategy: typing.Any = None backward_prefetch: typing.Any = None mixed_precision_policy: typing.Any = None auto_wrap_policy: Optional = None cpu_offload: typing.Any = None ignored_modules: Optional = None state_dict_type: typing.Any = None state_dict_config: typing.Any = None optim_state_dict_config: typing.Any = None limit_all_gathers: bool = True use_orig_params: bool = True param_init_fn: Optional = None sync_module_states: bool = True forward_prefetch: bool = False activation_checkpointing: bool = False )

This plugin is used to enable fully sharded data parallelism.
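
Example (a minimal sketch, assuming the script is launched in a distributed environment configured for FSDP):

from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin(limit_all_gathers=True, use_orig_params=True)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)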

get_module_class_from_name

( module name )

Parameters

  • module (torch.nn.Module) — The module to get the class from.
  • name (str) — The name of the class.

Gets a class from a module by its name.

class accelerate.utils.GradientAccumulationPlugin

( num_steps: int = None adjust_scheduler: bool = True sync_with_dataloader: bool = True )

A plugin to configure gradient accumulation behavior.
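
Example (a minimal sketch; accumulating over 4 steps is illustrative):

from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

plugin = GradientAccumulationPlugin(num_steps=4)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)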

class accelerate.utils.MegatronLMPlugin

( tp_degree: int = None pp_degree: int = None num_micro_batches: int = None gradient_clipping: float = None sequence_parallelism: bool = None recompute_activations: bool = None use_distributed_optimizer: bool = None pipeline_model_parallel_split_rank: int = None num_layers_per_virtual_pipeline_stage: int = None is_train_batch_min: str = True train_iters: int = None train_samples: int = None weight_decay_incr_style: str = 'constant' start_weight_decay: float = None end_weight_decay: float = None lr_decay_style: str = 'linear' lr_decay_iters: int = None lr_decay_samples: int = None lr_warmup_iters: int = None lr_warmup_samples: int = None lr_warmup_fraction: float = None min_lr: float = 0 consumed_samples: List = None no_wd_decay_cond: Optional = None scale_lr_cond: Optional = None lr_mult: float = 1.0 megatron_dataset_flag: bool = False seq_length: int = None encoder_seq_length: int = None decoder_seq_length: int = None tensorboard_dir: str = None set_all_logging_options: bool = False eval_iters: int = 100 eval_interval: int = 1000 return_logits: bool = False custom_train_step_class: Optional = None custom_train_step_kwargs: Optional = None custom_model_provider_function: Optional = None custom_prepare_model_function: Optional = None other_megatron_args: Optional = None )

Plugin for Megatron-LM to enable tensor, pipeline, sequence and data parallelism. Also to enable selective activation recomputation and optimized fused kernels.

class accelerate.utils.TorchDynamoPlugin

( backend: DynamoBackend = None mode: str = None fullgraph: bool = None dynamic: bool = None options: Any = None disable: bool = False )

This plugin is used to compile a model with PyTorch 2.0.

Configurations

These are classes which can be configured and passed through to the appropriate integration.

class accelerate.utils.BnbQuantizationConfig

( load_in_8bit: bool = False llm_int8_threshold: float = 6.0 load_in_4bit: bool = False bnb_4bit_quant_type: str = 'fp4' bnb_4bit_use_double_quant: bool = False bnb_4bit_compute_dtype: bool = 'fp16' torch_dtype: dtype = None skip_modules: List = None keep_in_fp32_modules: List = None )

A plugin to enable BitsAndBytes 4bit and 8bit quantization.

class accelerate.utils.ProjectConfiguration

( project_dir: str = None logging_dir: str = None automatic_checkpoint_naming: bool = False total_limit: int = None iteration: int = 0 save_on_each_node: bool = False )

Configuration for the Accelerator object based on inner-project needs.
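
Example (a minimal sketch; the project directory name is illustrative):

from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

config = ProjectConfiguration(project_dir="my_project", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)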

set_directories

( project_dir: str = None )

Sets self.project_dir and self.logging_dir to the appropriate values.

Environmental Variables

These are environmental variables that can be enabled for different use cases

  • ACCELERATE_DEBUG_MODE (str): Whether to run accelerate in debug mode. More information is available in the debugging guide.

Data Manipulation and Operations

These include data operations that mimic the same torch ops but can be used on distributed processes.

accelerate.utils.broadcast

( tensor from_process: int = 0 )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to broadcast.
  • from_process (int, optional, defaults to 0) — The process from which to send the data.

Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.
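
Example (a minimal sketch, assuming the script is launched on several processes, e.g. with accelerate launch):

import torch
from accelerate import Accelerator
from accelerate.utils import broadcast

accelerator = Accelerator()
# Each process starts with a different tensor; after the call, every process holds process 0's values.
tensor = torch.arange(2, device=accelerator.device) + 2 * accelerator.process_index
tensor = broadcast(tensor, from_process=0)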

accelerate.utils.broadcast_object_list

( object_list from_process: int = 0 )

Parameters

  • object_list (list of picklable objects) — The list of objects to broadcast. This list will be modified inplace.
  • from_process (int, optional, defaults to 0) — The process from which to send the data.

Broadcast a list of picklable objects from one process to the others.

accelerate.utils.concatenate

( data dim = 0 )

Parameters

  • data (nested list/tuple/dictionary of lists of torch.Tensor) — The data to concatenate.
  • dim (int, optional, defaults to 0) — The dimension on which to concatenate.

Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.

accelerate.utils.convert_outputs_to_fp32

( model_forward )

Wraps model_forward so that any FP16/BF16 tensors in its outputs are converted to FP32 (see convert_to_fp32()).

accelerate.utils.convert_to_fp32

( tensor )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to convert from FP16/BF16 to FP32.

Recursively converts the elements nested list/tuple/dictionary of tensors in FP16/BF16 precision to FP32.

accelerate.utils.gather

( tensor )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to gather.

Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.
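
Example (a minimal sketch, assuming the script is launched on several processes):

import torch
from accelerate import Accelerator
from accelerate.utils import gather

accelerator = Accelerator()
# Collects one value per process into a single tensor of length `num_processes` on every process.
local_tensor = torch.tensor([accelerator.process_index], device=accelerator.device)
all_values = gather(local_tensor)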

accelerate.utils.gather_object

( object: Any )

Parameters

  • object (nested list/tuple/dictionary of picklable object) — The data to gather.

Recursively gather object in a nested list/tuple/dictionary of objects from all devices.

accelerate.utils.listify

( data )

Parameters

  • data (nested list/tuple/dictionary of torch.Tensor) — The data to convert to regular Python numbers.

Recursively finds tensors in a nested list/tuple/dictionary and converts them to a list of numbers.

accelerate.utils.pad_across_processes

( tensor dim = 0 pad_index = 0 pad_first = False )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to gather.
  • dim (int, optional, defaults to 0) — The dimension on which to pad.
  • pad_index (int, optional, defaults to 0) — The value with which to pad.
  • pad_first (bool, optional, defaults to False) — Whether to pad at the beginning or the end.

Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.

accelerate.utils.recursively_apply

( func data *args test_type = is_torch_tensor error_on_other_type = False **kwargs )

Parameters

  • func (callable) — The function to recursively apply.
  • data (nested list/tuple/dictionary of main_type) — The data on which to apply func.
  • *args — Positional arguments that will be passed to func when applied on the unpacked data.
  • main_type (type, optional, defaults to torch.Tensor) — The base type of the objects to which to apply func.
  • error_on_other_type (bool, optional, defaults to False) — Whether to raise an error if, after unpacking data, an object that is not of type main_type is encountered. If False, the function will leave objects of types different from main_type unchanged.
  • **kwargs — Keyword arguments that will be passed to func when applied on the unpacked data.

Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.

accelerate.utils.reduce

( tensor reduction = 'mean' scale = 1.0 )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to reduce.
  • reduction (str, optional, defaults to "mean") — A reduction method. Can be one of "mean", "sum", or "none".
  • scale (float, optional, defaults to 1.0) — A default scaling value to be applied after the reduce, only valid on XLA.

Recursively reduce the tensors in a nested list/tuple/dictionary of lists of tensors across all processes, using the given operation.

accelerate.utils.send_to_device

( tensor device non_blocking = False skip_keys = None )

Parameters

  • tensor (nested list/tuple/dictionary of torch.Tensor) — The data to send to a given device.
  • device (torch.device) — The device to send the data to.

Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.
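
Example (a minimal sketch, assuming a CUDA device is available; the batch contents are illustrative):

import torch
from accelerate.utils import send_to_device

batch = {"input_ids": torch.ones(2, 4, dtype=torch.long), "labels": torch.zeros(2)}
batch = send_to_device(batch, "cuda:0")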

accelerate.utils.slice_tensors

( data tensor_slice process_index = None num_processes = None )

Parameters

  • data (nested list/tuple/dictionary of torch.Tensor) — The data to slice.
  • tensor_slice (slice) — The slice to take.

Recursively takes a slice in a nested list/tuple/dictionary of tensors.

Environment Checks

These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed.

accelerate.utils.is_bf16_available

( ignore_tpu = False )

Checks if bf16 is supported, optionally ignoring the TPU.

accelerate.utils.is_ipex_available

( )

Checks if intel_extension_for_pytorch is installed and matches the installed PyTorch version.

accelerate.utils.is_mps_available

( )

Checks if MPS (Apple Silicon GPU) support is available in the current PyTorch installation.

accelerate.utils.is_npu_available

( check_device = False )

Checks if torch_npu is installed and potentially if an NPU is in the environment.

accelerate.utils.is_torch_version

( operation: str version: str )

Parameters

  • operation (str) — A string representation of an operator, such as ">" or "<="
  • version (str) — A string version of PyTorch

Compares the current PyTorch version to a given reference with an operation.
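
Example (a minimal sketch):

from accelerate.utils import is_torch_version

if is_torch_version(">=", "2.0.0"):
    print("torch.compile is available in this environment")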

accelerate.utils.is_tpu_available

( check_device = True )

Checks if torch_xla is installed and potentially if a TPU is in the environment.

accelerate.utils.is_xpu_available

( check_device = False )

Checks if XPU acceleration is available and potentially if an XPU is in the environment, unless the user has explicitly disabled it.

Environment Manipulation

accelerate.utils.patch_environment

( **kwargs )

A context manager that will add each keyword argument passed to os.environ and remove them when exiting.

Will convert the values in kwargs to strings and upper-case all the keys.

Example:

>>> import os
>>> from accelerate.utils import patch_environment

>>> with patch_environment(FOO="bar"):
...     print(os.environ["FOO"])  # prints "bar"
>>> print(os.environ["FOO"])  # raises KeyError

accelerate.utils.clear_environment

( )

A context manager that caches the original os.environ and replaces it with an empty dictionary for the duration of the context.

When the context exits, the cached os.environ is restored.

Example:

>>> import os
>>> from accelerate.utils import clear_environment

>>> os.environ["FOO"] = "bar"
>>> with clear_environment():
...     print(os.environ)
...     os.environ["FOO"] = "new_bar"
...     print(os.environ["FOO"])
{}
new_bar

>>> print(os.environ["FOO"])
bar

accelerate.commands.config.default.write_basic_config

( mixed_precision = 'no' save_location: str = default_json_config_file use_xpu: bool = False )

Parameters

  • mixed_precision (str, optional, defaults to “no”) — Mixed Precision to use. Should be one of “no”, “fp16”, or “bf16”
  • save_location (str, optional, defaults to default_json_config_file) — Optional custom save location. Should be passed to --config_file when using accelerate launch. Default location is inside the huggingface cache folder (~/.cache/huggingface), but can be overridden by setting the HF_HOME environmental variable, followed by accelerate/default_config.yaml.
  • use_xpu (bool, optional, defaults to False) — Whether to use XPU if available.

Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine.

When setting up 🤗 Accelerate for the first time, rather than running accelerate config, write_basic_config() can be used as an alternative for quick configuration.
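
Example (a minimal sketch for a single-machine setup):

from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="fp16")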

Memory

accelerate.find_executable_batch_size

( function: callable = None starting_batch_size: int = 128 )

Parameters

  • function (callable, optional) — A function to wrap
  • starting_batch_size (int, optional) — The batch size to try and fit into memory

A basic decorator that will try to execute function. If it fails from exceptions related to out-of-memory or CUDNN, the batch size is cut in half and passed to function.

function must take in a batch_size parameter as its first argument.

Example:

>>> from accelerate.utils import find_executable_batch_size


>>> @find_executable_batch_size(starting_batch_size=128)
... def train(batch_size, model, optimizer):
...     ...


>>> train(model, optimizer)

Modeling

These utilities relate to interacting with PyTorch models.

accelerate.utils.calculate_maximum_sizes

( model: Module )

Computes the total size of the model and its largest layer.

accelerate.utils.compute_module_sizes

( model: Module dtype: Union = None special_dtypes: Optional = None )

Compute the size of each submodule of a given model.

accelerate.utils.extract_model_from_parallel

( model keep_fp32_wrapper: bool = True ) torch.nn.Module

Parameters

  • model (torch.nn.Module) — The model to extract.
  • keep_fp32_wrapper (bool, optional, defaults to True) — Whether to keep the mixed precision hook (the FP32 output conversion wrapper) attached to the model's forward.

Returns

torch.nn.Module

The extracted model.

Extract a model from its distributed containers.
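
Example (a minimal sketch using torch.nn.DataParallel as the wrapper; in practice the model would typically have been wrapped by Accelerator.prepare()):

import torch
from accelerate.utils import extract_model_from_parallel

wrapped_model = torch.nn.DataParallel(torch.nn.Linear(4, 4))
model = extract_model_from_parallel(wrapped_model)  # returns the underlying Linear module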

accelerate.utils.get_balanced_memory

( model: Module max_memory: Optional = None no_split_module_classes: Optional = None dtype: Union = None special_dtypes: Optional = None low_zero: bool = False )

Parameters

  • model (torch.nn.Module) — The model to analyze.
  • max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset. Example: max_memory={0: "1GB"}.
  • no_split_module_classes (List[str], optional) — A list of layer class names that should never be split across devices (for instance any layer that has a residual connection).
  • dtype (str or torch.dtype, optional) — If provided, the weights will be converted to that type when loaded.
  • special_dtypes (Dict[str, Union[str, torch.device]], optional) — If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights).
  • low_zero (bool, optional) — Minimizes the number of weights on GPU 0, which is convenient when it’s used for other operations (like the Transformers generate function).

Compute a max_memory dictionary for infer_auto_device_map() that will balance the use of each available GPU.

All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the init_empty_weights context manager).
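
Example (a minimal sketch, assuming at least two GPUs are visible; the toy model and memory limits are illustrative):

import torch
from accelerate import infer_auto_device_map
from accelerate.utils import get_balanced_memory

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.Linear(512, 512))
# Balance the weights across the available GPUs, then derive a device map from that budget.
max_memory = get_balanced_memory(model, max_memory={0: "200MB", 1: "200MB"})
device_map = infer_auto_device_map(model, max_memory=max_memory)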

accelerate.utils.get_max_layer_size

( modules: List module_sizes: Dict no_split_module_classes: List ) Tuple[int, List[str]]

Parameters

  • modules (List[Tuple[str, torch.nn.Module]]) — The list of named modules where we want to determine the maximum layer size.
  • module_sizes (Dict[str, int]) — A dictionary mapping each layer name to its size (as generated by compute_module_sizes).
  • no_split_module_classes (List[str]) — A list of class names for layers we don’t want to be split.

Returns

Tuple[int, List[str]]

The maximum size of a layer with the list of layer names realizing that maximum size.

Utility function that will scan a list of named modules and return the maximum size used by one full layer. The definition of a layer being:

  • a module with no direct children (just parameters and buffers)
  • a module whose class name is in the list no_split_module_classes

accelerate.infer_auto_device_map

( model: Module max_memory: Optional = None no_split_module_classes: Optional = None dtype: Union = None special_dtypes: Optional = None verbose: bool = False clean_result: bool = True )

Parameters

  • model (torch.nn.Module) — The model to analyze.
  • max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset. Example: max_memory={0: "1GB"}.
  • no_split_module_classes (List[str], optional) — A list of layer class names that should never be split across devices (for instance any layer that has a residual connection).
  • dtype (str or torch.dtype, optional) — If provided, the weights will be converted to that type when loaded.
  • special_dtypes (Dict[str, Union[str, torch.device]], optional) — If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights).
  • verbose (bool, optional, defaults to False) — Whether or not to provide debugging statements as the function builds the device_map.
  • clean_result (bool, optional, defaults to True) — Clean the resulting device_map by grouping all submodules that go on the same device together.

Compute a device map for a given model giving priority to GPUs, then offload on CPU and finally offload to disk, such that:

  • we don't exceed the memory available on any of the GPUs.
  • if offloading to the CPU is needed, there is always room left on GPU 0 to put back the layer offloaded on CPU that has the largest size.
  • if offloading to the CPU is needed, we don't exceed the RAM available on the CPU.
  • if offloading to the disk is needed, there is always room left on the CPU to put back the layer offloaded on disk that has the largest size.

All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the init_empty_weights context manager).

accelerate.load_checkpoint_in_model

( model: Module checkpoint: Union device_map: Optional = None offload_folder: Union = None dtype: Union = None offload_state_dict: bool = False offload_buffers: bool = False keep_in_fp32_modules: List = None offload_8bit_bnb: bool = False )

Parameters

  • model (torch.nn.Module) — The model in which we want to load a checkpoint.
  • checkpoint (str or os.PathLike) — The checkpoint to load. It can be:
    • a path to a file containing a whole model state dict
    • a path to a .json file containing the index to a sharded checkpoint
    • a path to a folder containing a unique .index.json file and the shards of a checkpoint.
    • a path to a folder containing a unique pytorch_model.bin or a model.safetensors file.
  • device_map (Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.
  • offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights.
  • dtype (str or torch.dtype, optional) — If provided, the weights will be converted to that type when loaded.
  • offload_state_dict (bool, optional, defaults to False) — If True, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit.
  • offload_buffers (bool, optional, defaults to False) — Whether or not to include the buffers in the weights offloaded to disk.
  • keep_in_fp32_modules(List[str], optional) — A list of the modules that we keep in torch.float32 dtype.
  • offload_8bit_bnb (bool, optional) — Whether or not to enable offload of 8-bit modules on cpu/disk.

Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded.

Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch().

accelerate.utils.load_offloaded_weights

( model index offload_folder )

Parameters

  • model (torch.nn.Module) — The model to load the weights into.
  • index (dict) — A dictionary containing the parameter name and its metadata for each parameter that was offloaded from the model.
  • offload_folder (str) — The folder where the offloaded weights are stored.

Loads the weights from the offload folder into the model.

accelerate.utils.load_state_dict

( checkpoint_file device_map = None )

Parameters

  • checkpoint_file (str) — The path to the checkpoint to load.
  • device_map (Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.

Load a checkpoint from a given file. If the checkpoint is in the safetensors format and a device map is passed, the weights can be fast-loaded directly on the GPU.

accelerate.utils.offload_state_dict

( save_dir: Union state_dict: Dict )

Parameters

  • save_dir (str or os.PathLike) — The directory in which to offload the state dict.
  • state_dict (Dict[str, torch.Tensor]) — The dictionary of tensors to offload.

Offload a state dict in a given folder.

accelerate.utils.retie_parameters

( model tied_params )

Parameters

  • model (torch.nn.Module) — The model in which to retie parameters.
  • tied_params (List[List[str]]) — A nested list of parameter names that are tied together, as obtained by find_tied_parameters.

Reties tied parameters in a given model if the link was broken (for instance when adding hooks).

accelerate.utils.set_module_tensor_to_device

( module: Module tensor_name: str device: Union value: Optional = None dtype: Union = None fp16_statistics: Optional = None tied_params_map: Optional = None )

Parameters

  • module (torch.nn.Module) — The module in which the tensor we want to move lives.
  • tensor_name (str) — The full name of the parameter/buffer.
  • device (int, str or torch.device) — The device on which to set the tensor.
  • value (torch.Tensor, optional) — The value of the tensor (useful when going from the meta device to any other device).
  • dtype (torch.dtype, optional) — If passed along the value of the parameter will be cast to this dtype. Otherwise, value will be cast to the dtype of the existing parameter in the model.
  • fp16_statistics (torch.HalfTensor, optional) — The list of fp16 statistics to set on the module, used for 8 bit model serialization.
  • tied_params_map (Dict[int, Dict[torch.device, torch.Tensor]], optional, defaults to None) — A map of current data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight on the device for all others, instead of duplicating memory.

A helper function to set a given tensor (parameter or buffer) of a module on a specific device (note that doing param.to(device) creates a new tensor not linked to the parameter, which is why we need this function).

accelerate.utils.shard_checkpoint

( state_dict: Dict max_shard_size: Union = '10GB' weights_name: str = 'pytorch_model.bin' )

Parameters

  • state_dict (Dict[str, torch.Tensor]) — The state dictionary of a model to save.
  • max_shard_size (int or str, optional, defaults to "10GB") — The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit (like "5MB").
  • weights_name (str, optional, defaults to "pytorch_model.bin") — The name of the model save file.

Splits a model state dictionary in sub-checkpoints so that the final size of each sub-checkpoint does not exceed a given size.

The sub-checkpoints are determined by iterating through the state_dict in the order of its keys, so there is no optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

If one of the model's weights is bigger than max_shard_size, it will end up in its own sub-checkpoint which will have a size greater than max_shard_size.

Parallel

These include general utilities that should be used when working in parallel.

accelerate.utils.extract_model_from_parallel

( model keep_fp32_wrapper: bool = True ) torch.nn.Module

Parameters

  • model (torch.nn.Module) — The model to extract.
  • keep_fp32_wrapper (bool, optional, defaults to True) — Whether to keep the mixed precision hook (the FP32 output conversion wrapper) attached to the model's forward.

Returns

torch.nn.Module

The extracted model.

Extract a model from its distributed containers.

accelerate.utils.save

( obj f save_on_each_node: bool = False safe_serialization: bool = False )

Parameters

  • save_on_each_node (bool, optional, defaults to False) — Whether to save on the main process of each node, rather than only on the global main process.
  • safe_serialization (bool, optional, defaults to False) — Whether to save obj using safetensors or the traditional PyTorch way (that uses pickle).

Save the data to disk. Use in place of torch.save().

accelerate.utils.wait_for_everyone

( )

Introduces a blocking point in the script, making sure all processes have reached this point before continuing.

Make sure all processes will reach this instruction, otherwise one of your processes will hang forever.

Random

These utilities relate to setting and synchronizing of all the random states.

accelerate.utils.set_seed

( seed: int device_specific: bool = False )

Parameters

  • seed (int) — The seed to set.
  • device_specific (bool, optional, defaults to False) — Whether to vary the seed slightly on each device, using self.process_index.

Helper function for reproducible behavior to set the seed in random, numpy, torch.
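
Example (a minimal sketch):

from accelerate.utils import set_seed

set_seed(42)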

accelerate.utils.synchronize_rng_state

( rng_type: Optional = None generator: Optional = None )

Synchronizes the random number generator state of the given type (or the passed generator) across all processes, using the state from process 0.

accelerate.synchronize_rng_states

( rng_types: List generator: Optional = None )

Calls synchronize_rng_state() for each RNG type in rng_types.

PyTorch XLA

These include utilities that are useful while using PyTorch with XLA.

accelerate.utils.install_xla

( upgrade: bool = False )

Parameters

  • upgrade (bool, optional, defaults to False) — Whether to upgrade torch and install the latest torch_xla wheels.

Helper function to install appropriate xla wheels based on the torch version in Google Colaboratory.

Example:

>>> from accelerate.utils import install_xla

>>> install_xla(upgrade=True)

Loading model weights

These include utilities that are useful to load checkpoints.

accelerate.load_checkpoint_in_model

( model: Module checkpoint: Union device_map: Optional = None offload_folder: Union = None dtype: Union = None offload_state_dict: bool = False offload_buffers: bool = False keep_in_fp32_modules: List = None offload_8bit_bnb: bool = False )

Parameters

  • model (torch.nn.Module) — The model in which we want to load a checkpoint.
  • checkpoint (str or os.PathLike) — The checkpoint to load. It can be:
    • a path to a file containing a whole model state dict
    • a path to a .json file containing the index to a sharded checkpoint
    • a path to a folder containing a unique .index.json file and the shards of a checkpoint.
    • a path to a folder containing a unique pytorch_model.bin or a model.safetensors file.
  • device_map (Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.
  • offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights.
  • dtype (str or torch.dtype, optional) — If provided, the weights will be converted to that type when loaded.
  • offload_state_dict (bool, optional, defaults to False) — If True, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit.
  • offload_buffers (bool, optional, defaults to False) — Whether or not to include the buffers in the weights offloaded to disk.
  • keep_in_fp32_modules(List[str], optional) — A list of the modules that we keep in torch.float32 dtype.
  • offload_8bit_bnb (bool, optional) — Whether or not to enable offload of 8-bit modules on cpu/disk.

Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded.

Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch().

Quantization

These include utilities that are useful to quantize a model.

accelerate.utils.load_and_quantize_model

( model: Module bnb_quantization_config: BnbQuantizationConfig weights_location: Union = None device_map: Optional = None no_split_module_classes: Optional = None max_memory: Optional = None offload_folder: Union = None offload_state_dict: bool = False ) torch.nn.Module

Parameters

  • model (torch.nn.Module) — Input model. The model can be already loaded or on the meta device.
  • bnb_quantization_config (BnbQuantizationConfig) — The bitsandbytes quantization parameters
  • weights_location (str or os.PathLike) — The folder weights_location to load. It can be:
    • a path to a file containing a whole model state dict
    • a path to a .json file containing the index to a sharded checkpoint
    • a path to a folder containing a unique .index.json file and the shards of a checkpoint.
    • a path to a folder containing a unique pytorch_model.bin file.
  • device_map (Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.
  • no_split_module_classes (List[str], optional) — A list of layer class names that should never be split across devices (for instance any layer that has a residual connection).
  • max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset.
  • offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights.
  • offload_state_dict (bool, optional, defaults to False) — If True, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit.

Returns

torch.nn.Module

The quantized model

This function will quantize the input model with the associated config passed in bnb_quantization_config. If the model is in the meta device, we will load and dispatch the weights according to the device_map passed. If the model is already loaded, we will quantize the model and put the model on the GPU.
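
Example (a minimal sketch; MyModel and the checkpoint path are hypothetical placeholders, and bitsandbytes must be installed):

from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

with init_empty_weights():
    empty_model = MyModel()  # hypothetical model class

bnb_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6.0)
quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_config,
    weights_location="path/to/checkpoint",  # hypothetical checkpoint location
    device_map="auto",
)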