Kwargs Handlers

The following objects can be passed to the main Accelerator to customize how some PyTorch objects related to distributed training or mixed precision are created.
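
All handlers are passed through the same kwargs_handlers argument, so several of them can be combined in a single Accelerator. Below is a minimal sketch; the specific values are only illustrative.

from accelerate import Accelerator
from accelerate.utils import AutocastKwargs, DistributedDataParallelKwargs

# Each handler customizes one underlying PyTorch object; pass them together as a list.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
autocast_kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs, autocast_kwargs])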

AutocastKwargs

class accelerate.AutocastKwargs

( enabled: bool = True, cache_enabled: bool = None )

Use this object in your Accelerator to customize how torch.autocast behaves. Please refer to the documentation of this context manager for more information on each argument.

Example:

from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])

DistributedDataParallelKwargs

class accelerate.DistributedDataParallelKwargs

( dim: int = 0, broadcast_buffers: bool = True, bucket_cap_mb: int = 25, find_unused_parameters: bool = False, check_reduction: bool = False, gradient_as_bucket_view: bool = False, static_graph: bool = False )

Use this object in your Accelerator to customize how your model is wrapped in a torch.nn.parallel.DistributedDataParallel. Please refer to the documentation of this wrapper for more information on each argument.

gradient_as_bucket_view is only available in PyTorch 1.7.0 and later versions.

static_graph is only available in PyTorch 1.11.0 and later versions.

Example:

from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])
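
Several of these options are often set together. The sketch below enables static graph optimization alongside gradient bucket views; it assumes the model's graph of used parameters does not change between iterations.

from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# Both options are forwarded as-is to torch.nn.parallel.DistributedDataParallel.
kwargs = DistributedDataParallelKwargs(static_graph=True, gradient_as_bucket_view=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])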

FP8RecipeKwargs

class accelerate.utils.FP8RecipeKwargs

( margin: int = 0, interval: int = 1, fp8_format: str = 'E4M3', amax_history_len: int = 1, amax_compute_algo: str = 'most_recent', override_linear_precision: typing.Tuple[bool, bool, bool] = (False, False, False) )

Use this object in your Accelerator to customize the initialization of the recipe used for FP8 mixed precision training with transformer-engine. Please refer to the documentation of that recipe for more information on each argument.

Example:

from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs = FP8RecipeKwargs(fp8_format="HYBRID")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[kwargs])
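
The recipe arguments correspond to a delayed-scaling FP8 recipe; the sketch below keeps a longer amax history. The values are illustrative and assume the transformer-engine package is installed.

from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Keep a longer amax history and reduce it with "max" rather than "most_recent".
kwargs = FP8RecipeKwargs(fp8_format="HYBRID", amax_history_len=16, amax_compute_algo="max")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[kwargs])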

GradScalerKwargs

class accelerate.GradScalerKwargs

( init_scale: float = 65536.0, growth_factor: float = 2.0, backoff_factor: float = 0.5, growth_interval: int = 2000, enabled: bool = True )

Use this object in your Accelerator to customize the behavior of mixed precision, specifically how the torch.cuda.amp.GradScaler it uses is created. Please refer to the documentation of this scaler for more information on each argument.

GradScaler is only available in PyTorch 1.5.0 and later versions.

Example:

from accelerate import Accelerator
from accelerate.utils import GradScalerKwargs

kwargs = GradScalerKwargs(backoff_factor=0.25)
accelerator = Accelerator(kwargs_handlers=[kwargs])
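
The scaler only comes into play when training in fp16, so the handler is typically paired with that mode. A minimal sketch with illustrative values:

from accelerate import Accelerator
from accelerate.utils import GradScalerKwargs

# These arguments only take effect when a GradScaler is actually created, i.e. with fp16.
kwargs = GradScalerKwargs(init_scale=2.0**14, growth_interval=1000)
accelerator = Accelerator(mixed_precision="fp16", kwargs_handlers=[kwargs])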

InitProcessGroupKwargs

class accelerate.InitProcessGroupKwargs

( backend: typing.Optional[str] = 'nccl', init_method: typing.Optional[str] = None, timeout: timedelta = datetime.timedelta(seconds=1800) )

Use this object in your Accelerator to customize the initialization of the distributed processes via torch.distributed.init_process_group. Please refer to the documentation of this method for more information on each argument.

Example:

from datetime import timedelta
from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds=800))
accelerator = Accelerator(kwargs_handlers=[kwargs])
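
Another common use is selecting the backend explicitly, for example gloo when training on CPU only. A minimal sketch; choose the backend that matches your hardware.

from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

# "gloo" is the usual choice for CPU-only process groups, "nccl" for NVIDIA GPUs.
kwargs = InitProcessGroupKwargs(backend="gloo")
accelerator = Accelerator(cpu=True, kwargs_handlers=[kwargs])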