Accelerate documentation

Utilities for Fully Sharded Data Parallelism

class accelerate.FullyShardedDataParallelPlugin

( sharding_strategy: typing.Any = None, backward_prefetch: typing.Any = None, mixed_precision_policy: typing.Any = None, auto_wrap_policy: Optional[Callable] = None, cpu_offload: typing.Any = None, ignored_modules: Optional[Iterable[torch.nn.Module]] = None, state_dict_type: typing.Any = None, state_dict_config: typing.Any = None, optim_state_dict_config: typing.Any = None, limit_all_gathers: bool = True, use_orig_params: bool = True, param_init_fn: Optional[Callable[[torch.nn.Module], None]] = None, sync_module_states: bool = True, forward_prefetch: bool = False, activation_checkpointing: bool = False )

This plugin is used to enable fully sharded data parallelism via PyTorch's FullyShardedDataParallel (FSDP). Fields left as None are filled in from the corresponding FSDP_* environment variables written by accelerate config.
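Below is a minimal sketch of constructing the plugin and handing it to an Accelerator. The FullStateDictConfig and FullOptimStateDictConfig classes come from PyTorch's FSDP module; the model and optimizer are placeholders, and the script is assumed to run under accelerate launch (or torchrun):

```python
from torch.distributed.fsdp.fully_sharded_data_parallel import (
    FullOptimStateDictConfig,
    FullStateDictConfig,
)

from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Save full (unsharded) model and optimizer state dicts, gathered on rank 0
# and offloaded to CPU to avoid GPU OOM during checkpointing.
fsdp_plugin = FullyShardedDataParallelPlugin(
    state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=True),
    optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True),
)

accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

# model and optimizer stand in for your own objects; prepare() wraps the
# model with FSDP according to the plugin's settings.
# model, optimizer = accelerator.prepare(model, optimizer)
```

Any field not passed explicitly here falls back to the values chosen during accelerate config, so the plugin only needs to override what differs from that configuration.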

get_module_class_from_name

( module, name )

Parameters

  • module (torch.nn.Module) — The module to get the class from.
  • name (str) — The name of the class.

Recursively searches module and its children for a submodule whose class name matches name, returning that class (or None if no match is found). This is how the plugin resolves a transformer layer class from its name when building an auto-wrap policy.
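A small self-contained sketch, assuming the function is importable from accelerate.utils (it is defined in accelerate.utils.dataclasses); the Block and TinyModel classes are invented for illustration:

```python
import torch.nn as nn

from accelerate.utils import get_module_class_from_name


class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(Block() for _ in range(2))


model = TinyModel()

# Resolve the class object for the submodule class named "Block".
block_cls = get_module_class_from_name(model, "Block")
print(block_cls)  # <class '__main__.Block'>

# A name with no matching submodule yields None.
print(get_module_class_from_name(model, "Missing"))  # None
```

The resolved class is the kind of object FSDP's transformer_auto_wrap_policy expects for its transformer_layer_cls argument, which is why looking a layer class up by name is useful when configuring auto wrapping.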