Accelerate documentation

Utilities for DeepSpeed

This documentation corresponds to Accelerate v0.34.2; a newer version (v1.2.1) is available.

class accelerate.DeepSpeedPlugin

( hf_ds_config: Any = None gradient_accumulation_steps: int = None gradient_clipping: float = None zero_stage: int = None is_train_batch_min: bool = True offload_optimizer_device: str = None offload_param_device: str = None offload_optimizer_nvme_path: str = None offload_param_nvme_path: str = None zero3_init_flag: bool = None zero3_save_16bit_model: bool = None transformer_moe_cls_names: str = None )

This plugin is used to integrate DeepSpeed.
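The plugin can be configured either through its keyword arguments (`zero_stage`, `gradient_clipping`, and so on) or by passing a full DeepSpeed config to `hf_ds_config`, which accepts a path to a JSON file or a dict. A minimal sketch of such a config dict, with illustrative values:

```python
# A minimal DeepSpeed config dict of the kind `hf_ds_config` accepts
# (values are illustrative; any valid DeepSpeed config works here).
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": 1,
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},
    },
}

# Hypothetical usage (requires accelerate and deepspeed to be installed):
#   from accelerate import Accelerator, DeepSpeedPlugin
#   plugin = DeepSpeedPlugin(hf_ds_config=ds_config)
#   accelerator = Accelerator(deepspeed_plugin=plugin)
```

Values set to `"auto"` are filled in by Accelerate from the training setup at `prepare()` time.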

deepspeed_config_process

( prefix = '' mismatches = None config = None must_match = True **kwargs )

Process the DeepSpeed config with the values from the kwargs.
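The idea is to walk the (possibly nested) DeepSpeed config, compare each entry against the corresponding keyword argument, and collect any mismatches. A simplified sketch of this reconciliation pattern, not Accelerate's actual implementation:

```python
# Simplified sketch of the config/kwargs reconciliation pattern behind
# deepspeed_config_process -- not the actual implementation.
def process_config(config, kwargs, prefix="", mismatches=None, must_match=True):
    """Walk a nested DeepSpeed config and record entries whose values
    disagree with the corresponding keyword arguments."""
    mismatches = [] if mismatches is None else mismatches
    for key, value in config.items():
        full_key = prefix + key
        if isinstance(value, dict):
            # Recurse into nested sections, e.g. "zero_optimization.stage".
            process_config(value, kwargs, prefix=full_key + ".",
                           mismatches=mismatches, must_match=must_match)
        elif full_key in kwargs and kwargs[full_key] != value and value != "auto":
            mismatches.append(
                f"{full_key}: config={value!r}, kwargs={kwargs[full_key]!r}"
            )
    if must_match and mismatches:
        raise ValueError("Config mismatches: " + "; ".join(mismatches))
    return mismatches
```

With `must_match=True`, conflicting values raise immediately; `"auto"` entries are left for Accelerate to fill in.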

class accelerate.utils.DummyOptim

( params lr = 0.001 weight_decay = 0 **kwargs )

Parameters

  • params (iterable) — Iterable of parameters to optimize, or dicts defining parameter groups.
  • lr (float) — Learning rate.
  • weight_decay (float) — Weight decay.
  • **kwargs (additional keyword arguments, optional) — Other arguments.

A dummy optimizer that holds model parameters or parameter groups. It is used primarily so that the conventional training loop can be followed when the optimizer configuration is specified in the DeepSpeed config file.
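The pattern is simple: the dummy object records the parameters and hyperparameters so the usual training-loop code can be written unchanged, while DeepSpeed builds the real optimizer from its config. A minimal stand-in illustrating the idea (a simplified sketch, not Accelerate's actual implementation):

```python
# Minimal stand-in illustrating the dummy-optimizer pattern: it only
# records parameters and hyperparameters; the real optimizer is created
# by DeepSpeed from the config file. (Sketch, not accelerate's code.)
class MiniDummyOptim:
    def __init__(self, params, lr=0.001, weight_decay=0, **kwargs):
        self.params = params
        self.lr = lr
        self.weight_decay = weight_decay
        self.kwargs = kwargs

# Hypothetical usage: pass the dummy to accelerator.prepare(), which
# replaces it with the optimizer DeepSpeed instantiated.
#   optimizer = DummyOptim(model.parameters(), lr=3e-4)
#   model, optimizer = accelerator.prepare(model, optimizer)
```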

class accelerate.utils.DummyScheduler

( optimizer total_num_steps = None warmup_num_steps = 0 lr_scheduler_callable = None **kwargs )

Parameters

  • optimizer (torch.optim.optimizer.Optimizer) — The optimizer to wrap.
  • total_num_steps (int, optional) — Total number of steps.
  • warmup_num_steps (int, optional) — Number of steps for warmup.
  • lr_scheduler_callable (callable, optional) — A callable function that creates an LR Scheduler. It accepts only one argument optimizer.
  • **kwargs (additional keyword arguments, optional) — Other arguments.

A dummy scheduler that wraps an optimizer. It is used primarily so that the conventional training loop can be followed when the scheduler configuration is specified in the DeepSpeed config file.
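Like the dummy optimizer, this is a placeholder: it holds the wrapped optimizer and schedule hyperparameters so the loop can be written conventionally while DeepSpeed instantiates the real scheduler. A minimal stand-in (a simplified sketch, not Accelerate's actual implementation):

```python
# Minimal stand-in illustrating the dummy-scheduler pattern: it only
# records the optimizer and schedule settings; the real scheduler comes
# from the DeepSpeed config. (Sketch, not accelerate's code.)
class MiniDummyScheduler:
    def __init__(self, optimizer, total_num_steps=None, warmup_num_steps=0,
                 lr_scheduler_callable=None, **kwargs):
        self.optimizer = optimizer
        self.total_num_steps = total_num_steps
        self.warmup_num_steps = warmup_num_steps
        # Alternatively, a one-argument factory that builds the scheduler
        # once the real optimizer exists.
        self.lr_scheduler_callable = lr_scheduler_callable
        self.kwargs = kwargs
```

When the scheduler cannot be described in the config file alone, `lr_scheduler_callable` lets you defer construction until DeepSpeed has created the real optimizer.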

class accelerate.utils.DeepSpeedEngineWrapper

( engine )

Parameters

  • engine (deepspeed.runtime.engine.DeepSpeedEngine) — The DeepSpeed engine to wrap.

Internal wrapper for deepspeed.runtime.engine.DeepSpeedEngine. This is used to follow the conventional training loop.
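The wrapper exists because under DeepSpeed the engine, not the user, owns the backward pass and the optimizer step. A sketch of the idea (simplified, not Accelerate's actual implementation): the usual `backward(loss)` call is funneled into the engine, which handles gradient accumulation, clipping, and stepping internally.

```python
# Sketch of the engine-wrapper pattern: backward() delegates to the
# DeepSpeed engine, which internally performs gradient accumulation,
# clipping and the optimizer step. (Simplified, not accelerate's code.)
class MiniEngineWrapper:
    def __init__(self, engine):
        self.engine = engine

    def backward(self, loss, **kwargs):
        # The engine computes gradients...
        self.engine.backward(loss, **kwargs)
        # ...and also advances the optimizer/scheduler state.
        self.engine.step()
```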

class accelerate.utils.DeepSpeedOptimizerWrapper

( optimizer )

Parameters

  • optimizer (torch.optim.optimizer.Optimizer) — The optimizer to wrap.

Internal wrapper around a deepspeed optimizer.
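Because the DeepSpeed engine already performs the optimizer step and gradient zeroing during `backward`, the optimizer wrapper's corresponding methods can be no-ops. A simplified sketch of that division of labor (not the actual code):

```python
# Sketch of the optimizer-wrapper pattern: since the DeepSpeed engine
# steps and zeroes gradients itself, the wrapper's step() and zero_grad()
# are deliberate no-ops. (Simplified, not accelerate's code.)
class MiniOptimizerWrapper:
    def __init__(self, optimizer):
        self.optimizer = optimizer

    def step(self):
        pass  # handled by the DeepSpeed engine

    def zero_grad(self):
        pass  # handled by the DeepSpeed engine
```

This lets a conventional loop that calls `optimizer.step()` and `optimizer.zero_grad()` run unmodified under DeepSpeed.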

class accelerate.utils.DeepSpeedSchedulerWrapper

( scheduler optimizers )

Parameters

  • scheduler (torch.optim.lr_scheduler.LambdaLR) — The scheduler to wrap.
  • optimizers (one or a list of torch.optim.Optimizer) — The optimizer(s) associated with the scheduler.

Internal wrapper around a deepspeed scheduler.
