Fully Sharded Data Parallel (FSDP) is a data parallel method that shards a model's parameters, gradients and optimizer states across the number of available GPUs (also called workers or ranks). Unlike DistributedDataParallel (DDP), FSDP reduces memory usage because it doesn't replicate a model on each GPU. This improves GPU memory efficiency and allows you to train much larger models on fewer GPUs. FSDP is integrated with Accelerate, a library for easily managing training in distributed environments, which means it is available for use from the `Trainer` class.
Before you start, make sure Accelerate is installed and that you have at least PyTorch 2.1.0 or newer.
```bash
pip install accelerate
```
To start, run the `accelerate config` command to create a configuration file for your training environment. Accelerate uses this configuration file to automatically set up the correct training environment based on the training options you select.
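Run the command from your terminal and answer the prompts:

```bash
accelerate config
```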
When you run `accelerate config`, you'll be prompted with a series of options to configure your training environment. This section covers some of the most important FSDP options. To learn more about the other available FSDP options, take a look at the `fsdp_config` parameters.
FSDP offers a number of sharding strategies to select from (a sketch of how the selection is recorded in the generated configuration file follows this list):

- `FULL_SHARD` - shards model parameters, gradients and optimizer states across workers; select `1` for this option
- `SHARD_GRAD_OP` - shards gradients and optimizer states across workers; select `2` for this option
- `NO_SHARD` - doesn't shard anything (this is equivalent to DDP); select `3` for this option
- `HYBRID_SHARD` - shards model parameters, gradients and optimizer states within each node, while each node also keeps a full copy of the model; select `4` for this option
- `HYBRID_SHARD_ZERO2` - shards gradients and optimizer states within each node, while each node also keeps a full copy of the model; select `5` for this option
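As a reference, here is a minimal sketch of how the chosen strategy might appear in the YAML file `accelerate config` generates. `fsdp_sharding_strategy` is the relevant key; recent versions of Accelerate record the strategy by name, while older versions store the option number instead:

```yaml
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD # or SHARD_GRAD_OP, NO_SHARD, HYBRID_SHARD, HYBRID_SHARD_ZERO2
```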
You can also offload parameters and gradients when they are not in use to the CPU to save even more GPU memory and help you fit large models for which even FSDP may not be sufficient. This is enabled by setting `fsdp_offload_params: true` when running `accelerate config`.
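In the generated configuration file, the setting lives in the `fsdp_config` section (sketch):

```yaml
fsdp_config:
  fsdp_offload_params: true # offload parameters and gradients to the CPU when not in use
```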
FSDP is applied by wrapping each layer in the network. The wrapping is usually applied in a nested way where the full weights are discarded after each forward pass to save memory for use in the next layer. The auto wrapping policy is the simplest way to implement this and you don't need to change any code. You should select `fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP` to wrap a Transformer layer and `fsdp_transformer_layer_cls_to_wrap` to specify which layer to wrap (for example, `BertLayer`).
Otherwise, you can choose a size-based wrapping policy where FSDP is applied to a layer if it exceeds a certain number of parameters. This is enabled by setting `fsdp_auto_wrap_policy: SIZE_BASED_WRAP` and `fsdp_min_num_params` to the desired size threshold.
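A sketch of how the two policies might look in the generated configuration file; the layer class and the parameter threshold are illustrative values:

```yaml
# transformer-based wrapping: wrap every instance of the named layer class
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: BertLayer

# or size-based wrapping: wrap any module above the parameter threshold
# fsdp_config:
#   fsdp_auto_wrap_policy: SIZE_BASED_WRAP
#   fsdp_min_num_params: 100000000
```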
Intermediate checkpoints should be saved with `fsdp_state_dict_type: SHARDED_STATE_DICT` because saving the full state dict with CPU offloading on rank 0 takes a lot of time and often results in `NCCL Timeout` errors due to indefinite hanging during broadcasting. You can resume training from the sharded state dicts with the `load_state` method.
```py
# directory containing checkpoints
accelerator.load_state("ckpt")
```
However, when training ends, you want to save the full state dict because the sharded state dict is only compatible with FSDP.
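A minimal sketch of that final save, assuming a `Trainer` instance named `trainer`; the output path is illustrative, and `set_state_dict_type` is the method Accelerate's FSDP plugin exposes for switching state dict formats:

```py
# switch back to a full state dict so the final checkpoint
# can be loaded without FSDP
if trainer.is_fsdp_enabled:
    trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")

trainer.save_model("path/to/output")  # illustrative output directory
```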
PyTorch XLA supports FSDP training for TPUs and it can be enabled by modifying the FSDP configuration file generated by `accelerate config`. In addition to the sharding strategies and wrapping options specified above, you can add the parameters shown below to the file.
```yaml
xla: True # must be set to True to enable PyTorch/XLA
xla_fsdp_settings: # XLA-specific FSDP parameters
xla_fsdp_grad_ckpt: True # use gradient checkpointing
```
The `xla_fsdp_settings` parameter allows you to configure additional XLA-specific parameters for FSDP.
An example FSDP configuration file may look like:
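The exact contents depend on your answers to the `accelerate config` prompts. Here is a sketch for a single machine with two GPUs, bf16 mixed precision, full sharding, CPU offloading and transformer-based wrapping (the layer class is illustrative, and generated files may contain additional keys):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: BertLayer
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
use_cpu: false
```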
To launch training, run the `accelerate launch` command and it'll automatically use the configuration file you previously created with `accelerate config`.
```bash
accelerate launch my-trainer-script.py
```

You can also override parts of the configuration file directly on the command line:

```bash
accelerate launch --fsdp="full shard" --fsdp_config="path/to/fsdp_config/" my-trainer-script.py
```
FSDP can be a powerful tool for training really large models if you have access to more than one GPU or TPU. By sharding the model parameters, optimizer and gradient states, and even offloading them to the CPU when they're inactive, FSDP can reduce the high cost of large-scale training. If you're interested in learning more, the following may be helpful:
- Follow along with the more in-depth Accelerate guide for FSDP.
- Read the Introducing PyTorch Fully Sharded Data Parallel (FSDP) API blog post.
- Read the Scaling PyTorch models on Cloud TPUs with FSDP blog post.