Working with large models
Dispatching and Offloading Models
accelerate.init_empty_weights
( include_buffers: bool = False )
A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM.
Example:
import torch.nn as nn
from accelerate import init_empty_weights
# Initialize a model with 100 billion parameters in no time and without using any RAM.
with init_empty_weights():
tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
Any model created under this context manager has no weights. As such you can’t do something like model.to(some_device) with it. To load weights inside your empty model, see load_checkpoint_and_dispatch().
accelerate.cpu_offload
( model: Module execution_device: typing.Optional[torch.device] = None offload_buffers: bool = False state_dict: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- model (torch.nn.Module) — The model to offload.
- execution_device (torch.device, optional) — The device on which the forward pass of the model will be executed (should be a GPU). Will default to the device of the model’s first parameter.
- offload_buffers (bool, optional, defaults to False) — Whether or not to offload the buffers with the model parameters.
- state_dict (Dict[str, torch.Tensor], optional) — The state dict of the model that will be kept on CPU.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the passed execution device as they are needed, then offloaded again.
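For illustration, here is a minimal sketch of how cpu_offload might be used (assuming a CUDA device is available; the toy model and sizes are arbitrary):

import torch
import torch.nn as nn
from accelerate import cpu_offload

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
# Hooks are attached in place: a single CPU copy of the weights is kept and
# each layer is streamed to the GPU only for the duration of its forward pass.
cpu_offload(model, execution_device=torch.device("cuda"))
output = model(torch.randn(2, 512))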
accelerate.disk_offload
( model: Module offload_dir: typing.Union[str, os.PathLike] execution_device: typing.Optional[torch.device] = None offload_buffers: bool = False preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- model (torch.nn.Module) — The model to offload.
- offload_dir (str or os.PathLike) — The folder in which to offload the model weights (or where the model weights are already offloaded).
- execution_device (torch.device, optional) — The device on which the forward pass of the model will be executed (should be a GPU). Will default to the device of the model’s first parameter.
- offload_buffers (bool, optional, defaults to False) — Whether or not to offload the buffers with the model parameters.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the passed execution device as they are needed, then offloaded again.
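A similar sketch for disk offload (the offload directory name is arbitrary, and a CUDA device is assumed):

import torch
import torch.nn as nn
from accelerate import disk_offload

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
# Weights are written once as memory-mapped arrays under "model_offload",
# then read back layer by layer during each forward pass.
disk_offload(model, offload_dir="model_offload", execution_device=torch.device("cuda"))
output = model(torch.randn(2, 512))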
accelerate.dispatch_model
( model: Module device_map: typing.Dict[str, typing.Union[str, int, torch.device]] main_device: typing.Optional[torch.device] = None state_dict: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None offload_dir: typing.Union[str, os.PathLike, NoneType] = None offload_index: typing.Union[typing.Dict[str, str], NoneType] = None offload_buffers: bool = False preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- model (torch.nn.Module) — The model to dispatch.
- device_map (Dict[str, Union[str, int, torch.device]]) — A dictionary mapping module names in the model’s state_dict to the device they should go to. Note that "disk" is accepted even if it’s not a proper value for torch.device.
- main_device (str, int or torch.device, optional) — The main execution device. Will default to the first device in the device_map different from "cpu" or "disk".
- state_dict (Dict[str, torch.Tensor], optional) — The state dict of the part of the model that will be kept on CPU.
- offload_dir (str or os.PathLike) — The folder in which to offload the model weights (or where the model weights are already offloaded).
- offload_index (Dict, optional) — A dictionary from weight name to their information (dtype/shape or safetensors filename). Will default to the index saved in save_folder.
- offload_buffers (bool, optional, defaults to False) — Whether or not to offload the buffers with the model parameters.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on the CPU or even the disk.
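As a minimal sketch, a device map can be written by hand for a toy nn.Sequential (whose submodule names are "0", "1" and "2"; a CUDA device is assumed for GPU 0):

import torch.nn as nn
from accelerate import dispatch_model

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
# Keep the first two submodules on GPU 0 and the last one on the CPU.
device_map = {"0": 0, "1": 0, "2": "cpu"}
model = dispatch_model(model, device_map=device_map)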
accelerate.load_checkpoint_and_dispatch
( model: Module checkpoint: typing.Union[str, os.PathLike] device_map: typing.Union[str, typing.Dict[str, typing.Union[str, int, torch.device]], NoneType] = None max_memory: typing.Union[typing.Dict[typing.Union[int, str], typing.Union[int, str]], NoneType] = None no_split_module_classes: typing.Optional[typing.List[str]] = None offload_folder: typing.Union[str, os.PathLike, NoneType] = None offload_buffers: bool = False dtype: typing.Union[str, torch.dtype, NoneType] = None offload_state_dict: typing.Optional[bool] = None preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- model (torch.nn.Module) — The model in which we want to load a checkpoint.
- checkpoint (str or os.PathLike) — The folder checkpoint to load. It can be:
  - a path to a file containing a whole model state dict
  - a path to a .json file containing the index to a sharded checkpoint
  - a path to a folder containing a unique .index.json file and the shards of a checkpoint.
- device_map (Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For more information about each option, see here.
- max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.
- no_split_module_classes (List[str], optional) — A list of layer class names that should never be split across devices (for instance any layer that has a residual connection).
- offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights.
- offload_buffers (bool, optional, defaults to False) — In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters.
- dtype (str or torch.dtype, optional) — If provided, the weights will be converted to that type when loaded.
- offload_state_dict (bool, optional) — If True, will temporarily offload the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. Will default to True if the device map picked contains "disk" values.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded, and adds the various hooks that will make this model run properly (even if split across devices).
Example:
>>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch
>>> from huggingface_hub import hf_hub_download
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download the Weights
>>> checkpoint = "EleutherAI/gpt-j-6B"
>>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
>>> # Create a model and initialize it with empty weights
>>> config = AutoConfig.from_pretrained(checkpoint)
>>> with init_empty_weights():
... model = AutoModelForCausalLM.from_config(config)
>>> # Load the checkpoint and dispatch it to the right devices
>>> model = load_checkpoint_and_dispatch(
... model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"]
... )
Model Hooks
Hook Classes
class accelerate.hooks.ModelHook
A hook that contains callbacks to be executed just before and after the forward method of a model. The difference with PyTorch’s existing hooks is that they get passed along the kwargs.
Class attribute:
- no_grad (bool, optional, defaults to False) — Whether or not to execute the actual forward pass under the torch.no_grad() context manager.
detach_hook
( module )
To be executed when the hook is detached from a module.
init_hook
( module )
To be executed when the hook is attached to the module.
post_forward
( module output ) → Any
To be executed just after the forward method of the model.
pre_forward
( module *args **kwargs ) → Tuple[Tuple[Any], Dict[str, Any]]
Parameters
- module (torch.nn.Module) — The module whose forward pass will be executed just after this event.
- args (Tuple[Any]) — The positional arguments passed to the module.
- kwargs (Dict[str, Any]) — The keyword arguments passed to the module.
Returns
Tuple[Tuple[Any], Dict[str, Any]]
A tuple with the treated args and kwargs.
To be executed just before the forward method of the model.
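To make the callback flow concrete, here is a sketch of a custom hook; the LoggingHook name and behavior are invented for illustration, and attaching is done with add_hook_to_module(), documented below:

import torch
from accelerate.hooks import ModelHook, add_hook_to_module

class LoggingHook(ModelHook):
    # Hypothetical hook that prints tensor shapes around each forward pass.
    def pre_forward(self, module, *args, **kwargs):
        print(f"{module.__class__.__name__} input shape: {args[0].shape}")
        return args, kwargs

    def post_forward(self, module, output):
        print(f"{module.__class__.__name__} output shape: {output.shape}")
        return output

layer = torch.nn.Linear(4, 2)
add_hook_to_module(layer, LoggingHook())
_ = layer(torch.randn(1, 4))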
class accelerate.hooks.AlignDevicesHook
( execution_device: typing.Union[int, str, torch.device, NoneType] = None offload: bool = False io_same_device: bool = False weights_map: typing.Optional[typing.Mapping] = None offload_buffers: bool = False place_submodules: bool = False )
Parameters
- execution_device (torch.device, optional) — The device on which inputs and model weights should be placed before the forward pass.
- offload (bool, optional, defaults to False) — Whether or not the weights should be offloaded after the forward pass.
- io_same_device (bool, optional, defaults to False) — Whether or not the output should be placed on the same device as the input was.
- weights_map (Mapping[str, torch.Tensor], optional) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
- offload_buffers (bool, optional, defaults to False) — Whether or not to include the associated module’s buffers when offloading.
- place_submodules (bool, optional, defaults to False) — Whether to place the submodules on execution_device during the init_hook event.
A generic ModelHook that ensures inputs and model weights are on the same device for the forward pass of the associated module, potentially offloading the weights after the forward pass.
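For instance, one might attach it manually to run a single layer on a GPU while feeding it CPU tensors (a sketch assuming a CUDA device):

import torch
from accelerate.hooks import AlignDevicesHook, add_hook_to_module

layer = torch.nn.Linear(4, 2)
# Weights and inputs are moved to device 0 before the forward pass;
# io_same_device=True places the output back on the input's device (CPU here).
add_hook_to_module(layer, AlignDevicesHook(execution_device=0, io_same_device=True))
output = layer(torch.randn(1, 4))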
class accelerate.hooks.SequentialHook
A hook that can contain several hooks and iterates through them at each event.
Adding Hooks
accelerate.hooks.add_hook_to_module
( module: Module hook: ModelHook append: bool = False ) → torch.nn.Module
Parameters
- module (torch.nn.Module) — The module to attach a hook to.
- hook (ModelHook) — The hook to attach.
- append (bool, optional, defaults to False) — Whether the hook should be chained with an existing one (if the module already contains a hook) or not.
Returns
torch.nn.Module
The same module, with the hook attached (the module is modified in place, so the result can be discarded).
Adds a hook to a given module. This will rewrite the forward method of the module to include the hook; to remove this behavior and restore the original forward method, use remove_hook_from_module.
If the module already contains a hook, this will replace it with the new hook passed by default. To chain two hooks together, pass append=True, so the current and new hooks are chained into an instance of the SequentialHook class.
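A sketch of the chaining behavior, using the base ModelHook (a no-op) as a stand-in for real hooks:

import torch
from accelerate.hooks import ModelHook, add_hook_to_module

layer = torch.nn.Linear(4, 2)
add_hook_to_module(layer, ModelHook())
# Without append=True this second call would replace the first hook; with it,
# both hooks are wrapped in a SequentialHook and run in order.
add_hook_to_module(layer, ModelHook(), append=True)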
accelerate.hooks.attach_execution_device_hook
( module: Module execution_device: typing.Union[int, str, torch.device] preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- module (torch.nn.Module) — The module where we want to attach the hooks.
- execution_device (int, str or torch.device) — The device on which inputs and model weights should be placed before the forward pass.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Recursively attaches AlignDevicesHook to all submodules of a given model to make sure they have the right execution device.
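A sketch (a CPU execution device keeps it runnable anywhere):

import torch
from accelerate.hooks import attach_execution_device_hook

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
# Each module holding weights gets a hook that places its inputs on the
# given device before the forward pass.
attach_execution_device_hook(model, torch.device("cpu"))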
accelerate.hooks.attach_align_device_hook
( module: Module execution_device: typing.Optional[torch.device] = None offload: bool = False weights_map: typing.Optional[typing.Mapping] = None offload_buffers: bool = False module_name: str = '' preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- module (torch.nn.Module) — The module where we want to attach the hooks.
- execution_device (torch.device, optional) — The device on which inputs and model weights should be placed before the forward pass.
- offload (bool, optional, defaults to False) — Whether or not the weights should be offloaded after the forward pass.
- weights_map (Mapping[str, torch.Tensor], optional) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
- offload_buffers (bool, optional, defaults to False) — Whether or not to include the associated module’s buffers when offloading.
- module_name (str, optional, defaults to "") — The name of the module.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Recursively attaches AlignDevicesHook to all submodules of a given model that have direct parameters and/or buffers.
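A sketch without offload, so the hooks simply align the weights and inputs of every parameter-holding submodule with the chosen device (CPU here so it runs anywhere):

import torch
from accelerate.hooks import attach_align_device_hook

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
# Hooks land on the two Linear layers (the modules with direct parameters).
attach_align_device_hook(model, execution_device=torch.device("cpu"))
output = model(torch.randn(1, 4))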
accelerate.hooks.attach_align_device_hook_on_blocks
( module: Module execution_device: typing.Union[torch.device, typing.Dict[str, torch.device], NoneType] = None offload: typing.Union[bool, typing.Dict[str, bool]] = False weights_map: typing.Mapping = None offload_buffers: bool = False module_name: str = '' preload_module_classes: typing.Optional[typing.List[str]] = None )
Parameters
- module (torch.nn.Module) — The module where we want to attach the hooks.
- execution_device (torch.device or Dict[str, torch.device], optional) — The device on which inputs and model weights should be placed before the forward pass. It can be one device for the whole module, or a dictionary mapping module name to device.
- offload (bool, optional, defaults to False) — Whether or not the weights should be offloaded after the forward pass. It can be one boolean for the whole module, or a dictionary mapping module name to boolean.
- weights_map (Mapping[str, torch.Tensor], optional) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
- offload_buffers (bool, optional, defaults to False) — Whether or not to include the associated module’s buffers when offloading.
- module_name (str, optional, defaults to "") — The name of the module.
- preload_module_classes (List[str], optional) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly.
Attaches AlignDevicesHook to all blocks of a given model as needed.
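A sketch with a per-block device assignment; the module names "0" and "1" come from the toy nn.Sequential, and CPU devices keep it runnable anywhere:

import torch
from accelerate.hooks import attach_align_device_hook_on_blocks

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
# One execution device per top-level block, keyed by module name.
attach_align_device_hook_on_blocks(
    model,
    execution_device={"0": torch.device("cpu"), "1": torch.device("cpu")},
    offload=False,
)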
Removing Hooks
accelerate.hooks.remove_hook_from_module
( module: Module recurse = False ) → torch.nn.Module
Removes any hook attached to a module via add_hook_to_module.
accelerate.hooks.remove_hook_from_submodules
( module: Module )
Recursively removes all hooks attached on the submodules of a given model.
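A sketch tying attachment and removal together:

import torch
from accelerate.hooks import (
    attach_align_device_hook,
    remove_hook_from_module,
    remove_hook_from_submodules,
)

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 2))
attach_align_device_hook(model, execution_device=torch.device("cpu"))
# Remove the hook from a single submodule...
remove_hook_from_module(model[0])
# ...or strip every hook recursively, restoring the original forward methods.
remove_hook_from_submodules(model)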