LoRA
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, the text encoder, or both. There are two classes for loading LoRA weights:
- LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and other functions for managing LoRA weights. This class can be used with any model.
- StableDiffusionXLLoraLoaderMixin is a Stable Diffusion XL (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model.
To learn more about how to load LoRA weights, see the LoRA loading guide.
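For example, a minimal sketch of this workflow (reusing the SDXL base checkpoint and the nerijs/pixel-art-xl LoRA that also appear in the fuse_lora example below; the prompt is purely illustrative) might look like:

from diffusers import DiffusionPipeline
import torch

# Load a base pipeline, then attach LoRA weights on top of its UNet and text encoder.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")
image = pipeline("a corgi astronaut, pixel art").images[0]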
LoraLoaderMixin
Load LoRA layers into UNet2DConditionModel and CLIPTextModel.
delete_adapters
< source >( adapter_names: Union )
Deletes the LoRA layers of the adapters specified in adapter_names.
disable_lora_for_text_encoder
< source >( text_encoder: Optional = None )
Disables the LoRA layers for the text encoder.
enable_lora_for_text_encoder
< source >( text_encoder: Optional = None )
Enables the LoRA layers for the text encoder.
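A short sketch of both methods (assuming a pipeline that already has text encoder LoRA layers loaded, for example through load_lora_weights()):

# `pipeline` is assumed to already have LoRA layers in its text encoder.
pipeline.disable_lora_for_text_encoder()  # run inference with the text encoder LoRA switched off
pipeline.enable_lora_for_text_encoder()   # switch the text encoder LoRA back on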
fuse_lora
< source >( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None )
Parameters
- fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters.
- fuse_text_encoder (bool, defaults to True) — Whether to fuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the LoRA parameters then it won't have any effect.
- lora_scale (float, defaults to 1.0) — Controls how much to influence the outputs with the LoRA parameters.
- safe_fusing (bool, defaults to False) — Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them.
- adapter_names (List[str], optional) — Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.
Fuses the LoRA parameters into the original parameters of the corresponding blocks.
This is an experimental API.
Example:
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load a LoRA and give it an explicit adapter name
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
# Merge the LoRA parameters into the base weights at 70% strength
pipeline.fuse_lora(lora_scale=0.7)
get_active_adapters
< source >( )
Gets the list of the current active adapters.
get_list_adapters
< source >( )
Gets the current list of all available adapters in the pipeline.
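A small sketch of how these adapter bookkeeping methods can be combined with delete_adapters() (assuming a LoRA was loaded with adapter_name="pixel" as in the fuse_lora example above):

active = pipeline.get_active_adapters()   # e.g. ["pixel"]
available = pipeline.get_list_adapters()  # dict mapping pipeline components to their adapter names
pipeline.delete_adapters("pixel")         # remove this adapter's LoRA layers from the pipeline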
load_lora_into_text_encoder
< source >( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 adapter_name = None _pipeline = None )
Parameters
- state_dict (dict) — A standard state dict containing the lora layer parameters. The keys should be prefixed with an additional text_encoder to distinguish between unet lora layers.
- network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
- text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
- prefix (str) — Expected prefix of the text_encoder in the state_dict.
- lora_scale (float) — How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer.
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
This will load the LoRA layers specified in state_dict into text_encoder.
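For illustration, a hedged sketch of calling this method directly with a state dict returned by lora_state_dict() (this is roughly what load_lora_weights() does for you; `pipeline` is assumed to be an already-created pipeline):

state_dict, network_alphas = pipeline.lora_state_dict(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors"
)
# Only the keys prefixed with "text_encoder" are routed into the text encoder.
pipeline.load_lora_into_text_encoder(
    state_dict, network_alphas, text_encoder=pipeline.text_encoder, prefix="text_encoder"
)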
load_lora_into_transformer
< source >( state_dict network_alphas transformer adapter_name = None _pipeline = None )
Parameters
- state_dict (dict) — A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer which can be used to distinguish between text encoder lora layers.
- network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
- transformer — The Transformer model to load the LoRA layers into.
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
This will load the LoRA layers specified in state_dict into transformer.
load_lora_into_unet
< source >( state_dict network_alphas unet adapter_name = None _pipeline = None )
Parameters
- state_dict (dict) — A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional unet which can be used to distinguish between text encoder lora layers.
- network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script; refer to the kohya-ss trainer documentation for more details.
- unet (UNet2DConditionModel) — The UNet model to load the LoRA layers into.
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
This will load the LoRA layers specified in state_dict into unet.
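A hedged sketch of the UNet counterpart of the text encoder example above (again using a state dict from lora_state_dict(); `pipeline` is assumed to exist):

state_dict, network_alphas = pipeline.lora_state_dict(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors"
)
# Keys addressed to the UNet (optionally prefixed with "unet") are loaded here.
pipeline.load_lora_into_unet(state_dict, network_alphas, unet=pipeline.unet)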
load_lora_weights
< source >( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
- kwargs (dict, optional) — See lora_state_dict().
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.
All kwargs are forwarded to self.lora_state_dict.
See lora_state_dict() for more details on how the state dict is loaded.
See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.
See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
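For example (the checkpoint, LoRA file, and adapter name are the ones used in the fuse_lora example above; `pipeline` is assumed to already exist):

pipeline.load_lora_weights(
    "nerijs/pixel-art-xl",
    weight_name="pixel-art-xl.safetensors",
    adapter_name="pixel",  # optional; a default_{i} name is generated when omitted
)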
lora_state_dict
< source >( pretrained_model_name_or_path_or_dict: Union **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:
  - A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
  - A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
  - A torch state dict.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v1 of Diffusers.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
- weight_name (str, optional, defaults to None) — Name of the serialized state dict file.
Returns the state dict for the LoRA weights and the network alphas.
We support loading A1111 formatted LoRA checkpoints in a limited capacity.
This function is experimental and might change in the future.
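As a sketch, a checkpoint can be inspected without loading it into a pipeline, since this is a classmethod (the repository id is the one used in the fuse_lora example above):

from diffusers import StableDiffusionXLPipeline

state_dict, network_alphas = StableDiffusionXLPipeline.lora_state_dict(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors"
)
print(len(state_dict), "LoRA tensors")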
save_lora_weights
< source >( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True )
Parameters
- save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn't exist.
- unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
- text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
- is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
- save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save the LoRA parameters corresponding to the UNet and text encoder.
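A hedged sketch, assuming you already have LoRA layer state dicts from a training run; unet_lora_state_dict and text_encoder_lora_state_dict are placeholder names:

pipeline.save_lora_weights(
    save_directory="./my-lora",                             # created if it doesn't exist
    unet_lora_layers=unet_lora_state_dict,                  # placeholder: produced by your training loop
    text_encoder_lora_layers=text_encoder_lora_state_dict,  # placeholder: produced by your training loop
    safe_serialization=True,                                 # write a .safetensors file
)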
set_adapters_for_text_encoder
< source >( adapter_names: Union text_encoder: Optional = None text_encoder_weights: Union = None )
Parameters
- adapter_names (List[str] or str) — The names of the adapters to use.
- text_encoder (torch.nn.Module, optional) — The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder attribute.
- text_encoder_weights (List[float], optional) — The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters.
Sets the adapter layers for the text encoder.
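A sketch, assuming two adapters named "pixel" and "toy" were previously loaded (the names and weights are only illustrative):

# Activate both adapters on the text encoder with per-adapter weights.
pipeline.set_adapters_for_text_encoder(["pixel", "toy"], text_encoder_weights=[0.7, 0.3])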
set_lora_device
< source >( adapter_names: List device: Union )
Moves the LoRAs listed in adapter_names
to a target device. Useful for offloading the LoRA to the CPU in case
you want to load multiple adapters and free some GPU memory.
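For example (adapter names are the ones assigned at load time):

# Park an adapter's LoRA weights on the CPU to free GPU memory, then bring them back.
pipeline.set_lora_device(adapter_names=["pixel"], device="cpu")
pipeline.set_lora_device(adapter_names=["pixel"], device="cuda")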
unfuse_lora
< source >( unfuse_unet: bool = True unfuse_text_encoder: bool = True )
Reverses the effect of pipe.fuse_lora().
This is an experimental API.
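For example, after the fuse_lora() call shown earlier:

# Restore the original weights so the LoRA can be scaled, swapped, or unloaded again.
pipeline.unfuse_lora()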
unload_lora_weights
< source >( )
Unloads the LoRA parameters.
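For example:

# Remove the LoRA parameters from the pipeline and keep only the base model weights.
pipeline.unload_lora_weights()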
StableDiffusionXLLoraLoaderMixin
This class overrides LoraLoaderMixin with LoRA loading/saving code that's specific to SDXL.
load_lora_weights
< source >( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
- kwargs (dict, optional) — See lora_state_dict().
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.
All kwargs are forwarded to self.lora_state_dict.
See lora_state_dict() for more details on how the state dict is loaded.
See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.
See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
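For example, with an SDXL pipeline (reusing the checkpoint and LoRA from the fuse_lora example earlier on this page):

from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")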
save_lora_weights
< source >( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None text_encoder_2_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True )
Parameters
- save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn't exist.
- unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
- text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
- text_encoder_2_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder_2. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
- is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
- save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save the LoRA parameters corresponding to the UNet and text encoder.