LyCORIS
LyCORIS (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) is a family of LoRA-like matrix decomposition adapters that modify the cross-attention layers of the UNet. The LoHa and LoKr methods inherit from the LyCORIS classes defined here.
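To make the two decompositions named above concrete, here is a minimal numpy sketch (all shapes and variable names are invented for illustration; this is not the PEFT implementation): LoHa builds the weight update as a Hadamard (element-wise) product of two low-rank factorizations, while LoKr builds it as a Kronecker product of two small matrices.

```python
import numpy as np

# Minimal sketch of the LoHa and LoKr weight updates; shapes are invented
# for illustration and this is not the PEFT implementation.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

# LoHa: element-wise (Hadamard) product of two rank-r factorizations.
# The effective rank of the update can reach r * r.
B1, A1 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
B2, A2 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
delta_loha = (B1 @ A1) * (B2 @ A2)

# LoKr: Kronecker product of two small matrices whose shapes multiply
# out to the full weight shape (4*2 = 8 rows, 3*2 = 6 columns).
C = rng.normal(size=(4, 3))
D = rng.normal(size=(2, 2))
delta_lokr = np.kron(C, D)
```

Both updates have the full weight shape (8, 6) while being parameterized by far fewer values, which is the point of these decompositions.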
LycorisConfig
class peft.tuners.lycoris_utils.LycorisConfig
< source >( task_type: typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type: typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping: typing.Optional[dict] = None base_model_name_or_path: typing.Optional[str] = None revision: typing.Optional[str] = None inference_mode: bool = False rank_pattern: Optional[dict] = <factory> alpha_pattern: Optional[dict] = <factory> )
A base config for LyCORIS-like adapters.
LycorisLayer
class peft.tuners.lycoris_utils.LycorisLayer
A base layer for LyCORIS-like adapters.
merge
< source >( safe_merge: bool = False adapter_names: Optional[list[str]] = None )
Parameters
- safe_merge (bool, optional) — If True, the merge operation will be performed in a copy of the original weights and checked for NaNs before the weights are merged. This is useful if you want to check whether the merge operation will produce NaNs. Defaults to False.
- adapter_names (List[str], optional) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None.
Merge the active adapter weights into the base weights.
unmerge
< source >( )
This method unmerges all merged adapter layers from the base weights.
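The safe_merge behaviour described above can be sketched in a few lines (a simplified pure-Python illustration of the idea; the actual method operates on the layer's tensors, and the function name here is invented):

```python
import math

def merge_with_safety_check(base, delta, safe_merge=False):
    """Sketch of the safe_merge idea: merge into a copy of the base
    weights first, check for NaNs, and only then accept the result.
    Not the PEFT implementation; weights are plain floats here."""
    merged = [b + d for b, d in zip(base, delta)]
    if safe_merge and any(math.isnan(x) for x in merged):
        raise ValueError("NaNs detected in the merged weights")
    return merged

# With finite deltas the merge succeeds; a NaN delta would raise instead.
weights = merge_with_safety_check([1.0, 2.0], [0.5, -0.5], safe_merge=True)
```

Because the check happens on a copy before the base weights are touched, a failed merge leaves the model unchanged.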
LycorisTuner
class peft.tuners.lycoris_utils.LycorisTuner
< source >( model config adapter_name low_cpu_mem_usage: bool = False )
Parameters
- model (torch.nn.Module) — The model to be adapted.
- config (LycorisConfig) — The configuration of the LyCORIS model.
- adapter_name (str) — The name of the adapter, defaults to "default".
- low_cpu_mem_usage (bool, optional, defaults to False) — Create empty adapter weights on meta device. Useful to speed up the loading process.
A base tuner for LyCORIS-like adapters.
delete_adapter
< source >( adapter_name: str )
Deletes an existing adapter.
disable_adapter_layers
< source >( )
Disable all adapters.
When disabling all adapters, the model output corresponds to the output of the base model.
enable_adapter_layers
< source >( )
Enable all adapters.
Call this if you have previously disabled all adapters and want to re-enable them.
merge_and_unload
< source >( progressbar: bool = False safe_merge: bool = False adapter_names: Optional[list[str]] = None )
Parameters
- progressbar (bool) — Whether to show a progressbar indicating the unload and merge process.
- safe_merge (bool) — Whether to activate the safe merging check to detect any potential NaN in the adapter weights.
- adapter_names (List[str], optional) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None.
This method merges the adapter layers into the base model. This is needed if someone wants to use the base model as a standalone model.
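The merge-then-unload flow can be sketched as follows (a simplified pure-Python illustration with invented classes, not the PEFT implementation): each adapter-wrapped layer folds its delta into the base weight and is then replaced by the plain base layer, leaving a standalone model.

```python
# Illustrative sketch (not the PEFT implementation): merge_and_unload
# folds the adapter delta into the base weight and keeps only plain
# base layers, so the result can be used as a standalone model.
class BaseLinear:
    def __init__(self, weight):
        self.weight = weight

class AdapterLinear:
    def __init__(self, base, delta):
        self.base = base
        self.delta = delta

def merge_and_unload(layers):
    merged = []
    for layer in layers:
        if isinstance(layer, AdapterLinear):
            # Fold the adapter update into the base weight ...
            layer.base.weight += layer.delta
            # ... and keep only the plain base layer.
            merged.append(layer.base)
        else:
            merged.append(layer)
    return merged

model = [AdapterLinear(BaseLinear(1.0), 0.25), BaseLinear(3.0)]
model = merge_and_unload(model)
```

After the call, no adapter wrappers remain, which is why the result no longer depends on the PEFT machinery at inference time.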
set_adapter
< source >( adapter_name: str | list[str] )
Set the active adapter(s).
Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, set requires_grad=False on the adapter parameters manually after calling this method.
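A minimal sketch of that manual freezing step (stand-in objects, not the PyTorch/PEFT API; the "lora" substring check and parameter names are placeholders for however adapter parameters are identified in a real model):

```python
# Stand-in sketch (not the PyTorch API): freeze adapter parameters after
# they were set trainable, by iterating named parameters and clearing
# requires_grad on names that look like adapter weights.
class FakeParam:
    def __init__(self):
        self.requires_grad = True

named_parameters = {
    "base_model.weight": FakeParam(),
    "lora_A.default.weight": FakeParam(),  # placeholder adapter name
    "lora_B.default.weight": FakeParam(),  # placeholder adapter name
}

for name, param in named_parameters.items():
    if "lora" in name:  # placeholder check for adapter parameters
        param.requires_grad = False

still_trainable = [n for n, p in named_parameters.items() if p.requires_grad]
```

In a real PyTorch model the same loop would run over model.named_parameters() with whatever name check matches your adapter.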
unload
< source >( )
Gets back the base model by removing all the LoRA modules without merging. This gives back the original base model.