Safetensors weights to a Hugging Face repo

#1 opened by cemalgndzz

Hi, I have trained my own LoRA model on the AnyLoRA base model. I have my weights as a .safetensors file and I want to use them with diffusers. How can I create a Hugging Face repository like yours for my own trained LoRA model so I can use it with diffusers? How did you do it? Can you help me, please?

Hello! Sorry for the late response; I had not noticed the notifications here.

So, this model here is a checkpoint converted to diffusers format. If you check my profile, I have many other converted models.
I use these scripts for converting checkpoints; they don't work for LoRAs: https://github.com/danbrown/ckpt-to-diffusers
Feel free to take a look.
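
As a side note, recent diffusers releases can also load a single .safetensors checkpoint and save it in diffusers format on their own. This is only a rough sketch (the file name and repo id are placeholders, and it is separate from the scripts linked above):

import torch
from diffusers import StableDiffusionPipeline

# Rough sketch, assuming a recent diffusers release that provides from_single_file;
# "model.safetensors" and "username/my-model-diffusers" are placeholders.
pipe = StableDiffusionPipeline.from_single_file("model.safetensors", torch_dtype=torch.float16)

# save locally in diffusers format...
pipe.save_pretrained("./my-model-diffusers")

# ...or push it straight to the Hub (after logging in with `huggingface-cli login`)
pipe.push_to_hub("username/my-model-diffusers")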

For using LoRAs with diffusers, you can take a look at the Diffusers documentation and do your own implementation:
https://huggingface.co/docs/diffusers/v0.16.0/en/training/text2image#lora
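
Depending on your diffusers version, the built-in LoRA loader may also be able to load a Kohya-style .safetensors file directly into the pipeline. A minimal sketch, assuming a recent release (the file name, trigger word, and scale value here are placeholders):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# load the LoRA weights from the current directory (placeholder file name)
pipe.load_lora_weights(".", weight_name="LORAFILE.safetensors")
pipe = pipe.to("cuda")

# "scale" plays a similar role to the alpha multiplier used further below
prompt = "a photo of KEYWORD riding a horse on mars"
image = pipe(prompt, cross_attention_kwargs={"scale": 0.75}).images[0]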

But honestly I don't use the documented approach myself; I use a different method based on a useLora function. Here is some sample usage code:

import torch
from diffusers import StableDiffusionPipeline

# load the base pipeline in fp16 and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# LoRA strength multiplier
lora_alpha = 0.75

# merge the LoRA weights into the pipeline (useLora is defined below)
pipe = useLora(pipe, "./LORAFILE.safetensors", lora_alpha)

# KEYWORD is the trigger word the LoRA was trained with
prompt = "a photo of an KEYWORD riding a horse on mars"
image = pipe(prompt).images[0]

And here is the mentioned useLora function:

import time
import os

import torch
from safetensors.torch import load_file

LORA_PREFIX_UNET = 'lora_unet'
LORA_PREFIX_TEXT_ENCODER = 'lora_te'

def useLora(pipeline, model_path, alpha):

    if not os.path.exists(model_path):
        raise Exception("Lora path {} does not exist".format(model_path))

    start = time.time()

    state_dict = load_file(model_path)
    visited = []

    # directly update weight in diffusers model
    for key in state_dict:
        
        # printing the key can help with debugging; it usually looks like
        # "lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight"

        # the LoRA strength is applied via the alpha argument, so skip the stored alpha keys
        if '.alpha' in key or key in visited:
            continue
            
        if 'text' in key:
            layer_infos = key.split('.')[0].split(LORA_PREFIX_TEXT_ENCODER+'_')[-1].split('_')
            curr_layer = pipeline.text_encoder
        else:
            layer_infos = key.split('.')[0].split(LORA_PREFIX_UNET+'_')[-1].split('_')
            curr_layer = pipeline.unet

        # walk the module tree to find the target layer; underscores in the key
        # can be part of a layer name itself, so keep appending segments until
        # getattr succeeds
        temp_name = layer_infos.pop(0)
        while len(layer_infos) > -1:
            try:
                curr_layer = curr_layer.__getattr__(temp_name)
                if len(layer_infos) > 0:
                    temp_name = layer_infos.pop(0)
                elif len(layer_infos) == 0:
                    break
            except Exception:
                if len(temp_name) > 0:
                    temp_name += '_'+layer_infos.pop(0)
                else:
                    temp_name = layer_infos.pop(0)
        
        # org_forward(x) + lora_up(lora_down(x)) * multiplier
        pair_keys = []
        if 'lora_down' in key:
            pair_keys.append(key.replace('lora_down', 'lora_up'))
            pair_keys.append(key)
        else:
            pair_keys.append(key)
            pair_keys.append(key.replace('lora_up', 'lora_down'))
        
        # update weight: W += alpha * (up @ down)
        if len(state_dict[pair_keys[0]].shape) == 4:
            # conv layer: the LoRA factors are stored as 4D tensors, squeeze them to 2D for the matmul
            weight_up = state_dict[pair_keys[0]].squeeze(3).squeeze(2).to(torch.float32)
            weight_down = state_dict[pair_keys[1]].squeeze(3).squeeze(2).to(torch.float32)
            delta = torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3)
        else:
            weight_up = state_dict[pair_keys[0]].to(torch.float32)
            weight_down = state_dict[pair_keys[1]].to(torch.float32)
            delta = torch.mm(weight_up, weight_down)
        # cast the delta to the layer's device/dtype (the sample usage moves the
        # pipeline to "cuda" before calling useLora, while load_file loads on CPU)
        curr_layer.weight.data += alpha * delta.to(
            device=curr_layer.weight.data.device, dtype=curr_layer.weight.data.dtype
        )
            
        # update visited list
        for item in pair_keys:
            visited.append(item)

    print("Lora model {} loaded in pipeline in {} seconds".format(model_path, time.time() - start))
    return pipeline
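
If the end goal is a Hugging Face repo in diffusers format (as in the original question), one option is to save and push the merged pipeline afterwards. A short sketch with a placeholder repo id:

# Sketch: persist the LoRA-merged pipeline so it can be shared as a diffusers repo.
# "username/my-lora-merged" is a placeholder repo id.
pipe = useLora(pipe, "./LORAFILE.safetensors", lora_alpha)
pipe.save_pretrained("./my-lora-merged")
pipe.push_to_hub("username/my-lora-merged")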
