How to build a LoRA model

#1 opened by deepgoyal19

I built my own LoRA model and a base model. I have connected my LoRA model to my base model, but I'm still not getting the desired results. The images generated by my LoRA model come out the same as the images generated by my base model. Can you please help?

https://huggingface.co/deepgoyal19/lora15

https://huggingface.co/deepgoyal19/mysd1

I'm so sorry, I haven't checked my community recently. I've just used your models to generate two images with the same parameters, and your work is very effective; the LoRA image looks better. Perhaps there is an error in your generation code. Here's mine:

import torch
from diffusers import DiffusionPipeline

# load the plain base model, without any LoRA weights
pipeline = DiffusionPipeline.from_pretrained("deepgoyal19/mysd1")
pipeline.to("cuda")
# fixed seed so the base and LoRA runs are directly comparable
generator = torch.Generator("cuda").manual_seed(2000)
prompt = "cute puppy"
image = pipeline(prompt, generator=generator, num_inference_steps=50, guidance_scale=7.5).images[0]
image

(image: puppy.png — output of the base model mysd1)

from diffusers import StableDiffusionPipeline
import torch

# load the same base model, then attach the LoRA attention weights on top of it
pipe = StableDiffusionPipeline.from_pretrained("deepgoyal19/mysd1", torch_dtype=torch.float16)
model_path = "deepgoyal19/lora15"
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")
prompt = "cute puppy"
# same seed as the base run above
generator = torch.Generator("cuda").manual_seed(2000)
image = pipe(prompt, generator=generator, num_inference_steps=50, guidance_scale=7.5).images[0]
image

(image: puppy_lora.png — output with the LoRA weights applied)
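
By the way, if the LoRA output ever looks identical to the base model, one thing worth checking: with load_attn_procs you can pass a scale through cross_attention_kwargs (1.0 applies the LoRA fully, 0.0 ignores it). Just a sketch reusing the code above; the scale values are only examples:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("deepgoyal19/mysd1", torch_dtype=torch.float16)
pipe.unet.load_attn_procs("deepgoyal19/lora15")
pipe.to("cuda")

prompt = "cute puppy"
# same seed for both runs, so the only difference is the LoRA scale
generator = torch.Generator("cuda").manual_seed(2000)
image_base = pipe(prompt, generator=generator, cross_attention_kwargs={"scale": 0.0}).images[0]
generator = torch.Generator("cuda").manual_seed(2000)
image_lora = pipe(prompt, generator=generator, cross_attention_kwargs={"scale": 1.0}).images[0]
# if these two images are identical, the LoRA weights are not being applied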

Thank you for your reply.

As you have seen, my LoRA generates good results, but this is not happening with the Hugging Face Inference API (https://huggingface.co/deepgoyal19/lora15).

(image: output from the Inference API)

Do you know why?

This situation is normal. You will find more information at https://api-inference.huggingface.co/models/deepgoyal19/lora15: the JSON there shows "id": "deepgoyal19/lora15" and "base_model": "deepgoyal19/mysd1". If you use the LoRA model id in the pipeline, like the following code:

from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("deepgoyal19/lora15")

you will find this error:

HTTPError: 404 Client Error: Not Found for url:
Entry Not Found for url: https://huggingface.co/deepgoyal19/lora15/resolve/main/model_index.json

because there is no model_index.json or the other files a pipeline needs in that repository, so the pipeline can't find them.
I guess that, to avoid this error when using the API, the Hugging Face engineers simply fall back to the base_model id for the pipeline, so the API is actually serving the model "deepgoyal19/mysd1".
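
If you want to verify this yourself, here is a small sketch (assuming the status endpoint is publicly readable, as it was when I checked):

import requests
from huggingface_hub import list_repo_files

# the Inference API status JSON mentioned above;
# it should contain the "id" and "base_model" fields
print(requests.get("https://api-inference.huggingface.co/models/deepgoyal19/lora15").json())

# the LoRA repository has no model_index.json, which is why
# DiffusionPipeline.from_pretrained fails on this repo id
print(list_repo_files("deepgoyal19/lora15"))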

To solve this problem, I have two ideas:

    1. Maybe adding the "deepgoyal19/mysd1" files into the "deepgoyal19/lora15" repository will solve this problem. I'm not sure, so back up both repositories before you try.
    2. The HF API is unreliable; I think they use the "Gradio" and "FastAPI" libs to build it. Maybe a custom API in your own Space is a better choice (a sketch follows below); you can find an introduction on the Quickstart page.
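
For idea no. 2, a minimal Gradio app for a Space could look like this. It's only a sketch: it reuses your two repo ids and the generation settings from above, and everything else is my assumption about how you'd wire it up:

import torch
import gradio as gr
from diffusers import StableDiffusionPipeline

# load the base model once at startup and attach the LoRA weights,
# the same way as in the local generation code above
pipe = StableDiffusionPipeline.from_pretrained("deepgoyal19/mysd1", torch_dtype=torch.float16)
pipe.unet.load_attn_procs("deepgoyal19/lora15")
pipe.to("cuda")

def generate(prompt, seed):
    generator = torch.Generator("cuda").manual_seed(int(seed))
    return pipe(prompt, generator=generator, num_inference_steps=50, guidance_scale=7.5).images[0]

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Number(value=2000, label="Seed")],
    outputs=gr.Image(label="Result"),
)
demo.launch()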

I hope this helps.

I tried your idea no. 1 and it's not working.
I know I can make my own custom Space, but the HF API is really fast, so I wanted to do it through the API.

I still can't figure out why only my model isn't working. I can see everyone else's LoRA models working on the Hub.

Thank you again!
