How to build a LoRA model?

#3
by deepgoyal19 - opened

I built my own LoRA model and a base model. I have connected my LoRA model to my base model, but I'm still not getting the desired results: the images generated by my LoRA model come out the same as the images generated by my base model. Can you please help?

LoRA model link:
https://huggingface.co/deepgoyal19/lora15

Base model link:
https://huggingface.co/deepgoyal19/mysd1

Note:
The resolution of my base model is 256.
My LoRA model works perfectly on my machine.
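
For context, here is a minimal sketch of how I attach the LoRA to the base model with diffusers on my workstation (the prompt, dtype, and device are placeholders; load_lora_weights is one way diffusers exposes LoRA loading):

import torch
from diffusers import StableDiffusionPipeline

# Load the base model first (resolution 256, as noted above).
pipe = StableDiffusionPipeline.from_pretrained(
    "deepgoyal19/mysd1", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights on top of the base UNet.
pipe.load_lora_weights("deepgoyal19/lora15")

image = pipe("a sample prompt", height=256, width=256).images[0]
image.save("lora_output.png")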

I don't understand your question:

My LoRA model works perfectly on my machine

What is the problem then?

Figure 1: Images generated by the base model (my workstation)
[image]

Figure 2: Images generated after loading the LoRA weights (my workstation)
[image]

Figure 3: Images generated by the Hugging Face Hosted Inference API for the LoRA model: https://huggingface.co/deepgoyal19/lora15. This is the LoRA model I uploaded to Hugging Face; it is connected to my base model (https://huggingface.co/deepgoyal19/mysd1).
[image]

Figure 4: Images generated by the Hugging Face Hosted Inference API for the base model: https://huggingface.co/deepgoyal19/mysd1
[image]

The images generated by the Hosted Inference API (Figure 3) are similar to the output of my base model. They should look like the images in Figure 2, but instead they are essentially identical to the base model's output (Figure 1).

To summarize the issue: I want the images generated by my LoRA model through the Inference API (Figure 3) to match the images generated with the LoRA weights on my workstation (Figure 2). Despite following the documented procedure and encountering no errors, I can't determine where I went wrong.
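
As a sanity check, generating with the same prompt and a fixed seed before and after loading the LoRA weights shows whether they are applied at all: if the two images are pixel-identical, the LoRA had no effect. A minimal sketch (the prompt and seed are placeholders):

import torch
from diffusers import StableDiffusionPipeline

prompt = "a sample prompt"

pipe = StableDiffusionPipeline.from_pretrained(
    "deepgoyal19/mysd1", torch_dtype=torch.float16
).to("cuda")

# Base model output with a fixed seed.
generator = torch.Generator("cuda").manual_seed(0)
base_image = pipe(prompt, height=256, width=256, generator=generator).images[0]

# Same prompt and seed after loading the LoRA weights.
pipe.load_lora_weights("deepgoyal19/lora15")
generator = torch.Generator("cuda").manual_seed(0)
lora_image = pipe(prompt, height=256, width=256, generator=generator).images[0]

# Identical bytes mean the LoRA weights changed nothing.
print(base_image.tobytes() == lora_image.tobytes())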

I really hope I have explained my problem in a way that makes sense. If there's anything else you need to know, please don't hesitate to ask. Your help means a lot to me, and I genuinely appreciate it.

Could you share a Colab notebook? Also, how did you train the LoRA model?

Our team at HKUST developed a new framework, RAFT, to train the LoRA. You can read more about it here: https://arxiv.org/abs/2304.06767

https://colab.research.google.com/drive/1bQmlSiKnqFjrkijFUJ5ylbYW-zUwObqL?usp=sharing

I cannot help you with implementing a new method for training LoRA. Sorry.

As for how this model (contained in this repository) was trained: it was generated with the script from the LoRA training guide (text-to-image section): https://huggingface.co/docs/diffusers/main/en/training/lora

sayakpaul changed discussion status to closed

I used the same script (https://huggingface.co/docs/diffusers/main/en/training/lora) to train my model and push it to the Hub.

!accelerate launch --num_processes=1 --mixed_precision='fp16' --gpu_ids='5' --dynamo_backend='no' --num_machines=1 /home/deepanshu/deepanshu/diffusers/examples/text_to_image/train_text_to_image_lora.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --train_data_dir=$TRAINING_DIR \
    --resolution=$RESOLUTION \
    --train_batch_size=8 \
    --gradient_accumulation_steps=1 \
    --max_grad_norm=1 \
    --mixed_precision="fp16" \
    --max_train_steps=0 \
    --learning_rate=$LEARNING_RATE \
    --lr_warmup_steps=0 \
    --enable_xformers_memory_efficient_attention \
    --dataloader_num_workers=1 \
    --output_dir=$OUTPUT_DIR \
    --seed=$SEED \
    --lr_scheduler='constant' \
    --resume_from_checkpoint="latest" \
    --gradient_checkpointing \
    --hub_model_id='lora15' \
    --hub_token='my token' \
    --push_to_hub
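
One thing that stands out in the command above is --max_train_steps=0: if the run really performed zero optimization steps, the LoRA's zero-initialized up-projection matrices would stay at zero, and the LoRA would have exactly no effect on generations. A quick check of the saved weights (the file name is an assumption based on the script's default output):

import torch

# Load the LoRA state dict produced by the training script.
state_dict = torch.load("pytorch_lora_weights.bin", map_location="cpu")

# The "up" half of each LoRA pair is zero-initialized, so all-zero norms
# here would mean no training actually happened.
for name, tensor in state_dict.items():
    if ".up.weight" in name:
        print(f"{name}: norm={tensor.float().norm().item():.6f}")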
