Instance prompt and prompt to generate images

#3
by adhikjoshi - opened

I am training model with "a photo of AdhikJoshi Person"

But it is generating random images of different people, not the images I uploaded.

Maybe I am doing something wrong?

Screenshot 2022-12-12 at 8.59.40 AM.png

As I understand it, the space doesn't train the text encoder. The prompt is only used to test the LoRA weights you got from training on the input images you provided.

Thanks for testing the space. I would recommend a higher number of steps as a start. Also, let me try this out myself and I will get back to you. Do you recall the number of steps you chose?

As I understand it, the space doesn't train the text encoder. The prompt is only used to test the LoRA weights you got from training on the input images you provided.

Quite possible! Thanks for the observation and comment.

As I understand it, the space doesn't train the text encoder. The prompt is only used to test the LoRA weights you got from training on the input images you provided.

If it's possible, can @ysharma update the space for it?

Yeah, @ysharma can you please update the space so that we can try out LoRA here?

As I understand it, the space doesn't train the text encoder. The prompt is only used to test the LoRA weights you got from training on the input images you provided.

If it's possible, can @ysharma update the space for it?

I don't think this is currently possible. See the official LoRA notebook for txt2img inference: https://github.com/cloneofsimo/lora/blob/master/scripts/run_inference.ipynb
The way LoRA works is by injecting low-rank adjustments into the weights of the original model; there is no connection to the text encoder at this point.
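To make "adjusting weights of the original model" concrete, here is a minimal NumPy sketch of a LoRA-style low-rank update on one linear layer. The names (`W`, `A`, `B`, `rank`) are illustrative, not taken from the cloneofsimo/lora codebase, which applies the same idea inside the UNet's attention layers via PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, initialized to zero

x = rng.standard_normal(d_in)

# Forward pass: original output plus the low-rank correction B @ (A @ x).
# Only A and B (rank * (d_in + d_out) values) are trained; W stays frozen.
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapted layer initially matches the base layer.
assert np.allclose(y, W @ x)
```

Since only `A` and `B` are stored, the resulting "lora weights" are tiny compared to the base model, which is why the space can train and share them quickly.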

Thanks @En-Ol , for your time and comments on this. You are correct, this LoRA technique can only be used to fine-tune SD to a particular theme (for example, Pixar, Disney, anime, etc) to generate further images using appropriate prompts.

It seems --train_text_encoder has now been implemented :)

https://github.com/cloneofsimo/lora#what-happens-to-text-encoder-lora-and-unet-lora

And we can set the learning rates independently (you can adjust these with --learning_rate=1e-4 and --learning_rate_text=5e-5).
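The two flags above amount to stepping the UNet-LoRA and text-encoder-LoRA parameters with different learning rates. A minimal plain-NumPy SGD stand-in (the real training script uses a torch optimizer with two parameter groups; the variable names here are illustrative):

```python
import numpy as np

# Mirrors --learning_rate=1e-4 (UNet LoRA) and --learning_rate_text=5e-5.
lr_unet, lr_text = 1e-4, 5e-5

unet_lora = np.ones(4)   # stand-in for UNet LoRA parameters
text_lora = np.ones(4)   # stand-in for text-encoder LoRA parameters

grad_unet = np.full(4, 2.0)  # pretend gradients from one training step
grad_text = np.full(4, 2.0)

# Each parameter group takes an SGD step with its own learning rate,
# so the text encoder adapts more slowly than the UNet here.
unet_lora -= lr_unet * grad_unet
text_lora -= lr_text * grad_text
```

With identical gradients, the UNet parameters move twice as far per step as the text-encoder parameters, which matches the intuition of using a smaller learning rate for the (more fragile) text encoder.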

Hey, thanks for the comment. I am trying to add text encoder training from the repo as well.

ysharma changed discussion status to closed
