Custom person models less sensitive to prompts?

#17
by djn93 - opened

I am new to Dreambooth so forgive me in advance! I've noticed that my custom person models seem to be far less sensitive to supplemental text in prompts and often generate images that are seemingly only extrapolated from the source material.

For example... If I create a custom person model and then try:

[concept prompt] reading a book

I have yet to see a rendering of them reading a book. On the flip side, you can do "[insert celebrity] reading a book" and almost always find success.

Is this something that I can correct somehow? This was just the simplest example to demonstrate the phenomenon I'm seeing, but it is also stubborn with things like "[concept prompt] in a grocery store" whereas, again, a celebrity name would yield a rendering of the person in a grocery store no problem.

Thanks in advance!

How many images have you used for training?

20-30 typically

That might be too many and the model could be overfit. Try 3-7 images.

I'll have to give that a try. Interestingly though, I trained a new model with about 75 images and it's actually working much better. So funny.

Glad it works!

For me, training with 40 images for 4,040 steps made the model useless. It seems to just replace any face with the trained face and isn't responsive to any prompts that try to alter the scene.
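For anyone hitting the same symptom (the model reproducing the training face but ignoring the rest of the prompt), the usual mitigation with the diffusers DreamBooth example script is to enable prior preservation and keep the step count modest. A minimal sketch, assuming the `train_dreambooth.py` script from the diffusers examples; the paths, the "sks person" token, and the step count are placeholder choices, not values from this thread:

```shell
# Sketch of a DreamBooth run with the diffusers example script.
# Prior preservation trains against generic "class" images alongside
# the instance images, which regularizes the model so it keeps
# responding to supplemental prompt text instead of overfitting.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --class_data_dir="./class_images" \
  --instance_prompt="a photo of sks person" \
  --class_prompt="a photo of a person" \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --num_class_images=200 \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --max_train_steps=800 \
  --output_dir="./dreambooth-output"
```

If generations still ignore prompt additions like "reading a book", lowering `--max_train_steps` (or the learning rate) is typically the first knob to try, since step count per image is what drives overfitting more than the raw image count.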
