love this model

#5
by seancai - opened

I compared it with the "t2i sketch" model. This scribble model understands prompts better. The t2i model seems to have some bias: for example, if you use the word "valley", t2i tends to produce a typical California valley full of sunshine and maguey, and even adding "cloudy" and "green tree" doesn't help. This scribble model, on the other hand, can show a green valley background easily.

I love this model too! But it seems that this model does not work with certain checkpoints (like AutismMix), which is very interesting. @seancai @xinsir

Owner

I will test AutismMix and give some advice. I previously tested it with bluepencil and it worked well. If you want to generate anime images, it is recommended to use danbooru tags and to generate with waifu-series models. Natural language can lead to some instability.
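For reference, here is a minimal sketch of that kind of setup with diffusers; the checkpoint ids (an anime SDXL model and this repo's scribble ControlNet) and the danbooru-style prompt are assumptions for illustration, not the exact workflow used here.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Assumed repo ids for illustration.
controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-scribble-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",  # any waifu-series SDXL checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Scribble control image: white lines on a black background.
control_image = load_image("scribble.png")

# Danbooru-style tags rather than natural language.
prompt = "1girl, green valley, cloudy sky, green tree, masterpiece, best quality"
negative_prompt = "lowres, bad anatomy, worst quality"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=30,
).images[0]
image.save("out.png")
```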


@xinsir Thanks for your swift response! It seems to be a VAE issue based on the output I got. Let me provide the workflow and the input image I used.

test-img.jpeg

Screenshot 2024-06-20 at 12.14.54 AM.png

Owner

000001_scribble.webp
000001.webp

fans.jpeg
out.jpeg

I tested the AutismMix model with simple canny processing, using your prompt and your control image: the first works and the second does not work well. I think the key point is that your drawing is not a usual pattern like canny, HED, and so on. The model did not learn this drawing style during training. Can you please provide a method to generate your drawing-style control image from an original image? The model would need to be retrained on the new drawing style.
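For context, a simple canny pass like the one mentioned can be done with OpenCV; a minimal sketch (file names and thresholds are assumptions):

```python
import cv2
import numpy as np

# Load the original image and extract canny edges to use as the control image.
img = cv2.imread("input.jpeg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)  # thresholds are example values

# ControlNet expects a 3-channel image: white edges on a black background.
control = np.stack([edges] * 3, axis=-1)
cv2.imwrite("control_canny.png", control)
```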


Hey Xinsir, I hand-drew this input image, so ... I also tried using a canny pre-processor on this input image, but it didn't work well either.


I think you need to invert your image. The ControlNet was trained with pictures of white lines on a black background, so you should send the hand drawing in that form. There's a node in ComfyUI that can do this.
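Outside ComfyUI, the same inversion can be done with Pillow; a minimal sketch (file names are assumptions):

```python
from PIL import Image, ImageOps

# Hand drawing: typically black lines on a white background.
drawing = Image.open("hand_drawing.png").convert("L")

# Invert so the lines become white on black, as the ControlNet expects.
inverted = ImageOps.invert(drawing)
inverted.convert("RGB").save("control_scribble.png")
```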
