what is the difference between 16 32 and full? I don't know anything about this topic :/

#3
by Qweteurii - opened

Could you please tell me which one should be downloaded?

Functionally they're all the same. Nearly no one will use the full. You could use the same seed, same prompt, same everything and likely get near-identical results with each; the difference is that extra data not relevant to image generation is pruned from the full checkpoint, leaving F16 or F32. 32 is full precision, 16 is half.
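To illustrate what "pruning" means here, a minimal sketch using a plain Python dict as a stand-in for a PyTorch-style checkpoint; the key names (`optimizer_states`, `lr_schedulers`, etc.) are illustrative assumptions, not the exact layout of any particular `.ckpt` file:

```python
# A "full" training checkpoint carries more than just the weights.
full_ckpt = {
    "state_dict": {"model.weight": [0.1, 0.2]},  # weights needed for generation
    "optimizer_states": [{"step": 123456}],      # training-only data
    "epoch": 3,                                  # training-only metadata
    "lr_schedulers": [{"last_lr": 1e-5}],        # training-only data
}

def prune_checkpoint(ckpt):
    """Keep only the weights used at inference time; drop trainer state."""
    return {"state_dict": ckpt["state_dict"]}

pruned = prune_checkpoint(full_ckpt)
print(sorted(pruned.keys()))  # only 'state_dict' survives the pruning
```

The pruned file generates the same images, since the dropped entries are only used to resume training.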

If you have 24 GB of VRAM, 32-bit precision is easy; if you have less, you may still be able to use it depending on your SD build and how it optimizes memory use. Best to stick with 16 if you're not sure. The difference in quality is virtually imperceptible.
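A quick way to see the memory/precision trade-off is to cast an array of mock weights down to half precision; this is a sketch using NumPy as a stand-in for a model's state dict:

```python
import numpy as np

# Mock "weights": 10,000 normally distributed float32 values.
weights_f32 = np.random.default_rng(0).normal(size=10_000).astype(np.float32)
weights_f16 = weights_f32.astype(np.float16)

print(weights_f32.nbytes)  # 40000 bytes
print(weights_f16.nbytes)  # 20000 bytes -- exactly half the memory

# The rounding error from casting down is tiny for typical weight magnitudes.
max_err = np.abs(weights_f32 - weights_f16.astype(np.float32)).max()
print(max_err)  # on the order of 1e-3
```

Half the bytes per parameter is why the F16 checkpoint is roughly half the file size and needs roughly half the VRAM, while the per-weight rounding error is small enough that outputs are visually indistinguishable.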

I've left the full epochs in the repo since there are slight differences between full, float32, and float16. Plus, you can also finetune with the full weights lol

Is fine-tuning about training the model, or about getting more precise results when prompting?

Fine-tuning is when a model is further trained on a dataset; WD would make a great base for many art/anime models. If someone wanted a bias toward more '80s-style anime or a particular look, the full checkpoint would make it possible to train WD in that direction.