How do you train SDXL Turbo?
How do you train this model?
Do you start with a base model and then train a LoRA? How does this work?
Should be the same as SDXL, provided you tune the sampler, CFG and steps according to the distilled model's settings.
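The point about matching the distilled model's sampler/CFG/steps can be sketched as a small lookup. The exact values below are assumptions drawn from this thread and typical model-card recommendations, not official numbers, so treat them as placeholders:

```python
# Illustrative per-model inference presets. The values are assumptions
# (from this thread and common model-card advice), not official defaults.
PRESETS = {
    "stable-diffusion-xl-base-1.0": {"sampler": "DPM++ 2M Karras", "cfg": 7.0, "steps": 30},
    "sdxl-turbo":                   {"sampler": "Euler a",          "cfg": 1.0, "steps": 4},
    "dreamshaper-xl-turbo":         {"sampler": "DPM++ SDE Karras", "cfg": 2.0, "steps": 6},
}

def preset_for(model_name: str) -> dict:
    """Return the sampler/CFG/steps preset for a model.

    Distilled (Turbo-style) models break down if you feed them
    base-SDXL settings, and vice versa, so each gets its own entry.
    """
    try:
        return PRESETS[model_name]
    except KeyError:
        raise ValueError(f"no preset for {model_name!r}") from None
```

Using base-SDXL settings (CFG ~7, 30+ steps) on a distilled model, or Turbo settings (CFG ~1-2, a handful of steps) on base SDXL, is exactly the mismatch the reply warns about.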
Thanks for the reply!
@Lykon
A further question: is dreamshaper-xl-turbo trained from sdxl-turbo or from stable-diffusion-xl-base-1.0?

If it's from stable-diffusion-xl-base-1.0, like the model card says, I think that's awesome, because dreamshaper-xl-turbo performs better than the official sdxl-turbo! Did you use Adversarial Diffusion Distillation (ADD) in training?

If it's from sdxl-turbo: the Turbo model disables CFG, but DreamShaper uses CFG. Is it OK to change this setting when continuing training?
It isn't distilled using the official Turbo LoRA. You can tell because its best sampler is DPM++ SDE rather than LCM.
I see, so dreamshaper-xl-turbo has nothing to do with the official sdxl-turbo, right? It was trained further from sdxl-base-1.0 and is used with the DPM++ SDE sampler.
It's amazing work, thank you!
should be the same as sdxl, provided you tune sampler, cfg and steps according to the distillation ones.
When I train on DreamShaper XL Turbo and sample with DPM++ SDE, CFG 2.0 and 6 steps in A1111, it gives fractured, incredibly low-quality outputs, like vanilla SDXL does at low CFG/steps.
If I bump it up to CFG 6 / 40 steps, it looks pretty much normal. In OneTrainer, the first sample it generates at CFG 2 / 6 steps looks fine, but as soon as training starts, the outputs get corrupted.
Do you mean tune the sampler, CFG and steps during training, or at inference? Because there doesn't seem to be any way to do this in training.
If it's not possible to train on DreamShaper XL Turbo, could we get a regular XL finetune of the latest version to train on?
Have you tried training on base XL and then diff-merging?
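The diff-merge idea can be sketched in a few lines. This is a minimal add-difference merge over state dicts, assuming all three checkpoints share keys and shapes (a real merge would also handle mismatched keys, precision, and memory):

```python
import torch

def add_difference_merge(turbo: dict, base: dict, finetune: dict, alpha: float = 1.0) -> dict:
    """Add-difference merge: turbo + alpha * (finetune - base).

    Grafts the changes your finetune learned on top of base SDXL
    onto the turbo checkpoint, instead of training on turbo directly.
    Assumes all three state dicts have identical keys and tensor shapes.
    """
    return {k: turbo[k] + alpha * (finetune[k] - base[k]) for k in turbo}

# Toy demonstration with one-element "weights":
base     = {"w": torch.tensor([1.0])}
finetune = {"w": torch.tensor([1.5])}  # the finetune moved this weight by +0.5
turbo    = {"w": torch.tensor([0.8])}
merged = add_difference_merge(turbo, base, finetune)
# merged["w"] is 0.8 + (1.5 - 1.0) = tensor([1.3]): the finetune's delta applied on top of turbo
```

This is the same "add difference" mode exposed by common checkpoint-merge UIs; `alpha` scales how strongly the finetune's delta is applied.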