How to train sdxl turbo?

#2
by AeroDEmi - opened

How do you train this model?
Do you start with a base model and then generate a lora? How does this work?

Lykon changed discussion status to closed

Hi @Lykon , I also want to know how to train sdxl-turbo.
The official repo Stability-AI/generative-models doesn't provide training code for the turbo model.
Is it OK to train sdxl-turbo just like sdxl-base?

Owner

should be the same as sdxl, provided you tune sampler, cfg and steps according to the distillation ones.
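To make the contrast concrete, here is a small lookup of the sampler/CFG/steps triples being discussed in this thread. The exact values are assumptions drawn from the sdxl-turbo model card and the comments below, not official training recommendations:

```python
# Inference settings each checkpoint is tuned for (assumed from this
# thread and the sdxl-turbo model card; adjust to your own findings).
DISTILLATION_SETTINGS = {
    "sdxl-base-1.0":        {"sampler": "DPMpp 2M",  "cfg": 7.0, "steps": 40},
    "sdxl-turbo":           {"sampler": "Euler a",   "cfg": 0.0, "steps": 1},  # CFG disabled
    "dreamshaper-xl-turbo": {"sampler": "DPMpp SDE", "cfg": 2.0, "steps": 6},
}

def settings_for(model: str) -> dict:
    """Return the sampler/CFG/steps a given checkpoint expects at inference."""
    return DISTILLATION_SETTINGS[model]
```

The point of Lykon's answer is that training itself is unchanged from SDXL; only these inference-time numbers move to match the distillation.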


Thanks for the reply, @Lykon !
A further question: is dreamshaper-xl-turbo trained from sdxl-turbo or from stable-diffusion-xl-base-1.0?

  1. If it's trained from stable-diffusion-xl-base-1.0, as the model card says:
    I think that's awesome, because dreamshaper-xl-turbo performs better than the official sdxl-turbo! Did you implement Adversarial Diffusion Distillation (ADD) in training?

  2. If it's trained from sdxl-turbo:
    The turbo model disables CFG, but dreamshaper uses CFG. Is it OK to change this setting when continuing training?

Owner

It isn't distilled using the official Turbo LoRA. You can tell from the fact that its best sampler is DPMpp SDE rather than LCM.

I see, so dreamshaper-xl-turbo has nothing to do with the official sdxl-turbo, right? It was continue-trained from sdxl-base-1.0 and is sampled with DPMpp SDE.
It's amazing work, thank you!

> should be the same as sdxl, provided you tune sampler, cfg and steps according to the distillation ones.

When I train on DreamShaper XL Turbo and use DPMpp SDE, 2.0 CFG, and 6 steps in A1111, it gives fractured, incredibly low-quality outputs, like using vanilla SDXL with low CFG/steps would.
If I bump it up to 6 CFG / 40 steps, it looks pretty much normal. In OneTrainer, the first sample it generates at 2 CFG / 6 steps looks fine, but as soon as training starts, it gets corrupted.
Do you mean tune the sampler, CFG, and steps during training, or at inference? Because there doesn't seem to be any way to do this in training.
If it's not possible to train on DreamShaper XL Turbo, can we get a regular XL finetune on the latest version to train on?

Owner

Have you tried training on base xl and then diffmerging?
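"Diffmerging" here refers to the add-difference recipe that tools like supermerger implement: subtract base XL from your base-XL finetune to isolate what training added, then add that delta onto the Turbo model. A minimal sketch of the arithmetic over toy weight dicts (NumPy used for clarity; a real merge iterates over the full checkpoint state dict):

```python
import numpy as np

def add_difference(turbo: dict, finetune: dict, base: dict, alpha: float = 1.0) -> dict:
    """merged = turbo + alpha * (finetune - base), key by key.

    Isolates what finetuning added relative to base SDXL, then grafts
    that delta onto the distilled (Turbo) weights.
    """
    return {k: turbo[k] + alpha * (finetune[k] - base[k]) for k in turbo}

# Toy example with one fake weight tensor per model:
base = {"unet.w": np.zeros(4)}
finetune = {"unet.w": np.array([0.1, 0.2, 0.3, 0.4])}  # base + training delta
turbo = {"unet.w": np.ones(4)}                          # distilled weights

merged = add_difference(turbo, finetune, base)
# merged["unet.w"] -> [1.1, 1.2, 1.3, 1.4]
```

Lowering `alpha` weakens how strongly the finetune's changes are applied, which is one of the knobs supermerger exposes when a full-strength merge breaks the distillation.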

Just finished trying that with supermerger on every combination of settings, and no luck.
Dreamshaper needed to be the base for the full finetune, since it's a lot of subtle things Dreamshaper does well, like human anatomy and that style of photorealism.
I also tried merging with an unofficial XL turbo LoRA, but that gave garbled outputs.

I've been training LoRAs on regular XL for about a year, and lately I'm just using a regular XL LoRA on top of better finetunes, Dreamshaper Turbo being the best so far, but it brings all the anatomy bugs from regular XL with it, and weird interpolations in between.
Since OneTrainer came out, I was able to do a full finetune on Dreamshaper Turbo, and it fixes all the anatomy bugs and such, but everyone who's tried to train on top of a Turbo/Lightning/Hyper model gets the same corruption.
Juggernaut released a regular XL version alongside their lightning/turbo models, which you can train on and then apply lightning afterward, and it works perfectly; but other than that, or training on the same dataset you used, idk any way around it.
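The Juggernaut workflow described above (finetune the regular model, then apply the lightning LoRA) works because a LoRA is just a low-rank delta that can be folded into any compatible weight matrix: W' = W + scale * (B @ A). A toy NumPy sketch of that fold, with shapes and names purely illustrative:

```python
import numpy as np

def merge_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Fold a low-rank LoRA delta into a weight matrix: W + scale * (B @ A)."""
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2                      # layer width, LoRA rank (toy sizes)
W = rng.normal(size=(d, d))      # a finetuned weight matrix
A = rng.normal(size=(r, d))      # LoRA down-projection
B = rng.normal(size=(d, r))      # LoRA up-projection

W_merged = merge_lora(W, A, B, scale=0.8)
```

Because the delta is independent of what finetuning did to W, the distillation LoRA can be applied on top of the new finetune, which is why the regular-XL-then-lightning route sidesteps the corruption seen when training directly on a distilled checkpoint.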
