Also on CivitAI

A merge of Flux Schnell, a finetuned Dev, and Hyper. Recommended 4-8 steps. Quality at 4 steps is greatly improved compared to V1.


Important

All-in-one versions include VAE + CLIP + T5XXL (fp8).

UNET versions (model only) require the Text Encoders and VAE separately; downloads:

GGUF versions require a custom node to work in ComfyUI:

BNB versions (NF4, FP4) require a custom node to work in ComfyUI:
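The custom-node links above are not included here; as a sketch, the widely used community nodes for these formats can be installed into ComfyUI like this. The repository names below are assumptions based on the common community nodes, not links provided by this card, so verify them before installing:

```shell
# Hedged sketch: installing custom nodes for GGUF and bitsandbytes (NF4/FP4)
# checkpoints. Run from your ComfyUI installation directory.
cd ComfyUI/custom_nodes

# GGUF loader node (assumed: city96/ComfyUI-GGUF) plus its gguf dependency
git clone https://github.com/city96/ComfyUI-GGUF
pip install --upgrade gguf

# bitsandbytes NF4/FP4 loader node (assumed: comfyanonymous/ComfyUI_bitsandbytes_NF4)
git clone https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4
pip install bitsandbytes
```

After restarting ComfyUI, the GGUF and NF4/FP4 checkpoint loader nodes should appear in the node search.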


Downloads last month: 1,189
Format: GGUF
Model size: 11.9B params
Architecture: flux

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
