FLUX dev Quantized Models
This repo contains quantized versions of the FLUX dev transformer for use in InvokeAI.
Contents:
transformer/base/
- Transformer in bfloat16 copied from here

transformer/bnb_nf4/
- Transformer quantized to bitsandbytes NF4 format using this script
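
For reference, below is a minimal sketch of how an NF4-quantized FLUX dev transformer of this kind can be loaded with diffusers and bitsandbytes. This is not the InvokeAI loading path or the repo's own script; the source model ID and subfolder name are assumptions made for illustration.

```python
# Hedged sketch: load a FLUX dev transformer with bitsandbytes NF4 quantization
# via diffusers. Requires diffusers >= 0.31 and bitsandbytes installed.
import torch
from diffusers import FluxTransformer2DModel, BitsAndBytesConfig

# NF4 4-bit quantization config, computing in bfloat16 to match the base weights.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed upstream source of the bf16 weights
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
```

NF4 stores weights in 4 bits using a normal-float code book, roughly quartering the transformer's memory footprint relative to bfloat16 at a small quality cost.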