---
license: apache-2.0
---


# FLUX schnell Quantized Models

This repo contains quantized versions of the FLUX schnell transformer for use in [InvokeAI](https://github.com/invoke-ai/InvokeAI).

Contents:
- `transformer/base/` - The transformer in bfloat16, copied from [the original FLUX.1-schnell checkpoint](https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/741f7c3ce8b383c54771c7003378a50191e9efe9/flux1-schnell.safetensors)
- `transformer/bnb_nf4/` - The transformer quantized to bitsandbytes NF4 format using [this InvokeAI script](https://github.com/invoke-ai/InvokeAI/blob/b8ccd53dd33aaaa6d19b780d5f11bef6142155dc/invokeai/backend/quantization/load_flux_model_bnb_nf4.py)
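
For reference, the sketch below shows the same kind of bitsandbytes NF4 quantization applied on the fly. It is a minimal example, not the InvokeAI script linked above: it assumes `diffusers` (with bitsandbytes quantization support) and `bitsandbytes` are installed, and it loads the upstream diffusers-format weights rather than the flat checkpoint layout used in this repo.

```python
# Minimal sketch: load the FLUX schnell transformer with bitsandbytes NF4
# quantization via diffusers. This is illustrative only and does not
# reproduce the exact quantization pipeline used to build this repo.
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# NF4 4-bit quantization; bfloat16 compute matches the base model's dtype.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Upstream diffusers-format weights (an assumption for this example), not
# the transformer/base/ checkpoint stored here.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
```

NF4 quantization stores the transformer's linear weights in 4 bits while keeping activations and compute in bfloat16, which is what makes the `transformer/bnb_nf4/` variant substantially smaller than `transformer/base/` at a modest quality cost.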