Update README.md
README.md CHANGED

````diff
@@ -19,11 +19,11 @@ tags:
 
 # Usage with Diffusers
 
-To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and
+To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and torchao library:
 
 ```
 pip install -U diffusers
-pip install -U
+pip install -U torchao
 ```
 
 After installing the required library, you can run the following script:
@@ -65,13 +65,13 @@ This checkpoint was created with the following script using "black-forest-labs/F
 import torch
 from diffusers import FluxPipeline
 from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
-from diffusers
-from transformers import
+from diffusers import TorchAoConfig as DiffusersTorchAoConfig
+from transformers import TorchAoConfig as TransformersTorchAoConfig
 
 pipeline_quant_config = PipelineQuantizationConfig(
     quant_mapping={
-        "transformer":
-        "text_encoder_2":
+        "transformer": DiffusersTorchAoConfig("int8_weight_only"),
+        "text_encoder_2": TransformersTorchAoConfig("int8_weight_only"),
     }
 )
 
````