marcsun13 (HF Staff) committed
Commit 39a22b3 · verified · Parent: 64c92a9

Update README.md

Files changed (1): README.md (+6 −6)
README.md CHANGED

````diff
@@ -19,11 +19,11 @@ tags:
 
 # Usage with Diffusers
 
-To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and bitsandbytes library:
+To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and torchao library:
 
 ```
 pip install -U diffusers
-pip install -U bitsandbytes
+pip install -U torchao
 ```
 
 After installing the required library, you can run the following script:
@@ -65,13 +65,13 @@ This checkpoint was created with the following script using "black-forest-labs/F
 import torch
 from diffusers import FluxPipeline
 from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
-from diffusers.quantizers import PipelineQuantizationConfig
-from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
+from diffusers import TorchAoConfig as DiffusersTorchAoConfig
+from transformers import TorchAoConfig as TransformersTorchAoConfig
 
 pipeline_quant_config = PipelineQuantizationConfig(
     quant_mapping={
-        "transformer": DiffusersBitsAndBytesConfig(load_in_8bit=True),
-        "text_encoder_2": TransformersBitsAndBytesConfig(load_in_8bit=True),
+        "transformer": DiffusersTorchAoConfig("int8_weight_only"),
+        "text_encoder_2": TransformersTorchAoConfig("int8_weight_only"),
     }
 )
````
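For reference, here is a minimal, self-contained sketch of how the updated `quant_mapping` might be used end to end. Everything outside the visible hunks (the `from_pretrained` call, prompt, and generation settings) is an assumption for illustration, and the `PipelineQuantizationConfig` import is kept here because the new code still references it even though the diff removes that import line:

```python
import torch
from diffusers import FluxPipeline
from diffusers import TorchAoConfig as DiffusersTorchAoConfig
from diffusers.quantizers import PipelineQuantizationConfig  # still needed; the diff drops this import
from transformers import TorchAoConfig as TransformersTorchAoConfig

# int8 weight-only quantization via torchao, applied per component:
# the Flux transformer (a diffusers model) and the T5 text encoder
# text_encoder_2 (a transformers model).
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersTorchAoConfig("int8_weight_only"),
        "text_encoder_2": TransformersTorchAoConfig("int8_weight_only"),
    }
)

# Assumed loading/generation code; not part of the visible diff.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux-int8.png")
```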