dn6 (HF Staff) committed · Commit 756a927 · verified · 1 Parent(s): a6d21c2

Upload README.md with huggingface_hub

Files changed (1): README.md +39 -0
README.md ADDED
@@ -0,0 +1,39 @@
## Setup

Install the latest version of `diffusers`

```shell
pip install git+https://github.com/huggingface/diffusers.git
```
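To confirm the source install took effect, you can print the installed version; a minimal sketch (the exact version string depends on the commit you installed):

```python
import diffusers

# A source install from the main branch typically reports a ".dev0" version suffix
print(diffusers.__version__)
```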
Log in to your Hugging Face account

```shell
hf auth login
```
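If you would rather authenticate from Python (for example inside a notebook), `huggingface_hub` provides an equivalent programmatic login; a minimal sketch:

```python
from huggingface_hub import login

# Prompts for a User Access Token (or pass token="hf_...") and stores it locally
login()
```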
## How to use

The following code snippet demonstrates how to use the [Flux2](https://huggingface.co/black-forest-labs/FLUX.2-dev) modular pipeline with a remote text encoder and group offloading. It requires approximately 8GB of VRAM and 64GB of CPU RAM to generate an image.
```python
import torch
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.flux2 import ALL_BLOCKS

# Assemble the pipeline from the block preset that uses a remote text encoder
blocks = SequentialPipelineBlocks.from_blocks_dict(ALL_BLOCKS["remote"])
pipe = blocks.init_pipeline("diffusers/flux2-modular")

# Load all components on CPU first, then move only the VAE to the GPU
pipe.load_components(torch_dtype=torch.bfloat16, device_map="cpu")
pipe.vae.to("cuda")

# Offload the transformer at the leaf level, streaming weights onto the GPU as needed
pipe.transformer.enable_group_offload(
    offload_type="leaf_level",
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    use_stream=True,
    low_cpu_mem_usage=True,
)

prompt = "a photo of a cat"
output = pipe(prompt=prompt, num_inference_steps=28, output="images")
output[0].save("flux2-modular.png")
```
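To sanity-check the approximate 8GB VRAM figure on your own hardware, you could inspect PyTorch's peak-memory counter after generation; a minimal sketch, assuming the snippet above has just run on a CUDA device:

```python
import torch

# Peak GPU memory allocated by tensors during the run, reported in GiB
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak allocated VRAM: {peak_gib:.1f} GiB")
```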