---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md

language:
  - en
library_name: diffusers
pipeline_tag: text-to-image

tags:
  - Text-to-Image
  - ControlNet
  - Diffusers
  - Flux.1-dev
  - image-generation
  - Stable Diffusion
base_model: black-forest-labs/FLUX.1-dev
---

# FLUX.1-dev-ControlNet-Depth

This repository contains a Depth ControlNet for the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) model, jointly trained by researchers from the [InstantX Team](https://huggingface.co/InstantX) and [Shakker Labs](https://huggingface.co/Shakker-Labs).

<div class="container">
  <img src="./assets/poster.png" width="1024"/>
</div>

# Model Cards
- The model consists of 4 FluxTransformerBlocks and 1 FluxSingleTransformerBlock.
- This checkpoint was trained on both real and generated image datasets for 70K steps on 16×A800 GPUs, with a total batch size of 16×4=64 at a resolution of 1024×1024 and a learning rate of 5e-6. We use [Depth-Anything-V2](https://github.com/DepthAnything/Depth-Anything-V2) to extract depth maps; a preprocessing sketch follows this list.
- The recommended `controlnet_conditioning_scale` range is 0.3-0.7.
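
For reference, depth maps can be extracted with Depth-Anything-V2 through the `transformers` depth-estimation pipeline. The snippet below is a minimal sketch, not part of this repository: the checkpoint name `depth-anything/Depth-Anything-V2-Small-hf` and the input path are assumptions, and any Depth-Anything-V2 checkpoint on the Hub should behave the same way.

```python
# Minimal sketch: extract a depth map to use as the control image.
# Assumptions: the depth-anything/Depth-Anything-V2-Small-hf checkpoint
# and a local input image named "input.png".
from transformers import pipeline
from diffusers.utils import load_image

depth_estimator = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)
source_image = load_image("input.png")  # hypothetical input image
control_image = depth_estimator(source_image)["depth"]  # PIL grayscale depth map
control_image.save("depth.png")
```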

# Showcases

<div class="container">
  <img src="./assets/teaser.png" width="1024"/>
</div>


# Inference
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = "black-forest-labs/FLUX.1-dev"
controlnet_model = "Shakker-Labs/FLUX.1-dev-ControlNet-Depth"

# Load the Depth ControlNet and attach it to the FLUX.1-dev base pipeline.
controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(
    base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# The control image is a depth map extracted with Depth-Anything-V2.
control_image = load_image("https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth/resolve/main/assets/cond1.png")
prompt = "an old man with white hair"

image = pipe(
    prompt,
    control_image=control_image,
    controlnet_conditioning_scale=0.5,  # recommended range: 0.3-0.7
    width=control_image.size[0],
    height=control_image.size[1],
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
image.save("image.jpg")
```
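
The pipeline output is a standard PIL image. If the full bfloat16 pipeline does not fit into GPU memory, diffusers' model CPU offloading can be used instead of moving the whole pipeline to CUDA; this is a general diffusers option rather than something specific to this checkpoint.

```python
# Optional: trade speed for memory by offloading idle submodules to CPU.
# Call this instead of pipe.to("cuda").
pipe.enable_model_cpu_offload()
```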

For multi-ControlNets support, please refer to [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro).
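
As a rough illustration of that pattern, the sketch below combines this depth model with the Canny model listed under Resources, assuming diffusers' `FluxMultiControlNetModel` wrapper; the Canny control image path is a placeholder, and the exact argument conventions should be checked against the Union-Pro card.

```python
# Hedged sketch: stacking two Flux ControlNets (depth + Canny) in one pipeline.
# Assumptions: diffusers' FluxMultiControlNetModel wrapper and a local
# Canny edge map named "canny.png".
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel, FluxMultiControlNetModel

controlnet_depth = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Depth", torch_dtype=torch.bfloat16
)
controlnet_canny = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
controlnet = FluxMultiControlNetModel([controlnet_depth, controlnet_canny])

pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

depth_map = load_image("https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth/resolve/main/assets/cond1.png")
canny_map = load_image("canny.png")  # hypothetical local Canny edge map

# One control image and one conditioning scale per ControlNet.
image = pipe(
    "an old man with white hair",
    control_image=[depth_map, canny_map],
    controlnet_conditioning_scale=[0.5, 0.4],
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
```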

# Resources
- [InstantX/FLUX.1-dev-Controlnet-Canny](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Depth](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro)

# Acknowledgements
This project is sponsored and released by [Shakker AI](https://www.shakker.ai/). All rights reserved.