---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
pipeline_tag: text-to-image
tags:
- LoRA
- Stable Diffusion
- image-generation
- Flux
---
![Controlnet collections for Flux](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/flux-controlnet-collections.png?raw=true)
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
This repository provides a collection of ControlNet checkpoints for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.
![Example Picture 1](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_3.png?raw=true)
# Training details
The [XLabs AI](https://github.com/XLabs-AI) team is happy to publish fine-tuning scripts for Flux, including:
- **LoRA** πŸ”₯
- **ControlNet** πŸ”₯
[See our GitHub](https://github.com/XLabs-AI/x-flux) for the training script and training configs.
# Training Dataset
The dataset should have the following format for the training process:
```
β”œβ”€β”€ images/
β”‚ β”œβ”€β”€ 1.png
β”‚ β”œβ”€β”€ 1.json
β”‚ β”œβ”€β”€ 2.png
β”‚ β”œβ”€β”€ 2.json
β”‚ β”œβ”€β”€ ...
```
Each `.json` file contains a `"caption"` field with the text prompt for the image of the same name.
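For illustration, here is a minimal sketch (in Python) of how one such image/caption pair could be created; the `images/` directory name and the `"caption"` field follow the layout above, while the file name and prompt text are hypothetical examples:
```python
# Minimal sketch: write a caption file next to an existing training image.
# "images/1.png" is assumed to exist already; only the JSON side is shown here.
import json
from pathlib import Path

dataset_dir = Path("images")
dataset_dir.mkdir(exist_ok=True)

caption = {"caption": "a viking man with white hair, cinematic, full HD"}  # hypothetical prompt
with open(dataset_dir / "1.json", "w") as f:
    json.dump(caption, f)
```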
# Models
Our collection supports 3 models:
- Canny
- HED
- Depth (Midas)
Each ControlNet is trained at 1024x1024 resolution.
However, we recommend generating images at 1024x1024 for Depth, and at 768x768 for Canny and HED, for better results.
# Examples
See examples of our models' results below.
Some generation results, together with the corresponding input images, are also provided in "Files and versions".
# Inference
Use `main.py` from our [official repo](https://github.com/XLabs-AI/x-flux).
We do not guarantee that our checkpoints will work correctly with other repositories and tools until we add official support for them, because of the specifics of our sampling implementation.
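If you want to fetch a checkpoint from this collection manually (for example, to inspect it or pass a local path to another tool), a minimal sketch using the `huggingface_hub` library is shown below; the repo id and file name are the same ones passed to `main.py` in the examples that follow:
```python
# Download the Canny ControlNet checkpoint to the local Hugging Face cache
# and print the resulting file path.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="XLabs-AI/flux-controlnet-collections",
    filename="flux-canny-controlnet.safetensors",
)
print(checkpoint_path)
```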
The examples below show how to launch our models:
## Canny ControlNet
```bash
python3 main.py \
--prompt "a viking man with white hair looking, cinematic, MM full HD" \
--image input_image_canny.jpg \
--control_type canny \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-canny-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 1](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/canny_example_1.png?raw=true)
## Depth ControlNet
```bash
python3 main.py \
--prompt "Photo of the bold man with beard and laptop, full hd, cinematic photo" \
--image input_image_depth1.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "photo of handsome fluffy black dog standing on a forest path, full hd, cinematic photo" \
--image input_image_depth2.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_2.png?raw=true)
```bash
python3 main.py \
--prompt "Photo of japanese village with houses and sakura, full hd, cinematic photo" \
--image input_image_depth3.webp \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_3.png?raw=true)
## HED ControlNet
```bash
python3 main.py \
--prompt "2d art of a sitting african rich woman, full hd, cinematic photo" \
--image input_image_hed1.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "anime ghibli style art of a running happy white dog, full hd" \
--image input_image_hed2.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_2.png?raw=true)
# License
Our weights fall under the [FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).