---
license: openrail
datasets:
- ChristophSchuhmann/improved_aesthetics_6.5plus
language:
- en
---
Based on my GitHub monologue at [Edge Drawing - a Canny alternative](https://github.com/lllyasviel/ControlNet/discussions/318).

This ControlNet controls image generation with edge maps generated by [EdgeDrawing Parameter-Free](https://github.com/CihanTopal/ED_Lib).

For usage see the model page on [Civitai.com](https://civitai.com/models/149740). For evaluation results see the corresponding .zip files with images. To run your own evaluation you can use [inference.py](https://gitlab.com/-/snippets/3602096).
**EdgeDrawing Parameter-Free**
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c0ec65a2ec8cb2f589233a/jmdCGeMJx4dKFGo44cuEq.png)
**Example**
sampler=UniPC steps=20 cfg=7.5 seed=0 batch=9 model: v1-5-pruned-emaonly.safetensors cherry-picked: 1/9
prompt: _a detailed high-quality professional photo of swedish woman standing in front of a mirror, dark brown hair, white hat with purple feather_
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c0ec65a2ec8cb2f589233a/2PSWsmzLdHeVG-i67S7jF.png)
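The example above can be reproduced with diffusers along the following lines. This is a minimal sketch rather than the exact [inference.py](https://gitlab.com/-/snippets/3602096) script: the repository id `GeroldMeisinger/control-edgedrawing` and the conditioning image path are assumptions, and the settings mirror the example (UniPC, 20 steps, cfg 7.5, seed 0).
```
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

# assumed repository id; point this at the checkpoint you actually downloaded
controlnet = ControlNetModel.from_pretrained("GeroldMeisinger/control-edgedrawing", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

edge_map = load_image("edge_map.png")  # white-on-black EDPF edge map, 512x512
image = pipe(
    "a detailed high-quality professional photo of swedish woman standing in front of a mirror, "
    "dark brown hair, white hat with purple feather",
    image=edge_map,
    num_inference_steps=20,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("output.png")
```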
**Canny Edge Detection (default in Automatic1111)**
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c0ec65a2ec8cb2f589233a/JZTpa-HZfw0NUYnxZ52Iu.png)
# Image dataset
* [laion2B-en aesthetics>=6.5 dataset](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6.5plus)
* `--min_image_size 512 --max_aspect_ratio 2 --resize_mode="center_crop" --image_size 512`
* resulting in 180k images (see the download sketch below)
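These filter flags match [img2dataset](https://github.com/rom1504/img2dataset) options, so the download step presumably looked roughly like the following sketch (assuming img2dataset was used; the metadata file name and output paths are placeholders):
```
from img2dataset import download

# sketch only: assumes img2dataset; file names and paths are placeholders
download(
    url_list="improved_aesthetics_6.5plus.parquet",  # LAION aesthetics>=6.5 metadata
    input_format="parquet",
    url_col="URL",
    caption_col="TEXT",
    output_folder="laion2b_en_aes65",
    output_format="webdataset",
    image_size=512,
    resize_mode="center_crop",
    min_image_size=512,
    max_aspect_ratio=2.0,
)
```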
# Training
```
accelerate launch train_controlnet.py ^
--pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" ^
--output_dir="control-edgedrawing-[version]-fp16/" ^
--dataset_name="mydataset" ^
--mixed_precision="fp16" ^
--resolution=512 ^
--learning_rate=1e-5 ^
--train_batch_size=1 ^
--gradient_accumulation_steps=4 ^
--gradient_checkpointing ^
--use_8bit_adam ^
--enable_xformers_memory_efficient_attention ^
--set_grads_to_none ^
--seed=0
```
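`--dataset_name="mydataset"` is a placeholder. The diffusers `train_controlnet.py` script expects an image column, a conditioning-image column and a caption column (by default `image`, `conditioning_image` and `text`; remappable via `--image_column`, `--conditioning_image_column` and `--caption_column`). A minimal sketch, under those assumptions, of assembling such a dataset and pushing it to the Hub (file paths and the repository id are placeholders):
```
from datasets import Dataset, Image

# placeholder file lists: target images, their EDPF edge maps and captions
ds = Dataset.from_dict({
    "image":              ["images/000001.png", "images/000002.png"],
    "conditioning_image": ["edges/000001.png",  "edges/000002.png"],
    "text":               ["a photo of ...",    "a painting of ..."],
})
ds = ds.cast_column("image", Image()).cast_column("conditioning_image", Image())

# push to the Hub and pass the repository id via --dataset_name
ds.push_to_hub("your-username/mydataset")
```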
# Versions
**Experiment 5 - control-edgedrawing-cv480edpf-drop0+50-fp16-checkpoint-118000**
See experiment 4. Resumed for epoch 2 from checkpoint 90000 using `--proportion_empty_prompts=0.5` => results became worse, the ControlNet didn't pick up on empty prompts (I also tried checkpoint-104000). Restarting with 50% prompt drop.
**Experiment 4 - control-edgedrawing-cv480edpf-drop0-fp16-checkpoint-90000**
Conditioning images generated with [edpf.py](https://gitlab.com/-/snippets/3601881) using [opencv-contrib-python::ximgproc::EdgeDrawing](https://docs.opencv.org/4.8.0/d1/d1c/classcv_1_1ximgproc_1_1EdgeDrawing.html).
```
import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # detectEdges expects an 8-bit grayscale image

ed = cv2.ximgproc.createEdgeDrawing()
params = cv2.ximgproc.EdgeDrawing.Params()
params.PFmode = True                   # parameter-free mode (EDPF)
ed.setParams(params)
ed.detectEdges(image)                  # detected edges are stored internally
edge_map = ed.getEdgeImage()           # 8-bit edge map (white edges on black)
cv2.imwrite("edge_map.png", edge_map)
```
90000 steps total (45000 steps on the original images, 45000 steps on left-right flipped images)
**Experiment 3 - control-edgedrawing-cv480edpf-drop0-fp16-checkpoint-45000**
See experiment 4, stopped at 45000 steps. This is version 0.1 on Civitai.
**Experiment 2 - control-edgedrawing-default-noisy-drop0-fp16-checkpoint-40000**
Conditioning images generated with https://github.com/shaojunluo/EDLinePython
Default settings are:
`smoothed=False`
```
{
    'ksize': 5,
    'sigma': 1.0,
    'gradientThreshold': 36,
    'anchorThreshold': 8,
    'scanIntervals': 1
}
```
Trained for 40000 steps with `smoothed=True` but otherwise default settings, and without empty prompts => conditioning images are too noisy.
**Experiment 1 - control-edgedrawing-default-drop50-fp16-checkpoint-40000**
Same as experiment 2, with the additional argument `--proportion_empty_prompts=0.5`. Trained for 40000 steps with default settings => dropping the prompts was probably too excessive.
Update: a bug in the algorithm produces overly sparse images with default settings, see https://github.com/shaojunluo/EDLinePython/issues/4
# Questions and answers
**Q: What's the point of another edge control net anyway?**
A: 🤷