---
license: openrail
datasets:
- laion/laion2B-en-aesthetic
language:
- en
---

Based on https://github.com/lllyasviel/ControlNet/discussions/318

```
accelerate launch train_controlnet.py ^
 --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" ^
 --output_dir="control-edgedrawing-default-drop50-fp16/" ^
 --dataset_name="mydataset" ^
 --mixed_precision="fp16" ^
 --proportion_empty_prompts=0.5 ^
 --resolution=512 ^
 --learning_rate=1e-5 ^
 --train_batch_size=1 ^
 --gradient_accumulation_steps=4 ^
 --gradient_checkpointing ^
 --use_8bit_adam ^
 --enable_xformers_memory_efficient_attention ^
 --set_grads_to_none ^
 --seed=0
```

Trained for 40000 steps on images converted with https://github.com/shaojunluo/EDLinePython using `smoothed = False` and the default settings:

```
{
  'ksize'            : 5,
  'sigma'            : 1.0,
  'gradientThreshold': 36,
  'anchorThreshold'  : 8,
  'scanIntervals'    : 1
}
```

**TODO**

Results are not good so far:

* `--proportion_empty_prompts=0.5` may be excessive for 40000 steps
* Use `smoothed = True` next time; maybe ControlNet doesn't pick up on single-pixel edges
* Find a better parameter spread instead of the default values; most converted images are very sparse
* Train for more steps
* Train on a more diverse dataset
* Train at higher precision
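For reference, the conversion step looked roughly like the sketch below. The `EdgeDrawing` class and its call signature are assumptions based on the EDLinePython repository; the actual module name, entry point, and return values may differ, so check the repo before running.

```python
# Hypothetical sketch of the edge-map conversion step.
# The EdgeDrawing interface is assumed from the EDLinePython repo
# and may not match the real API exactly.
import cv2
from main import EdgeDrawing  # assumed module/class name in EDLinePython

# The default parameters used for this training run (see settings above).
ED_PARAM = {
    'ksize': 5,
    'sigma': 1.0,
    'gradientThreshold': 36,
    'anchorThreshold': 8,
    'scanIntervals': 1,
}

ed = EdgeDrawing(ED_PARAM)                              # assumed constructor
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input file
edges, edge_map = ed.EdgeDrawing(img, smoothed=False)   # assumed call signature
cv2.imwrite("edge_map.png", edge_map)                   # conditioning image for training
```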
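As a usage sketch: assuming the checkpoint in `control-edgedrawing-default-drop50-fp16/` was saved in diffusers format, it can be loaded like any other ControlNet. The prompt and file names below are placeholders; the conditioning image should be an edge map produced as in the conversion sketch above.

```python
# Minimal inference sketch with diffusers; paths and prompt are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the trained ControlNet and attach it to the same base model used for training.
controlnet = ControlNetModel.from_pretrained(
    "control-edgedrawing-default-drop50-fp16", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Condition on an EDLine edge map (hypothetical example file).
control_image = load_image("edge_map.png")
image = pipe(
    "a photo of a cat", image=control_image, num_inference_steps=20
).images[0]
image.save("out.png")
```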