---
license: other
---

This is a repository for storing as many LECOs as I can think of, emphasizing quantity over quality.

Files will continue to be added as needed.

Because the guidance_scale parameter is set somewhat aggressively, these LECOs tend to be very sensitive and overly strong; a weight between -0.1 and -1 is appropriate in most cases.
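
Since each LECO is saved under its target tag's name into the webui's Lora folder (see the save path in the config below), a negative weight can be applied with the usual `<lora:name:weight>` prompt syntax. The tag here is only a hypothetical example:

```
1girl, solo, <lora:blush:-0.5>
```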

All LECOs are trained with the target equal to the positive prompt and the erase action.

The targets are taken from Danbooru's GENERAL tags, working down the list from the most frequently used, and I sometimes also add phrases of my own. The settings used are shown below: the prompts template (prompts.yaml) followed by the training config.

```yaml
- target: "$query"
  positive: "$query"
  unconditional: ""
  neutral: ""
  action: "erase"
  guidance_scale: 1.0
  resolution: 512
  batch_size: 4
```

```yaml
prompts_file: prompts.yaml
pretrained_model:
  name_or_path: "/storage/model-1892-0000-0000.safetensors"
  v2: false
  v_pred: false
network:
  type: "lierla"
  rank: 4
  alpha: 1.0
  training_method: "full"
train:
  precision: "bfloat16"
  noise_scheduler: "ddim"
  iterations: 50
  lr: 1
  optimizer: "Prodigy"
  lr_scheduler: "cosine"
  max_denoising_steps: 50

save:
  name: "$query"
  path: "/stable-diffusion-webui/models/Lora/LECO/"
  per_steps: 50
  precision: "float16"

logging:
  use_wandb: false
  verbose: false

other:
  use_xformers: true
```
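
The `$query` placeholder is what changes from file to file: it is replaced with the tag being erased in the prompts template and in the saved LoRA's name. As a rough illustration of that substitution step (the tag list, output layout, and the script itself are my own sketch, not something shipped in this repository), in Python:

```python
# Minimal sketch: write one prompts.yaml per tag by filling in the
# "$query" placeholder used in the templates above. The tag list and
# output layout are illustrative assumptions, not part of this repo.
from pathlib import Path
from string import Template

PROMPTS_TEMPLATE = Template('''\
- target: "$query"
  positive: "$query"
  unconditional: ""
  neutral: ""
  action: "erase"
  guidance_scale: 1.0
  resolution: 512
  batch_size: 4
''')

# Hypothetical Danbooru GENERAL tags, most frequently used first.
tags = ["blush", "smile", "long_hair"]

for tag in tags:
    job_dir = Path("jobs") / tag
    job_dir.mkdir(parents=True, exist_ok=True)
    # string.Template expands $query exactly where the template above uses it.
    (job_dir / "prompts.yaml").write_text(PROMPTS_TEMPLATE.substitute(query=tag))
```

The training config above is templated the same way (its `save.name` is also `"$query"`); each generated prompts file is then referenced via `prompts_file`, and the config is passed to the LECO training script (see the upstream LECO repository for the exact command).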