
(It would be greatly appreciated if someone could point me to a clean source of Tokyo 7th Sisters assets. I don't really want to scrape Twitter or reverse-engineer the game API.)

Mask, Don't Negative Prompt: Dealing with undesirable parts of training images

Introduction

Training images aren't always clean. Sometimes, when training for a given target, unrelated parts of the images, such as text, frames, or watermarks, are also learned by the model. Several strategies can be applied to this problem, each with shortcomings:

  1. Cropping: Leave out the undesired parts. Modifies the source composition and is not applicable in some cases.
  2. Inpainting: Preprocess the data, replacing undesirable parts with generated pixels. Requires a good inpainting prompt / model.
  3. Negative prompts: Train as-is and add negative prompts when generating new images. Requires the model to know how the undesirable parts map to the prompt.

Another simple strategy is effective:

  1. Masking: Multiply the per-pixel loss by a predefined mask, so that masked regions contribute no gradient.

This method is not new, but the most popular LoRA training script has yet to add built-in support for it.
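As a concrete illustration, below is a minimal sketch of such a masked loss; the helper name and tensor shapes are my own assumptions, not any particular script's API:

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model_pred: torch.Tensor,
                          target: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """Masked MSE loss for diffusion training (hypothetical helper).

    model_pred, target: (B, C, H, W) latent-space tensors
                        (e.g. predicted noise vs. actual noise).
    mask: (B, 1, h, w), 1 = train on this region, 0 = ignore it.
    """
    # Resize the mask to latent resolution if it was supplied at
    # image resolution (the VAE downscales by a factor of 8).
    if mask.shape[-2:] != model_pred.shape[-2:]:
        mask = F.interpolate(mask, size=model_pred.shape[-2:], mode="nearest")
    per_pixel = F.mse_loss(model_pred, target, reduction="none")
    mask = mask.expand_as(per_pixel)
    # Average only over unmasked elements, so masked regions
    # contribute exactly zero gradient.
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1)
```

Normalizing by the number of unmasked elements rather than the full tensor size keeps the loss magnitude comparable between images with large and small masks.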

Experiment

60 images of Serizawa Momoka from Tokyo 7th Sisters, containing card text and decorations, were used.

A masked LoRA and a plain unmasked LoRA were trained.

For the masked version, a mask was drawn over the source images using image editing software. Note that since the VAE has an 8x scaling factor, what the model sees is effectively the mask pixelated into 8x8 blocks. Tags describing the masked-away parts were removed, since those parts no longer contribute to training.
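For reference, here is a small sketch of how such a hand-painted mask could be prepared, assuming it is saved as a grayscale image (white = train, black = ignore); the function name and threshold are hypothetical:

```python
import numpy as np
import torch
from PIL import Image

def load_mask(path: str, latent_h: int, latent_w: int) -> torch.Tensor:
    """Load a hand-painted grayscale mask and downscale it to the
    latent resolution the model actually sees."""
    mask = Image.open(path).convert("L")
    # One latent pixel covers an 8x8 block of image pixels, so the
    # mask is effectively reduced to 1/8 resolution.
    mask = mask.resize((latent_w, latent_h), Image.NEAREST)
    arr = (np.asarray(mask, dtype=np.float32) / 255.0 > 0.5).astype(np.float32)
    return torch.from_numpy(arr)[None, None]  # (1, 1, latent_h, latent_w)
```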

Results

[Image: X/Y comparison grid of the masked and unmasked LoRAs]

Unlike negative prompts, the masked version removes the unwanted elements 100% of the time.

Future work

  • Auto generation of masks with segmentation models
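For instance, masks could be bootstrapped from an off-the-shelf detector. Below is a hypothetical sketch using easyocr to black out detected card text; a segmentation model could be substituted to handle non-text decorations, and the language list is an assumption:

```python
import easyocr
import numpy as np
from PIL import Image, ImageDraw

def auto_text_mask(image_path: str) -> Image.Image:
    """Hypothetical auto-mask: paint detected text regions black
    (ignore) and leave the rest white (train)."""
    reader = easyocr.Reader(["ja", "en"])  # assumed card-text languages
    image = Image.open(image_path).convert("RGB")
    mask = Image.new("L", image.size, 255)
    draw = ImageDraw.Draw(mask)
    for bbox, _text, _conf in reader.readtext(np.asarray(image)):
        # bbox is four corner points; black out the detected text region
        draw.polygon([tuple(p) for p in bbox], fill=0)
    return mask
```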