
# Kawawa Shizuko (Blue Archive)

河和シズコ (ブルーアーカイブ) / 카와와 시즈코 (블루 아카이브) / 河和静子 (碧蓝档案)

Download here.

## Preview

*Preview images: Shizuko portrait, normal outfit, swimsuit outfit.*

I spent a lot longer than usual on this one experimenting with batch sizes, learning rates, and network dimension/alpha. I didn't come out with any conclusive findings and ended up going back to settings similar to the Koharu LoRA's.

## Usage

Use any or all of the following tags to summon Shizuko: `shizuko, halo, 1girl, maid headdress, purple eyes, brown hair`

  • Hair and eye tags are optional.

For her normal outfit: `two side up, wa maid, japanese clothes, pink kimono, apron, black skirt, white thighhighs, hair ribbon`

For her swimsuit outfit: `twintails, swimsuit, pink bikini, frilled bikini, frills, hair flower, fake animal ears`

For her lewd expression: `naughty face, seductive smile, :d, blush`

Not all tags may be necessary.

Her summer alt's cat ears tend to leak into other outfits, but this can usually be fixed by putting `fake animal ears` in the negative prompt.

Her shotgun sling tends to show up on her normal outfit and is a little hard to get rid of, unfortunately. You can try `gun sling` or `strap` in the negative prompt. If I were retraining this, I would go back, tag the sling in all images, and see if that makes it easier to remove. Maybe another time.
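As a rough sketch (not from the original card), the tags above could be combined into a typical Stable Diffusion WebUI-style prompt; the exact weighting and ordering are up to you:

```text
shizuko, halo, 1girl, maid headdress, purple eyes, brown hair,
two side up, wa maid, japanese clothes, pink kimono, apron,
black skirt, white thighhighs, hair ribbon
Negative prompt: fake animal ears, gun sling, strap
```

The negative entries address the cat-ear leakage and shotgun-sling issues described above.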

## Training

Exact parameters are provided in the accompanying JSON files.

  • Trained on a set of 125 images; 88 swimsuit, 37 normal.
    • 11 repeats for normal outfit
    • 8 repeats for swimsuit outfit
    • 3 batch size, 4 epochs
    • (88*8 + 37*11) / 3 * 4 = 1482 steps
  • 0.0958 loss
  • Due to a change in the kohya GUI script, a few of my previous LoRAs (Mari, Michiru, Reisa, Sora, Chise) were accidentally trained without my painstakingly pruned tags. This is probably why they seem overfit to the characters' outfits, though the results were surprisingly good considering there were literally no tags.
  • Once I found this issue, I figured that since I'd have to "re-learn" some of my settings anyway to account for proper captions, I may as well experiment with batch/dim/alpha/LR values. Unfortunately, no conclusive results. Ended up going back to tried-and-true settings from several LoRAs ago.
  • constant_with_warmup scheduler instead of cosine, since it seems to train in fewer steps at the cost of being more finicky
  • 1.5e-5 text encoder LR
  • 1.5e-4 unet LR
  • 1.5e-5 optimizer LR, though in my experience this makes very little difference if the above two are already set
  • Initially tagged with the WD1.4 swinv2 model. Tags minimally pruned/edited.
    • Removed `blue archive` from the tags. I think it just adds noise.
    • `keep_tokens` was accidentally set to 3. This means it probably usually kept `shizuko`, `1girl`, and some other random tag.
  • Used `network_dim` 128 (same as usual) / `network_alpha` 128 (default)
  • Trained without VAE.
  • Dataset can be found on the mega.co.nz repository.
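The step count above follows from the dataset sizes, repeats, batch size, and epoch count. A quick sanity check of that arithmetic (note this mirrors the formula in the list; kohya's scripts may round batches per epoch instead, which would give a slightly different total):

```python
import math

# Per-outfit image counts and dataset repeats, from the list above
swimsuit_images, swimsuit_repeats = 88, 8
normal_images, normal_repeats = 37, 11
batch_size, epochs = 3, 4

# Total (repeated) images seen per epoch: 88*8 + 37*11 = 1111
images_per_epoch = swimsuit_images * swimsuit_repeats + normal_images * normal_repeats

# Optimizer steps across all epochs, rounding the fractional batch up
steps = math.ceil(images_per_epoch / batch_size * epochs)
print(steps)  # 1482
```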

## Revisions

  • v9 (2023-02-06)
    • Initial release.