
Yuragi Momoka (Blue Archive)

由良木モモカ (ブルーアーカイブ) / 유라기 모모카 (블루 아카이브) / 由良木桃香 (碧蓝档案)

Download here.

Table of Contents

  • Preview
  • Usage
  • Training
  • Revisions

Preview

(Preview images: Momoka portrait; previews 1–3.)

Usage

Use any or all of the following tags to summon Momoka: momoka, halo, short twintails, horns, bright pupils, pointy ears, hair ornament, ahoge

  • Add (dragon tail:1.3) for her tail (even though I'm not quite sure Momoka is truly a dragon?)

For her normal outfit: sleeveless dress, collared dress, blue necktie, white open jacket, off shoulder, loose socks, white shoes

  • Add frilled dress if the frills at the bottom of her dress are not correctly displayed.

For her accessories: potato chips, bag of chips, holding food

For her smug expression: smug, open mouth, sharp teeth, :3, :d

  • Alternatively, smug, grin, sharp teeth, smile for a toothy grin

Here is a list of all tags included in the training dataset, sorted by frequency.
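The tags above are written with a WebUI-style workflow in mind (the (tag:1.3) weighting syntax is WebUI-specific and is not parsed by plain diffusers). If you script generation with diffusers instead, a minimal sketch might look like the following; the base checkpoint, LoRA file path, LoRA scale, and sampler settings are all assumptions on my part, not part of this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder base model: substitute the SD 1.x anime checkpoint you normally use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder directory/filename for the downloaded LoRA; needs a recent diffusers release.
pipe.load_lora_weights("./lora", weight_name="momoka.safetensors")

prompt = (
    "masterpiece, best quality, momoka, halo, short twintails, horns, "
    "bright pupils, pointy ears, hair ornament, ahoge, "
    "sleeveless dress, collared dress, blue necktie, white open jacket, "
    "off shoulder, loose socks, white shoes, potato chips, holding food, "
    "smug, open mouth, sharp teeth"
)
negative = "lowres, bad anatomy, bad hands, worst quality"

image = pipe(
    prompt,
    negative_prompt=negative,
    width=832, height=832,                   # matches the training resolution noted below
    num_inference_steps=28,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.8},   # LoRA strength; adjust to taste
).images[0]
image.save("momoka.png")
```

Mix and match tags from the groups above (outfit, accessories, expression) depending on what you want in the image.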

Training

Exact parameters are provided in the accompanying JSON files.

  • Trained on a set of 94 images.
    • 13 repeats
    • 3 batch size, 4 epochs
    • (94 * 13) / 3 * 4 ≈ 1629; 1654 total steps reported (see the step-count sketch after this list)
  • 0.0737 loss
  • Initially tagged with WD1.4 swin-v2 model. Tags pruned/edited for consistency.
  • constant_with_warmup scheduler
  • 1.5e-5 text encoder LR
  • 1.5e-4 unet LR
  • 1e-5 optimizer LR
  • Used network_dimension 128 (same as usual) / network alpha 128 (default)
    • Resized to 24 after training
    • This LoRA seemed very slightly overtrained, perhaps due to the smaller dataset, so resizing to rank 24 looked slightly better than 32.
  • Training resolution 832x832.
    • This one also came out better at 832 vs 768.
    • It's not clear to me why some LoRAs perform substantially better at 768 and others at 832.
  • Trained without VAE.
  • Training dataset available here.
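As a sanity check on the step arithmetic above, the usual formula is ceil(images × repeats / batch size) × epochs, which works out to 1632 rather than the 1654 reported; I assume the gap comes from how the trainer actually fills batches (e.g. aspect-ratio bucketing), which this sketch does not model.

```python
import math

num_images = 94   # training images
repeats = 13      # dataset repeats
batch_size = 3
epochs = 4

steps_per_epoch = math.ceil(num_images * repeats / batch_size)  # ceil(1222 / 3) = 408
total_steps = steps_per_epoch * epochs                          # 408 * 4 = 1632
print(steps_per_epoch, total_steps)
```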

Revisions

  • v1c (2023-02-19)
    • Initial release.