khanon committed on
Commit 1350f17 • 1 Parent(s): 1d34da1

adds retrained Hibiki LoRA

README.md CHANGED
@@ -23,7 +23,9 @@ Here you will find the various LoRAs I've trained, typically of Blue Archive cha
  [![Chise](chise/chara-chise-v2.png)](https://huggingface.co/khanon/lora-training/blob/main/chise/README.md)
 
  ### Hibiki
- [Available on old Mega.co.nz repository.](https://mega.nz/folder/SqYwQTRI#GN2SmGTBsV6S4q-L-V4VeA)
+ [Nekozuka Hibiki / 猫塚ヒビキ / 네코즈카 히비키 / 猫塚响](https://huggingface.co/khanon/lora-training/blob/main/hibiki/README.md)
+
+ [![Hibiki](hibiki/chara-hibiki-v3.png)](https://huggingface.co/khanon/lora-training/blob/main/hibiki/README.md)
 
  ### Hina
  [Available on old Mega.co.nz repository.](https://mega.nz/folder/SqYwQTRI#GN2SmGTBsV6S4q-L-V4VeA)
hibiki/README.md CHANGED
@@ -1,38 +1,61 @@
  # Nekozuka Hibiki (Blue Archive)
+ 猫塚ヒビキ (ブルーアーカイブ) / 네코즈카 히비키 (블루 아카이브) / 猫塚响 (碧蓝档案)
+
+ [**Download here.**](chara-hibiki-v3.safetensors)
+
+ ## Table of Contents
+ - [Preview](#preview)
+ - [Usage](#usage)
+ - [Training](#training)
+ - [Revisions](#revisions)
+
+ ## Preview
+ ![Hibiki portrait](chara-hibiki-v3.png)
+ ![Hibiki 1](example-001-v3.png)
+ ![Hibiki 2](example-002-v3.png)
+ ![Hibiki 3](example-003-v3.png)
 
  ## Usage
- *Important:* This is a fairly temperamental LoRA due to the dataset, and it needs some wrangling to get good results. It won't look good with vanilla NAI and the standard NAI negative prompt, despite being trained on nai-animefull-final.
- - Use a strong negative prompt. Consider using the bad_prompt_v2 embed at a reduced strength to dramatically improve things, though it does affect the style somewhat.
- - Use a strong model. AbyssOrangeMix2 and nutmegmix work well with this LoRA.
- - Use the negative prompt liberally to suppress cheerleader Hibiki if you don't want her, otherwise her traits tend to take over.
-
- To summon Hibiki, use the following tags. Adjust strength as needed.
- - `1girl, halo, black hair, blue eyes, bright pupils, animal ears, dog girl, tail, hair bobbles, goggles, eyewear on head, medium breasts`
-
- For regular Hibiki, add the following tags. Adjust strength as needed.
- - Prompt: `(black camisole, jacket, black shorts:1.2), (fishnet legwear:1.1)`
- - Negative prompt: `(midriff, navel, skin tight, tight:1.4), (tattoo, arm tattoo, star sticker:1.30), ski goggles, wavy mouth, embarrassed`
- - Cheerleader Hibiki tends to have a permanent embarrassed/wavy mouth expression unless you use negative tags to get rid of it.
-
- For cheerleader Hibiki, add the following tags.
- - Prompt: `cheerleader, midriff, embarrassed, wavy mouth, crop top, white pleated skirt`
- - Negative prompt: `fishnets`
-
- You can add or ignore `hibiki, blue archive`; while they were in her captions, they don't have an especially strong effect.
-
- Adjust the weight as needed. Weights from 0.95 up to 1.25 work well (higher weights may summon Cheerleader Hibiki unintentionally).
+ _※ Hibiki's outfits are rather complicated, so I have documented all of her tags here for your reference. It may not be necessary to use all of them._
+
+ Use any or all of these tags to summon Hibiki: `hibiki, halo, dog ears, bright pupils, goggles on head, black hair, dog tail`
+
+ For her normal outfit: `twintails, hair bobbles, engineering goggles, purple choker, multicolored jacket, black camisole, grey shirt, black shorts, fishnet pantyhose, dog tags`
+
+ For her cheerleader alt: `cheerleader, ponytail, crop top, wing collar, criss-cross halter, pleated miniskirt, pom pom (cheerleading)`
+ - Use `cheerleader` to strengthen or weaken the effect.
+ - If the stickers appear on incorrect outfits, add `star sticker, sticker on face, sticker on arm` to the negative prompt.
+ - If her name tag and collar appear on incorrect outfits, add `wing collar` to the negative prompt.
+
+ Calm expression: `light smile, expressionless`
+
+ Embarrassed expression: `blush, embarrassed, wavy mouth, open mouth`
+
+ Weapon: `mortar (weapon), rocket, mortar shell, suitcase, m logo`
+
+ A weight of 0.9 to 1.0 should be ideal. The LoRA is slightly overfit, so I will probably retrain it with slightly fewer steps in the future. It is still a substantial improvement over the previous version and remains reasonably flexible with Hibiki's outfits.
 
  ## Training
- *All parameters are provided in the accompanying JSON files.*
- - Trained on 119 images, separated into two sets.
- - Hibiki (regular) -- 54 images, 10 repeats
- - Hibiki (cheerleader) -- 65 images, 6 repeats
- - Dataset included a mixture of SFW and NSFW. Mostly SFW.
- - As you can guess, her cheerleader alt makes up the vast majority of her art, and artists are more consistent when drawing her. Training them all together did not work well, so I had to split up the datasets.
- - Dataset was tagged with WD1.4 interrogator. Shuffling was enabled.
- - `hibiki, blue archive` were added to the start of each caption. keep_tokens=2 was set to prevent shuffling those tokens.
- - Her cheerleader outfit is much more easily recognized by the tagger, leading to stronger tags. Even human artists can't seem to agree on what she's wearing in her normal outfit.
- - Trained at 768px resolution. I stopped training a 512 variant because it was almost always worse and added to training time.
- - Two variants included.
- - v1: first attempt at splitting the dataset. Works well, but not as coherent in some ways (halo particularly) and still tends to blend details from her two outfits.
- - v2: using the split dataset. Increased batch size slightly (3 >>> 4), reduced learning rate (3e-5 >>> 1e-5), increased epochs (1 >>> 3). Generally an improvement, though sometimes v1 is easier to wrangle.
+ *Exact parameters are provided in the accompanying JSON files.*
+ - Trained on a set of 162 images: 82 cheerleader, 80 normal.
+ - 8 repeats for the normal outfit
+ - 7 repeats for the cheerleader outfit
+ - Batch size 3, 4 epochs
+ - `(82*7 + 80*8) / 3 * 4` ≈ 1619 steps
+ - 0.0905 loss
+ - 832x832 training resolution
+ - `constant_with_warmup` scheduler
+ - Initially tagged with the WD1.4 swin-v2 model, then heavily edited
+ - Used network_dim 128 (same as usual) / network_alpha 128 (default)
+ - Resized to 32 dim after training
+ - Trained without a VAE.
+ - [Training dataset available here.](https://mega.nz/folder/2u4HgRoK#1wqHcDAJRi6jrtTAmNjB-Q)
+
+ ## Revisions
+ - v3 (2023-02-13)
+   - Retrained from scratch since the original LoRA was not very good.
+   - Added many new images to the dataset, particularly for Hibiki's regular outfit.
+   - Completely re-tagged the dataset. Used WD1.4 swin-v2 as a starting point, then performed extensive editing and cleanup for accuracy and consistency. Added many new tags to better describe Hibiki's outfits and expressions.
+   - Uses improved hyperparameters from my more recent models.
+ - v1 (2023-01-10)
+   - Initial release.
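
As a quick cross-check of the step count quoted in the new Training section, the figure follows directly from the image counts, repeats, batch size, and epochs. A minimal Python sketch of the same arithmetic; how the trainer handles the final partial batch of an epoch is an assumption here, so the exact total may differ by a step or two:

```python
import math

# Values taken from the Training section above.
normal_images, normal_repeats = 80, 8
cheer_images, cheer_repeats = 82, 7
batch_size, epochs = 3, 4

# Each epoch walks every image `repeats` times.
samples_per_epoch = normal_images * normal_repeats + cheer_images * cheer_repeats  # 1214
steps = math.ceil(samples_per_epoch / batch_size) * epochs
print(steps)  # 1620, in line with the ~1619 quoted above
```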
hibiki/{comparisons/example-001-variant1.png → chara-hibiki-v3.png} RENAMED
File without changes
hibiki/chara-hibiki-v3.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae2f43fa0fc02447ce982e04632c0f2020762392d394823084838c7cc7c901fc
+ size 37877571
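
Because the `.safetensors` file is stored via Git LFS, the pointer above records the SHA-256 of the real payload rather than the weights themselves. A minimal sketch for verifying a locally downloaded copy against that digest; the local filename is an assumption:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large checkpoints never need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid recorded in the LFS pointer above.
EXPECTED = "ae2f43fa0fc02447ce982e04632c0f2020762392d394823084838c7cc7c901fc"
assert sha256_of("chara-hibiki-v3.safetensors") == EXPECTED
```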
hibiki/comparisons/example-003-variant1.png DELETED

Git LFS Details

  • SHA256: df4e52cc783040cd37d7da213abba808f1b225973f85fb4d9e45cf02221c215c
  • Pointer size: 132 Bytes
  • Size of remote file: 1.69 MB
hibiki/comparisons/example-003-variant2.png DELETED

Git LFS Details

  • SHA256: c5df2f8b960a5b2884427751843e77723f0a9e0cea53b1846b9841dd7300730e
  • Pointer size: 132 Bytes
  • Size of remote file: 1.62 MB
hibiki/{comparisons/example-001-variant2.png → example-001-v3.png} RENAMED
File without changes
hibiki/{comparisons/example-002-variant1.png → example-002-v3.png} RENAMED
File without changes
hibiki/{comparisons/example-002-variant2.png → example-003-v3.png} RENAMED
File without changes
hibiki/{lora_character_hibiki_119img9repeat_512.json → lora_chara_hibiki_v3_80i8r-82i7r.json} RENAMED
@@ -3,21 +3,22 @@
  "v2": false,
  "v_parameterization": false,
  "logging_dir": "",
- "train_data_dir": "G:/sd/training/datasets/hibiki",
- "reg_data_dir": "G:/sd/training/datasets/regempty",
- "output_dir": "G:/sd/repo/extensions/sd-webui-additional-networks/models/lora",
- "max_resolution": "512,512",
+ "train_data_dir": "G:/sd/training/datasets/hibiki/dataset",
+ "reg_data_dir": "",
+ "output_dir": "G:/sd/lora/trained/chara/hibiki",
+ "max_resolution": "832,832",
+ "learning_rate": "1e-5",
  "lr_scheduler": "constant_with_warmup",
  "lr_warmup": "5",
  "train_batch_size": 3,
- "epoch": "1",
- "save_every_n_epochs": "1",
+ "epoch": "4",
+ "save_every_n_epochs": "",
  "mixed_precision": "fp16",
  "save_precision": "fp16",
- "seed": "23",
+ "seed": "31337",
  "num_cpu_threads_per_process": 32,
- "cache_latent": true,
- "caption_extention": ".txt",
+ "cache_latents": true,
+ "caption_extension": ".txt",
  "enable_bucket": true,
  "gradient_checkpointing": false,
  "full_fp16": false,
@@ -30,8 +31,8 @@
  "save_state": false,
  "resume": "",
  "prior_loss_weight": 1.0,
- "text_encoder_lr": "3e-5",
- "unet_lr": "3e-4",
+ "text_encoder_lr": "1.5e-5",
+ "unet_lr": "1.5e-4",
  "network_dim": 128,
  "lora_network_weights": "",
  "color_aug": false,
@@ -39,5 +40,20 @@
  "clip_skip": 2,
  "gradient_accumulation_steps": 1.0,
  "mem_eff_attn": false,
- "output_name": "hibiki-NAI-VAE-512px-357steps"
+ "output_name": "chara-hibiki-v3",
+ "model_list": "",
+ "max_token_length": "150",
+ "max_train_epochs": "",
+ "max_data_loader_n_workers": "",
+ "network_alpha": 128,
+ "training_comment": "Character: `hibiki, halo, dog ears, bright pupils, goggles on head, black hair, dog tail`\nStandard outfit: `twintails, hair bobbles, engineering goggles, purple choker, dog tags, multicolored jacket, black camisole, grey shirt, black shorts`\nCheerleader: `cheerleader, ponytail, crop top, wing collar, criss-cross halter, pleated miniskirt, pom pom (cheerleading)`\n\nCheerleader stickers are tagged with `sticker on face, sticker on arm, star sticker`.\nUse `engineering goggles` for her normal goggles and `blue goggles` for her cheerleader goggles.\nSeveral alternate outfits/cosplays are also included. Use `alternate outfit` to encourage different outfits.\n\n(82 cheerleader * 7 repeats + 80 normal * 8 repeats) / 3 batch size * 4 epochs = 1772 steps",
+ "keep_tokens": 2,
+ "lr_scheduler_num_cycles": "",
+ "lr_scheduler_power": "",
+ "persistent_data_loader_workers": true,
+ "bucket_no_upscale": true,
+ "random_crop": false,
+ "bucket_reso_steps": 64.0,
+ "caption_dropout_every_n_epochs": 0.0,
+ "caption_dropout_rate": 0
  }
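
The `network_dim` and `network_alpha` values in the config above set the rank and scaling of the low-rank update. As a rough illustration only (a generic LoRA merge formulation, not this trainer's actual code), the delta added to each base weight is scaled by alpha/dim, so dim = alpha = 128 corresponds to a scale of 1.0; the layer shapes below are hypothetical:

```python
import numpy as np

def apply_lora(base, lora_down, lora_up, network_dim=128, network_alpha=128, multiplier=1.0):
    """Merge a LoRA delta into a base weight matrix (generic sketch)."""
    scale = network_alpha / network_dim  # 1.0 with the config above
    return base + multiplier * scale * (lora_up @ lora_down)

# Hypothetical projection shapes; real SD attention layers vary.
out_features, in_features, dim = 320, 768, 128
base = np.random.randn(out_features, in_features).astype(np.float32)
lora_down = np.random.randn(dim, in_features).astype(np.float32) * 0.01  # "A"
lora_up = np.zeros((out_features, dim), dtype=np.float32)                 # "B", zero at init
merged = apply_lora(base, lora_down, lora_up, multiplier=0.9)  # e.g. inference weight 0.9
```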
hibiki/lora_character_hibiki_119img9repeat_768.json DELETED
@@ -1,43 +0,0 @@
- {
- "pretrained_model_name_or_path": "G:/sd/repo/models/Stable-diffusion/nai-animefull-final-pruned.safetensors",
- "v2": false,
- "v_parameterization": false,
- "logging_dir": "",
- "train_data_dir": "G:/sd/training/datasets/hibiki",
- "reg_data_dir": "G:/sd/training/datasets/regempty",
- "output_dir": "G:/sd/repo/extensions/sd-webui-additional-networks/models/lora",
- "max_resolution": "768,768",
- "lr_scheduler": "constant_with_warmup",
- "lr_warmup": "5",
- "train_batch_size": 3,
- "epoch": "1",
- "save_every_n_epochs": "1",
- "mixed_precision": "fp16",
- "save_precision": "fp16",
- "seed": "23",
- "num_cpu_threads_per_process": 32,
- "cache_latent": true,
- "caption_extention": ".txt",
- "enable_bucket": true,
- "gradient_checkpointing": false,
- "full_fp16": false,
- "no_token_padding": false,
- "stop_text_encoder_training": 0,
- "use_8bit_adam": true,
- "xformers": true,
- "save_model_as": "safetensors",
- "shuffle_caption": true,
- "save_state": false,
- "resume": "",
- "prior_loss_weight": 1.0,
- "text_encoder_lr": "3e-5",
- "unet_lr": "3e-4",
- "network_dim": 128,
- "lora_network_weights": "",
- "color_aug": false,
- "flip_aug": false,
- "clip_skip": 2,
- "gradient_accumulation_steps": 1.0,
- "mem_eff_attn": false,
- "output_name": "hibiki-NAI-VAE-768px-357steps"
- }
hibiki/lora_character_hibiki_split_65i6r-54i10r_768.json DELETED
@@ -1,43 +0,0 @@
- {
- "pretrained_model_name_or_path": "G:/sd/repo/models/Stable-diffusion/nai-animefull-final-pruned.safetensors",
- "v2": false,
- "v_parameterization": false,
- "logging_dir": "",
- "train_data_dir": "G:/sd/training/datasets/hibiki",
- "reg_data_dir": "G:/sd/training/datasets/regempty",
- "output_dir": "G:/sd/repo/extensions/sd-webui-additional-networks/models/lora",
- "max_resolution": "768,768",
- "lr_scheduler": "constant_with_warmup",
- "lr_warmup": "5",
- "train_batch_size": 3,
- "epoch": "1",
- "save_every_n_epochs": "1",
- "mixed_precision": "fp16",
- "save_precision": "fp16",
- "seed": "23",
- "num_cpu_threads_per_process": 32,
- "cache_latent": true,
- "caption_extention": ".txt",
- "enable_bucket": true,
- "gradient_checkpointing": false,
- "full_fp16": false,
- "no_token_padding": false,
- "stop_text_encoder_training": 0,
- "use_8bit_adam": true,
- "xformers": true,
- "save_model_as": "safetensors",
- "shuffle_caption": true,
- "save_state": false,
- "resume": "",
- "prior_loss_weight": 1.0,
- "text_encoder_lr": "3e-5",
- "unet_lr": "3e-4",
- "network_dim": 128,
- "lora_network_weights": "",
- "color_aug": false,
- "flip_aug": false,
- "clip_skip": 2,
- "gradient_accumulation_steps": 1.0,
- "mem_eff_attn": false,
- "output_name": "hibiki-v2-NAI-VAE-768px-6.10split-310steps"
- }
hibiki/lora_character_hibiki_split_65i6r-54i10r_768_batch4_slower3epoch.json DELETED
@@ -1,43 +0,0 @@
- {
- "pretrained_model_name_or_path": "G:/sd/repo/models/Stable-diffusion/nai-animefull-final-pruned.safetensors",
- "v2": false,
- "v_parameterization": false,
- "logging_dir": "",
- "train_data_dir": "G:/sd/training/datasets/hibiki",
- "reg_data_dir": "G:/sd/training/datasets/regempty",
- "output_dir": "G:/sd/repo/extensions/sd-webui-additional-networks/models/lora",
- "max_resolution": "768,768",
- "lr_scheduler": "constant_with_warmup",
- "lr_warmup": "5",
- "train_batch_size": 4,
- "epoch": "3",
- "save_every_n_epochs": "1",
- "mixed_precision": "fp16",
- "save_precision": "fp16",
- "seed": "23",
- "num_cpu_threads_per_process": 32,
- "cache_latent": true,
- "caption_extention": ".txt",
- "enable_bucket": true,
- "gradient_checkpointing": false,
- "full_fp16": false,
- "no_token_padding": false,
- "stop_text_encoder_training": 0,
- "use_8bit_adam": true,
- "xformers": true,
- "save_model_as": "safetensors",
- "shuffle_caption": true,
- "save_state": false,
- "resume": "",
- "prior_loss_weight": 1.0,
- "text_encoder_lr": "1e-5",
- "unet_lr": "1e-4",
- "network_dim": 128,
- "lora_network_weights": "",
- "color_aug": false,
- "flip_aug": false,
- "clip_skip": 2,
- "gradient_accumulation_steps": 1.0,
- "mem_eff_attn": false,
- "output_name": "hibiki-v3-NAI-VAE-768px-6.10split-310steps-batch4-slower"
- }
hibiki/lora_character_hibiki_split_65i6r-54i10r_768_batch5_slower4epoch.json DELETED
@@ -1,43 +0,0 @@
- {
- "pretrained_model_name_or_path": "G:/sd/repo/models/Stable-diffusion/nai-animefull-final-pruned.safetensors",
- "v2": false,
- "v_parameterization": false,
- "logging_dir": "",
- "train_data_dir": "G:/sd/training/datasets/hibiki",
- "reg_data_dir": "G:/sd/training/datasets/regempty",
- "output_dir": "G:/sd/repo/extensions/sd-webui-additional-networks/models/lora",
- "max_resolution": "768,768",
- "lr_scheduler": "constant_with_warmup",
- "lr_warmup": "5",
- "train_batch_size": 5,
- "epoch": "4",
- "save_every_n_epochs": "1",
- "mixed_precision": "fp16",
- "save_precision": "fp16",
- "seed": "23",
- "num_cpu_threads_per_process": 32,
- "cache_latent": true,
- "caption_extention": ".txt",
- "enable_bucket": true,
- "gradient_checkpointing": false,
- "full_fp16": false,
- "no_token_padding": false,
- "stop_text_encoder_training": 0,
- "use_8bit_adam": true,
- "xformers": true,
- "save_model_as": "safetensors",
- "shuffle_caption": true,
- "save_state": false,
- "resume": "",
- "prior_loss_weight": 1.0,
- "text_encoder_lr": "1e-5",
- "unet_lr": "1e-4",
- "network_dim": 128,
- "lora_network_weights": "",
- "color_aug": false,
- "flip_aug": false,
- "clip_skip": 2,
- "gradient_accumulation_steps": 1.0,
- "mem_eff_attn": false,
- "output_name": "hibiki-v3-NAI-VAE-768px-6.10split-310steps-batch5-slower-4epoch"
- }