license: creativeml-openrail-m
Okingjo's Multi-identifier LORAs
I will share most of my LoRA models with multiple identifiers here. By "multiple", I mean that one character with two or more costumes is stored within a single model. Not only will the LoRA models be posted here, the training setups and tips will also be shared. I'm still learning, so any comments/feedback are welcome!
Some tips during training
General idea
The way to have multiple identifiers within one LoRA is through captioning. Since most auto-captioning of an anime character starts with "1girl/boy", the second prompt is used as the trigger word, i.e. the prompt that tells the AI to evoke the desired costume. Caption the images properly and put them into separate folders, just as the Kohya_SS SD trainer docs describe, and you are good to go.
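The folder convention this relies on can be sketched in Python. The folder names come from the datasets below; the `parse_folder` helper is my own illustration, not part of Kohya_SS:

```python
# Kohya_SS expects each concept in a subfolder named "<repeats>_<trigger words>";
# the images inside are repeated <repeats> times per epoch.
def parse_folder(name: str) -> tuple[int, str]:
    """Split a Kohya-style folder name into (repeats, concept). Hypothetical helper."""
    repeats, _, concept = name.partition("_")
    return int(repeats), concept

for folder in ["30_Ningguang", "20_NingguangOrchid"]:
    repeats, concept = parse_folder(folder)
    print(f"{concept}: images repeated {repeats}x per epoch")
```

Folders with a higher repeat count therefore get proportionally more weight per epoch, which is how the two costumes can be balanced against each other.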
Captioning tips
I'm trying to find a way to optimize the captioning of identifiers. I'll write down some of my findings here as a record.
- It looks like the more distinct the trigger prompts are, the better. The exception is my first multi-identifier LoRA, "ningguang".
- The prompt format differs between the SD webUI and the captioning files, e.g. " " vs. "_". Tried this with the Barbara model; the difference was minimal, but the original format (using "_" instead of " ") seems slightly better.
Characters from Genshin Impact
Ningguang/凝光
Brief intro
LoRA of Ningguang, with her two in-game costumes. Download from the civitAI page.
Training dataset
Default costume
72 images in total, in folder "30_Ningguang"
- 36 normal illustrations
- 15 normal 360 3D model snapshots
- 5 nude illustrations
- 16 nude 360 3D model snapshots
Orchid costume
43 images in total, in folder "20_NingguangOrchid"
- 11 normal illustrations
- 15 normal 360 3D model snapshots
- 2 nude illustrations
- 15 nude 360 3D model snapshots
Captioning
WD14 captioning was used instead of DeepDanbooru, since the former does not crop/resize the images. The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data. After captioning was done, I added "ningguang \(genshin impact\)" after "1girl" in every caption file of the default costume, and "ningguang \(orchid's evening gown\) \(genshin impact\)" for the orchid costume. Some of the caption files were empty, so I had to type the identifier in manually.
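The tag-insertion step can also be scripted instead of done by hand. A minimal sketch, assuming WD14's usual comma-separated `.txt` captions; the `add_identifier` helper is my own, not part of any tagger tool:

```python
from pathlib import Path

def add_identifier(caption_dir: str, identifier: str) -> None:
    """Insert a trigger tag right after "1girl" in every caption .txt file.
    Empty caption files just get the identifier on its own."""
    for txt in Path(caption_dir).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text().split(",") if t.strip()]
        if "1girl" in tags:
            tags.insert(tags.index("1girl") + 1, identifier)
        else:
            tags.insert(0, identifier)
        txt.write_text(", ".join(tags))

# e.g. add_identifier("30_Ningguang", r"ningguang \(genshin impact\)")
```

This also covers the empty-caption case mentioned above, so no file has to be fixed manually.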
Training setup
Trained with the Kohya_SS stable diffusion trainer; the base model was Anything V3.0 full. The training process consists of two phases. The first phase used the default parameters:
- learning_rate: 0.0001
- text_encoder_lr: 5e-5
- unet_lr: 0.0001
for 6 epochs. After phase 1, choose the epoch with the best result (slightly underfitting, no overfitting, and the two costumes well separated), which here was the 6th. Then train with 1/10 of the original LR for another 7 epochs.
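The two-phase schedule boils down to dividing every learning rate by 10 for the fine-tuning pass. The snippet below only restates the numbers from the text; the actual run was done through the Kohya_SS trainer, not this code:

```python
# Phase 1: Kohya_SS default learning rates, 6 epochs.
phase1 = {"learning_rate": 1e-4, "text_encoder_lr": 5e-5, "unet_lr": 1e-4}

# Phase 2: resume from the best phase-1 epoch with 1/10 of each LR, 7 more epochs.
phase2 = {name: lr / 10 for name, lr in phase1.items()}
```

Lowering all rates uniformly keeps the relative balance between the text encoder and the UNet unchanged while reducing the step size.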
Result
Barbara/芭芭拉
Brief intro
LoRA of Barbara, with her two in-game costumes. The file size of the model has been reduced to 1/4: 2 identifiers in one dim-64 model (previously 1 identifier per dim-128 model). On one hand this saves storage; on the other, it leaves less room for characters of lower priority. In summary, less is better!
Training dataset
Default costume
164 images in total, in folder "10_barbara_(genshin_impact) 1girl"
- 104 illustrations, both SFW and NSFW, handpicked to ensure the best quality
- 30 normal 360 3D model snapshots
- 30 nude 360 3D model snapshots
Summertime swimsuit
94 images in total, in folder "16_barbara_(summertime_sparkle)_(genshin_impact) 1girl"
- 64 illustrations, both SFW and NSFW, handpicked to ensure the best quality
- 30 normal 360 3D model snapshots
Captioning
This was the first time the standard Danbooru-style tags were used for captioning. "barbara_(genshin_impact)" and "barbara_(summertime_sparkle)_(genshin_impact)" were added to the captions of each costume respectively.
Training setup
Default LR for 4 epochs, then 1/10 of the default LR for another 8 epochs. Training was based on Anything V3. Total steps: (4+8)×(164×10+94×16) = 37,728
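The step arithmetic can be checked directly; repeat counts and image totals are taken from the dataset section above:

```python
# (repeats per epoch, image count) for each training folder
datasets = {
    "10_barbara_(genshin_impact) 1girl": (10, 164),
    "16_barbara_(summertime_sparkle)_(genshin_impact) 1girl": (16, 94),
}
steps_per_epoch = sum(repeats * images for repeats, images in datasets.values())
total_steps = (4 + 8) * steps_per_epoch  # 4 epochs at default LR + 8 at LR/10
print(total_steps)  # 37728
```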
Results
Characters from Honkai Impact
Elysia/爱莉希雅
Brief intro
LoRA of Elysia, with her four in-game costumes. Download from the civitAI page.
Training dataset
Default costume/Miss Pink Elf
70 images in total, in folder "14_Elysia (miss pink elf) 1girl"
- 40 normal illustrations, non-nude
- 30 normal 360 3D model snapshots
Default herrscher costume/Herrscher of Human: Ego
93 images in total, in folder "11_elysia (herrscher of humanego) 1girl"
- 33 normal illustrations, included few NSFW, not completely nude though
- 60 normal 360 3D model snapshots
Maid costume
62 images in total, in folder "16_Elysia-maid 1girl"
- 33 normal illustrations, non-nude
- 30 normal 360 3D model snapshots
Swimsuit
74 images in total, in folder "14_elysia-swimsuit 1girl"
- 14 normal illustrations, non-nude
- 60 normal 360 3D model snapshots

In addition, I also included 12 images with non-official costumes in a new folder "10_Elysia 1girl".
Captioning
WD14 captioning was used instead of DeepDanbooru, since the former does not crop/resize the images. The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data. After captioning was done, I added "elysia \(miss pink elf\) \(honkai impact\)", "elysia \(herrscher of human:ego\) \(honkai impact\)", "Elysia-maid", "Elysia-swimsuit" and "1girl, elysia \(honkai impact\)" to the captions of each folder respectively as identifiers.
Training setup
Trained with the Kohya_SS stable diffusion trainer; the base model was Anything V3.0 full. The training process consists of two phases. The first phase used the default parameters:
- learning_rate: 0.0001
- text_encoder_lr: 5e-5
- unet_lr: 0.0001
for 4 epochs. After phase 1, choose the epoch with the best result (slightly underfitting, no overfitting, and the costumes well separated). Then train with 1/10 of the original LR for another 8 epochs.