---
license: creativeml-openrail-m
---

# Okingjo's Multi-identifier LoRAs

I will share most of my multi-identifier LoRA models here. By "multiple", I mean that one character with two or more costumes is stored within a single model. Not only will the LoRA models be posted here, training setups and tips will also be shared. I'm still learning, so any comments/feedback are welcome!

# Some tips during training

## General idea

The way to store multiple identifiers within one LoRA is through captioning. Since most auto-captioning of an anime character starts with "1girl/boy", the second tag is used as the trigger word, i.e. the prompt that tells the AI to produce the desired costume. Caption the images properly, put each costume into its own folder as the Kohya_SS SD trainer docs describe, and you are good to go.

## Captioning tips

I am still trying to find the best way to caption identifiers. I will write down some of my findings here as a record.

* It looks like the more distinct the trigger prompts are from one another, the better. The exception is my first multi-identifier LoRA, "Ningguang".
* The tag format differs between the SD webUI and the caption files, e.g. " " vs. "_". Currently I am using the webUI format (spaces) in my captions, and so far so good.

# Characters from Genshin Impact

## Ningguang/凝光

### Brief intro

LoRA of Ningguang, with her two in-game costumes. civitAI page: [Download](https://civitai.com/models/8546/ningguang)

### Training dataset

#### Default costume

72 images in total, in folder "30_Ningguang":

* 36 normal illustrations
* 15 normal 360° 3D model snapshots
* 5 nude illustrations
* 16 nude 360° 3D model snapshots

#### Orchid costume

43 images in total, in folder "20_NingguangOrchid":

* 11 normal illustrations
* 15 normal 360° 3D model snapshots
* 2 nude illustrations
* 15 nude 360° 3D model snapshots

### Captioning

WD14 captioning was used instead of DeepDanbooru, since the former does not crop/resize the images. The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data. After captioning was done, I added "ningguang \\(genshin impact\\)" after "1girl" in every caption file of the default costume, and "ningguang \\(orchid's evening gown\\) \\(genshin impact\\)" for the orchid costume. Some of the caption files were empty, so I had to type the tags in manually.

### Training setup

Trained with the Kohya_SS stable diffusion trainer. The base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors). The training process consists of two phases. The first one runs for 6 epochs with the default parameters of:

* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001

After phase 1, choose the epoch with the best result (slightly underfitted, not overfitted, and with the two costumes clearly separated), which here was the 6th. Then train it for another 7 epochs with 1/10 of the original learning rates. A sketch of this two-phase schedule is given below.
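For reference, here is a minimal sketch of how such a two-phase run could be driven from Python with kohya-ss sd-scripts' `train_network.py`. Only the learning rates, epoch counts, and base model come from my actual setup; the paths, resolution, and checkpoint names are illustrative assumptions.

```python
# Hypothetical two-phase LoRA training driver for kohya-ss sd-scripts.
# Only the learning rates and epoch counts come from the notes above;
# paths, resolution, and checkpoint names are placeholders.
import subprocess

common = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "anything-v3-fp32-pruned.safetensors",
    "--train_data_dir", "./ningguang",  # parent of 30_Ningguang and 20_NingguangOrchid
    "--network_module", "networks.lora",
    "--resolution", "512",              # assumption, not from the notes
    "--save_every_n_epochs", "1",       # keep every epoch so the best one can be picked
]

# Phase 1: default learning rates, 6 epochs.
subprocess.run(common + [
    "--output_dir", "./out/phase1",
    "--learning_rate", "1e-4",
    "--text_encoder_lr", "5e-5",
    "--unet_lr", "1e-4",
    "--max_train_epochs", "6",
], check=True)

# Phase 2: continue from the best phase-1 epoch (the 6th here)
# at 1/10 of the original learning rates for 7 more epochs.
subprocess.run(common + [
    "--output_dir", "./out/phase2",
    "--network_weights", "./out/phase1/last.safetensors",  # best phase-1 checkpoint
    "--learning_rate", "1e-5",
    "--text_encoder_lr", "5e-6",
    "--unet_lr", "1e-5",
    "--max_train_epochs", "7",
], check=True)
```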
### Result

![sample1](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9573a553-c456-4c36-c029-f2955fe52800/width=800)
![sample2](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c7709515-4537-4501-fe87-296734995700/width=800)
![sample3](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/38d47c4a-6ba5-4925-5a56-e8701856a100/width=640)
![sample4](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b60aa7f4-6f63-46fb-381f-05b11f4afe00/width=640)

# Characters from Honkai Impact

## Elysia/爱莉希雅

### Brief intro

LoRA of Elysia, with her four in-game costumes. civitAI page: [Download](https://civitai.com/models/14616)

### Training dataset

#### Default costume/Miss Pink Elf

70 images in total, in folder "14_Elysia (miss pink elf) 1girl":

* 20 normal illustrations, non-nude
* 30 normal 360° 3D model snapshots

#### Default herrscher costume/Herrscher of Human: Ego

93 images in total, in folder "11_elysia (herrscher of humanego) 1girl":

* 33 normal illustrations, including a few NSFW ones, though none completely nude
* 60 normal 360° 3D model snapshots

#### Maid costume

62 images in total, in folder "16_Elysia-maid 1girl":

* 33 normal illustrations, non-nude
* 30 normal 360° 3D model snapshots

#### Swimsuit

74 images in total, in folder "14_elysia-swimsuit 1girl":

* 14 normal illustrations, non-nude
* 60 normal 360° 3D model snapshots

In addition, I have also included 12 images with non-official costumes in a separate folder, "10_Elysia 1girl".

### Captioning

WD14 captioning was used instead of DeepDanbooru, since the former does not crop/resize the images. The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data. After captioning was done, I added "elysia \\(miss pink elf\\) \\(honkai impact\\)", "elysia \\(herrscher of human:ego\\) \\(honkai impact\\)", "Elysia-maid", "Elysia-swimsuit", and "1girl, elysia \\(honkai impact\\)" to the captions of the respective folders as identifiers.

### Training setup

Trained with the Kohya_SS stable diffusion trainer. The base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors). The training process consists of two phases. The first one runs for 4 epochs with the default parameters of:

* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001

After phase 1, choose the epoch with the best result (slightly underfitted, not overfitted, and with the costumes clearly separated). Then train it for another 8 epochs with 1/10 of the original learning rates.

### Result

![sample1](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9a699f92-b026-4efb-9714-6d6e2675f400/width=1280/174757)
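One more note on the captioning step used for both characters above: inserting the identifier right after "1girl" in every caption file can be scripted instead of typed by hand. Below is a minimal sketch, assuming WD14 wrote comma-separated tags into one .txt file per image; the function name and the folder/tag arguments are just illustrations.

```python
# Hypothetical helper: insert a costume identifier right after the "1girl"
# tag in every WD14 caption (.txt) file inside a training folder.
from pathlib import Path

def add_identifier(folder: str, identifier: str) -> None:
    for txt in Path(folder).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",") if t.strip()]
        if identifier in tags:
            continue  # already tagged, skip
        if "1girl" in tags:
            tags.insert(tags.index("1girl") + 1, identifier)  # right after "1girl"
        else:
            tags.insert(0, identifier)  # empty or unusual caption: identifier goes first
        txt.write_text(", ".join(tags), encoding="utf-8")

# Folder and tag names taken from the Ningguang dataset above:
add_identifier("30_Ningguang", r"ningguang \(genshin impact\)")
add_identifier("20_NingguangOrchid", r"ningguang \(orchid's evening gown\) \(genshin impact\)")
```

Since the script also handles empty caption files (the identifier simply becomes the first tag), the manual fix-ups mentioned in the Ningguang captioning section would not be needed.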