---
license: creativeml-openrail-m
---
# Okingjo's Single-identifier LORAs
I will share most of my single-identifier LORA models here. By "single", I mean that only one character with one costume is stored in each model.
Not only will the LORA models be posted here, training setups and tips will also be shared.
I'm still learning, so any comments/feedback are welcome!
## Characters from Genshin Impact
### Sangonomiya-Kokomi / 珊瑚宫心海
#### Brief intro
LORA of Sangonomiya Kokomi, with her default costume in game.
Download from the [civitAI page](https://civitai.com/models/9186/sangonomiya-kokomi).
#### Training dataset
149 images of Kokomi:
* 4 nude illustrations, to ensure the AI learns that the costume is removable
* 85 normal illustrations of Kokomi, covering multiple angles, styles, and compositions
* 30 nude 360-degree snapshots of Kokomi's 3D model
* 30 normal 360-degree snapshots of Kokomi's 3D model
Since only one costume is included, all 149 images are placed inside one folder.
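For reference, here is a minimal sketch of how such a single-concept dataset folder can be laid out for the Kohya_SS trainer, which (as far as I understand) encodes the repeat count in a `<repeats>_<name>` sub-folder name. The paths below are only hypothetical placeholders; the 20 repeats match the training setup described further down.

```python
from pathlib import Path
import shutil

# Hypothetical paths -- adjust to your own setup.
raw_images = Path("raw/kokomi")      # the 149 collected images
train_root = Path("train/kokomi")    # the dataset folder passed to the trainer

# Kohya-style datasets use one sub-folder per concept, named "<repeats>_<name>".
# Since only one costume is included, everything goes into a single folder
# that repeats each image 20 times per epoch.
concept_dir = train_root / "20_sangonomiya kokomi"
concept_dir.mkdir(parents=True, exist_ok=True)

for img in raw_images.glob("*.*"):
    shutil.copy(img, concept_dir / img.name)
```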
#### Captioning
WD14 captioning was used instead of the danbooru captioning, since the former will not crop/resize the images.
The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data.
After captioning was done, I added "sangonomiya kokomi" after "1girl" in every generated caption file as the trigger prompt. Some of the caption files were empty, so I had to type the words in manually.
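This trigger-word edit is easy to script. Below is a minimal sketch, assuming the WD14 tagger wrote one comma-separated `.txt` caption per image; the folder path is hypothetical. It inserts the trigger right after "1girl" when that tag is present, and simply writes the trigger into any caption file that came out empty.

```python
from pathlib import Path

caption_dir = Path("train/kokomi/20_sangonomiya kokomi")  # hypothetical path
trigger = "sangonomiya kokomi"

for txt in caption_dir.glob("*.txt"):
    caption = txt.read_text(encoding="utf-8").strip()
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    if trigger in tags:
        continue  # already tagged
    if "1girl" in tags:
        # insert the trigger word right after "1girl"
        tags.insert(tags.index("1girl") + 1, trigger)
    else:
        # empty or unusual captions: put the trigger word first
        tags.insert(0, trigger)
    txt.write_text(", ".join(tags), encoding="utf-8")
```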
#### Training setup
Trained with the Kohya_SS stable diffusion trainer.
Base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors)
The training process consists of two phases. The first one uses the default parameters of:
* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001
20 repeats and 5 epochs
Then, for phase 2, all three learning rates were decreased to 1/10 of their original values, and training continued for another 5 epochs.
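In concrete numbers, the two phases look like this. This is just a summary sketch of the values above; the dictionary keys are descriptive names, not actual trainer options.

```python
# Phase 1: default learning rates, 20 repeats, 5 epochs.
phase_1 = {
    "learning_rate": 1e-4,
    "text_encoder_lr": 5e-5,
    "unet_lr": 1e-4,
    "repeats": 20,
    "epochs": 5,
}

# Phase 2: same dataset, all three learning rates divided by 10, another 5 epochs.
phase_2 = {
    **phase_1,
    "learning_rate": 1e-5,
    "text_encoder_lr": 5e-6,
    "unet_lr": 1e-5,
    "epochs": 5,
}
```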
#### Results
V1.0 samples
![sample1](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/de263a36-166d-45b4-5d8c-9bd4e310af00/width=800)
![sample2](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4daf18ca-a61a-43f2-0415-1f46cc002100/width=800)
![sample3](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bb171f84-67a7-4da2-ced2-36c3cfab8f00/width=800)
## Characters from Honkai Impact 3rd
### Raiden Mei adult ver / 雷电芽衣
#### Brief intro
LORA of the adult Raiden Mei from Honkai Impact 3rd, Post-Honkai Odyssey, with her default costume in game.
Download from the [civitAI page](https://civitai.com/models/13023/raiden-mei-adult-ver).
#### Training dataset
96 images of Raiden Mei:
* 36 illustrations, both SFW and NSFW; 3 of them are in other costumes
* 30 360-degree snapshots of the 3D model, for accuracy
* 30 360-degree nude snapshots of the 3D model, to ensure the costume is removable/replaceable
Since only one costume is included, all 96 images are placed inside one folder.
#### Captioning
WD14 captioning was used instead of the danbooru captioning, since the former will not crop/resize the images.
The threshold is usually set to 0.75-0.8, since I don't like to have very long and sometimes inaccurate captions for my training data.
After captioning was done, I added "raiden mei" after "1girl" in every generated caption file as the trigger prompt. Some of the caption files were empty, so I had to type the words in manually.
#### Training setup
Trained with the Kohya_SS stable diffusion trainer.
Base model was [Anything V3.0 full](https://huggingface.co/Linaqruf/anything-v3.0/blob/main/anything-v3-fp32-pruned.safetensors)
The training process consists of two phases. The first one uses the default parameters of:
* learning_rate: 0.0001
* text_encoder_lr: 5e-5
* unet_lr: 0.0001
20 repeats and 3 epochs
Then, for phase 2, all three learning rates were decreased to 1/10 of their original values, and training continued for another 8 epochs.
#### Results
V1.0 samples
![sample1](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/61d46874-b53e-46bf-0adb-b60448aa6400/width=800)
![sample2](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ac518ae5-bccc-4361-79d9-0e7a4b8b1200/width=800)