This folder contains models trained for the character Grass Wonder from Umamusume. The main goal is to compare native (full fine-tuning) training against LoRA training.

You can prompt with:
- GrassWonder
- GrassWonder Toresen Uniform
- GrassWonder Racing Uniform
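As a minimal inference sketch, the trigger words above can be used with diffusers. Note the weight file names here are placeholders, not the actual file names in this repo; substitute the real ones:

```python
TRIGGER_PROMPT = "GrassWonder Racing Uniform"  # one of the trigger words above


def generate(base_ckpt="nmfsan.safetensors", lora_file="grass_wonder_lora.safetensors"):
    """Sample one image; needs a GPU plus torch, diffusers, and the weight files.

    Both file names are hypothetical placeholders for the files in this repo.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(base_ckpt, torch_dtype=torch.float16)
    pipe.load_lora_weights(lora_file)  # omit this line when using the natively trained checkpoint
    pipe.to("cuda")
    # clip_skip=2 matches the training setting below
    return pipe(TRIGGER_PROMPT, clip_skip=2).images[0]


if __name__ == "__main__":
    generate().save("grass_wonder.png")
```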
### Dataset
Total size: 376 images (no regularization images are used)
- Face crop: 149
- Racing uniform: 94
- Toresen uniform: 39
- Other grasswonder images: 94
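The per-category counts above add up to the stated total; a quick sanity check:

```python
# Dataset breakdown from this README
counts = {
    "face crop": 149,
    "racing uniform": 94,
    "toresen uniform": 39,
    "other grass wonder images": 94,
}

total = sum(counts.values())
print(total)  # 376, matching the stated dataset size
```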
### Base model
Trained on [NMFSAN](https://huggingface.co/Crosstyan/BPModel/blob/main/NMFSAN/README.md), so the model retains support for different styles
### Native training
Trained with [Kohya trainer](https://github.com/Linaqruf/kohya-trainer)
- text encoder training enabled
- learning rate 1e-6
- batch size 1
- clip skip 2
- number of training steps 7520 (20 epochs)
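The step count follows from the dataset size: a sketch of the arithmetic, assuming one pass over all 376 images per epoch at batch size 1:

```python
import math

dataset_size = 376  # total images (see Dataset section)
batch_size = 1
epochs = 20

steps_per_epoch = math.ceil(dataset_size / batch_size)  # 376 steps per epoch
total_steps = steps_per_epoch * epochs
print(total_steps)  # 7520
```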
*Examples*



### LoRA embedding
Please refer to [LoRA Training Guide](https://rentry.org/lora_train)
- text encoder training enabled
- network dimension 128
- learning rate 1e-4
- batch size 6
- clip skip 2
- number of training steps 7520 (20 epochs)
*Examples*


