cyber-meow committed on
Commit
df01e60
·
2 Parent(s): 49c8b9d 667a3ea

Merge branch 'main' of https://huggingface.co/alea31415/LyCORIS-experiments into main

.gitattributes CHANGED
@@ -110,3 +110,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  0325_aniscreen_fanart_styles/0325exp-cs1-tte1e-4-step-12000.safetensors filter=lfs diff=lfs merge=lfs -text
  0325_aniscreen_fanart_styles/0325exp-cs2-step-6000.safetensors filter=lfs diff=lfs merge=lfs -text
  0324_all_aniscreen_tags/0324exp-cs1.safetensors filter=lfs diff=lfs merge=lfs -text
+ generated_samples/00165-20230327023658.jpg filter=lfs diff=lfs merge=lfs -text
+ generated_samples/00165-20230327023658.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -2,7 +2,7 @@
  license: creativeml-openrail-m
  ---

- Tags to test
+ ### Trigger words

  ```
  Anisphia, Euphyllia, Tilty, OyamaMahiro, OyamaMihari
@@ -14,7 +14,7 @@ For `0324_all_aniscreen_tags`, I accidentally tagged all the character images with `aniscreen`
  For `0325_aniscreen_fanart_styles`, things are done correctly (anime screenshots tagged as `aniscreen`, fanart tagged as `fanart`).


- #### Settings
+ ### Settings

  Default settings are
  - loha net dim 8, conv dim 4, alpha 1
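For concreteness, here is a minimal sketch of what these defaults might look like in a kohya-ss/sd-scripts style training config. The option names and TOML layout below are an assumption for illustration, not a copy of the json files in the `config` subdirectories:

```
# Sketch only: assumed sd-scripts/LyCORIS option names, not this repo's actual configs.
network_module = "lycoris.kohya"              # LyCORIS networks (lora/locon/loha)
network_dim    = 8                            # "net dim 8"
network_alpha  = 1                            # "alpha 1"
network_args   = ["algo=loha", "conv_dim=4"]  # loha algorithm, conv dim 4
```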
@@ -28,7 +28,28 @@ The configuration json files can otherwise be found in the `config` subdirectories
  However, some experiments concern the effect of tags, for which I regenerate the txt files; in this case the difference cannot be seen from the configuration files.
  For now this concerns `05tag`, for which tags are only used with probability 0.5 (a sketch of this regeneration is given after the diff).

- #### Datasets
+ ### Some observations
+
+ For a thorough comparison, please refer to the `generated_samples` folder.
+
+ #### Captioning
+
+ The dataset is, in general, the most important factor of all.
+ The common wisdom that we should prune any tag describing what we want attached to the trigger word is exactly the way to go (a concrete example is given after the diff).
+ Using no tags at all is terrible, especially for style training.
+ Keeping all the tags removes the traits from the subjects when these tags are not used during sampling (not completely true, but more or less the case).
+
+ ![00066-20230326090858](https://huggingface.co/alea31415/LyCORIS-experiments/resolve/main/generated_samples/00066-20230326090858.png)
+
+
+ #### Others
+
+ 1. I barely see any difference between training at clip skip 1 and clip skip 2.
+ 2. Setting the text encoder learning rate to half that of the unet makes training twice as slow, and I cannot see how it helps.
+ 3. The differences between lora, locon, and loha are very subtle.
+ 4. Training at higher resolution helps with generating more complex backgrounds etc., but it is very time-consuming and most of the time not worth it (simpler to just switch the base model), unless this is exactly the goal of the lora you are training.
+
+ ### Datasets

  Here is the composition of the datasets
  ```
@@ -51,6 +72,4 @@ Here is the composition of the datasets
  5_artists~onono_imoko: 373
  7_characters~screenshots~Tilty: 95
  9_characters~fanart~Anisphia+Euphyllia: 97
- ```
-
- Tks
+ ```
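The `05tag` regeneration mentioned in the diff above can be sketched as follows. This assumes one comma-separated caption txt per image; the function name, path, and seed handling are hypothetical:

```python
import random
from pathlib import Path

KEEP_PROB = 0.5  # "tags are only used with probability 0.5"

def regenerate_captions(dataset_dir: str, seed: int = 0) -> None:
    """Rewrite each caption txt so every tag survives with probability 0.5."""
    rng = random.Random(seed)  # fixed seed keeps the regenerated dataset reproducible
    for txt in Path(dataset_dir).rglob("*.txt"):
        tags = [t.strip() for t in txt.read_text().split(",") if t.strip()]
        # NOTE: a real run would likely always keep the trigger word (first tag)
        kept = [t for t in tags if rng.random() < KEEP_PROB]
        txt.write_text(", ".join(kept))

regenerate_captions("datasets/0325_aniscreen_fanart_styles")  # hypothetical path
```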
 
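And to make the pruning advice from the Captioning observation concrete: traits that should bind to the trigger word are removed from the caption, while incidental, controllable tags stay. Only the trigger word OyamaMahiro below comes from this repo; the trait tags are invented for illustration.

```
# before pruning: trait tags compete with the trigger word for the character design
OyamaMahiro, white hair, long hair, 1girl, solo, skirt, outdoors
# after pruning: the pruned traits get absorbed into the trigger word
OyamaMahiro, 1girl, solo, skirt, outdoors
```

Pose, clothing, and background tags are kept so they remain promptable at sampling time instead of being baked into the trigger word.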
generated_samples/00165-20230327023658.jpg ADDED

Git LFS Details

  • SHA256: 269a594f5ce80f168b7ed1170ee0d8202fbf31386783ce496fcfaf6e6936609d
  • Pointer size: 132 Bytes
  • Size of remote file: 1.64 MB
generated_samples/00165-20230327023658.png ADDED

Git LFS Details

  • SHA256: abdb0d9451e80b49c26e16feede6c45c1c4369cda9dfbf5563f8a0d267e92f2d
  • Pointer size: 133 Bytes
  • Size of remote file: 26.5 MB
generated_samples/00165-20230327023658.txt ADDED
@@ -0,0 +1,6 @@
+ OyamaMahiro; fanart; (w-sitting, wariza:1.1), 1girl, solo, skirt, eating lollipop
+ looking at viewer, detailed background, outdoors, in village, cloud
+ <lora:0325exp-cs1:1>
+ Negative prompt: aniscreen, far from viewer, realistic, bad hands, lowres, bad anatomy, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
+ Steps: 25, Sampler: Euler a, CFG scale: 8, Seed: 1234, Size: 520x680, Model hash: 89d59c3dde, Model: base_nai-full-pruned, Script: X/Y/Z plot, X Type: Prompt S/R, X Values: "<lora:0325exp-cs1:1>,<lora:0325exp-cs1-tte1e-4:1>,<lora:0326exp-cs1-lora:1>,
+ <lora:0325exp-cs1-768-step-28000:1>, <lora:0326exp-cs1-5e-4:1>,<lora:0326exp-cs1-dim32-16:1>, <lora:0326exp-cs1-dim32-16-alpha-half:1>, <lora:0326exp-cs1-dim32-16-5e-4:1>, <lora:0326exp-cs1-character-only:1>", Y Type: Seed, Y Values: "1234,555,98,444,343242", Fixed Y Values: "1234, 555, 98, 444, 343242"