Commit bdf08f7 by Classacre (parent: 96db361)

Update README.md

Files changed (1): README.md (+14 -4)

It can be used by modifying the `instance_prompt(s)`: **sololeveling**

You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).

And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
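
Loading the concept with `diffusers` looks roughly like the sketch below; the repo id is a placeholder (this card doesn't state the exact Hub id), and the prompt is just an example using the **sololeveling** instance token:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- substitute the actual Hub id of this model.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/sololeveling",
    torch_dtype=torch.float16,
).to("cuda")

# Include the instance token "sololeveling" in the prompt to trigger the trained style.
prompt = "man standing on a rooftop at night, sololeveling, cinematic, full color"
image = pipe(prompt).images[0]
image.save("sample.png")
```
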
This model was trained on 71 images for 14,200 total training steps, with a checkpoint saved every 3,550 steps (25% of the run); the text encoder was trained for the first 35% of the steps. It uses Stable Diffusion v1.5 as the base model.
 
This is my first model; criticism and advice are welcome. Discord: Classacre#1028

This model is inspired by @ogkalu and his [comic-diffusion](https://huggingface.co/ogkalu/Comic-Diffusion) model. I think it's pretty cool and you should check it out.
 
I made this model out of admiration for Jang-Sung Rak (DUBU), who recently passed away. This model is not perfect, and it never will be, as the original artist's art is irreplaceable.
 
The final model struggles with calm or peaceful environments because it was trained mainly on cinematic action scenes. This leads to style bleeding, where the AI creates action sequences from seemingly calm and peaceful prompts. Earlier checkpoints don't seem to have this problem, although they are not as sharp and do not reproduce the style as accurately. Negative prompts seem to lessen the action bias in the final model, though the results are not as natural as with the older checkpoints (a usage sketch follows the sampler notes below). The model also struggles to draw eyes in action sequences, though you may be able to play with the prompt to get eyes to show up. A comparison between the different model versions can be seen below:
 
Sampler used: DDIM
CFG: 7

Prompt: man holding a sword, black hair, muscular, in a library, cinematic, full color, fighting a man

![Comparison 1](https://i.imgur.com/MBjzUVI.jpg)

Prompt: man eating food in the subway station, sololeveling, happy, cinematic, golden hour

![Comparison 2](https://i.imgur.com/L3MB4Ka.jpg)

In my opinion this model runs best with the DDIM sampler, but I'm still fairly new to experimenting with samplers, and my opinion may change in the future. Please experiment with the different samplers yourself and choose what you believe is best. The 10,650-step checkpoint may be better than the final model.
 
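For reference, here is a rough sketch of how the settings above (DDIM sampler, CFG 7) and a negative prompt can be combined with `diffusers`; the repo id and the negative prompt text are illustrative placeholders, not settings taken from this card:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Placeholder repo id -- substitute the actual Hub id of this model.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/sololeveling",
    torch_dtype=torch.float16,
).to("cuda")

# Swap in the DDIM sampler used for the comparison images above.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "man eating food in the subway station, sololeveling, happy, cinematic, golden hour",
    negative_prompt="fighting, action scene, battle",  # placeholder; tune to reduce action bleed
    guidance_scale=7,       # CFG 7, as in the comparison above
    num_inference_steps=50,
).images[0]
image.save("sample_ddim.png")
```
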
Here are the images used for training this concept: