---
pipeline_tag: text-to-image
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
tags:
- text-to-image
- diffusers
prior:
- Disty0/sotediffusion-v2-prior
---
# SoteDiffusion V2
An anime finetune of Würstchen V3 / Stable Cascade.
# Release Notes
- This release is sponsored by <a href="https://fal.ai/grants?rel=sote-diffusion" target="_blank">fal.ai/grants</a>
- Trained on 12M text & image pairs, including WD tags and natural language captions, for a single epoch on 8x H100 80GB SXM5 GPUs.
- Trained with Full FP32 and MAE Loss.
<style>
.image {
float: left;
margin-left: 10px;
}
</style>
<table>
<img class="image" src="https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/KJTHqR3otoKoiXxvbudp8.png" width="320">
<img class="image" src="https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/uua4L9aaqJ0LI8gYv4xmC.png" width="320">
</table>
# ComfyUI
Use these arguments when starting ComfyUI: `--fp16-vae --fp16-unet`
- Download Stage C to the `unet` folder: https://huggingface.co/Disty0/sotediffusion-v2/resolve/main/sotediffusion-v2-stage_c.safetensors
- Download the Stage C Text Encoder to the `clip` folder: https://huggingface.co/Disty0/sotediffusion-v2/resolve/main/sotediffusion-v2-stage_c_text_encoder.safetensors
- Download Stage B to the `unet` folder: https://huggingface.co/Disty0/sotediffusion-v2/resolve/main/sotediffusion-v2-stage_b.safetensors
- Download Stage A to the `vae` folder: https://huggingface.co/Disty0/sotediffusion-v2/resolve/main/stage_a_ft_hq.safetensors
- Download the workflow and load it: https://huggingface.co/Disty0/sotediffusion-v2/resolve/main/comfyui_workflow.json
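If you'd rather script the downloads, here's a minimal sketch using `huggingface_hub` (the `COMFYUI_DIR` location and the default `models/unet`, `models/clip`, `models/vae` folder layout are assumptions; adjust them to your install):
```python
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # assumption: path to your ComfyUI install
REPO = "Disty0/sotediffusion-v2"

# (repo filename, target folder relative to COMFYUI_DIR)
files = [
    ("sotediffusion-v2-stage_c.safetensors", "models/unet"),
    ("sotediffusion-v2-stage_c_text_encoder.safetensors", "models/clip"),
    ("sotediffusion-v2-stage_b.safetensors", "models/unet"),
    ("stage_a_ft_hq.safetensors", "models/vae"),
    ("comfyui_workflow.json", "."),  # load this workflow from the ComfyUI menu
]

for filename, target in files:
    hf_hub_download(REPO, filename, local_dir=COMFYUI_DIR / target)
```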
# SD.Next
URL: https://github.com/vladmandic/automatic/
Go to Models -> Huggingface, type `Disty0/sotediffusion-v2` into the model name field, and press Download.
Load `Disty0/sotediffusion-v2` after the download process is complete.
Prompt:
```
your prompt goes here
very aesthetic, best quality, newest,
```
(New lines act the same way as BREAK in SD.Next)
Negative Prompt:
```
very displeasing, displeasing, worst quality, bad quality, low quality, realistic, monochrome, comic, sketch, oldest, early, artist name, signature, blurry, simple background, upside down,
```
Parameters:
- Sampler: Default
- Steps: 30 or 40
- Refiner Steps: 10
- CFG: 7
- Secondary CFG: 1.5 or 1
- Resolution: 1024x1536, 2048x1152

Any resolution works as long as it's a multiple of 128; see the sketch below.
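For arbitrary sizes, a minimal sketch that snaps a resolution down to the nearest multiple of 128 (the helper name is ours, not part of SD.Next):
```python
def snap_to_128(width: int, height: int) -> tuple[int, int]:
    """Round each dimension down to the nearest multiple of 128."""
    return (width // 128) * 128, (height // 128) * 128

print(snap_to_128(1920, 1080))  # (1920, 1024)
```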
# Diffusers
```shell
pip install diffusers
```
```python
import torch
import diffusers
device = "cuda"
dtype = torch.float16
model_path = "Disty0/sotediffusion-v2"  # Hugging Face repo id, or a local path to the diffusers model
def get_timestep_ratio_conditioning(t, alphas_cumprod):
    s = torch.tensor([0.008])  # diffusers uses 0.003 while the original is 0.008
    clamp_range = [0, 1]
    min_var = torch.cos(s / (1 + s) * torch.pi * 0.5) ** 2
    var = alphas_cumprod[t]
    var = var.clamp(*clamp_range)
    s, min_var = s.to(var.device), min_var.to(var.device)
    ratio = (((var * min_var) ** 0.5).acos() / (torch.pi * 0.5)) * (1 + s) - s
    return ratio
pipe = diffusers.AutoPipelineForText2Image.from_pretrained(model_path, text_encoder=None, torch_dtype=dtype)
pipe.prior_pipe.get_timestep_ratio_conditioning = get_timestep_ratio_conditioning
pipe.prior_pipe.scheduler.config.clip_sample = False
# de-dupe
pipe.decoder_pipe.text_encoder = pipe.text_encoder = None # nothing uses this
del pipe.decoder_pipe.text_encoder
del pipe.prior_prior
del pipe.prior_text_encoder
del pipe.prior_tokenizer
del pipe.prior_scheduler
del pipe.prior_feature_extractor
del pipe.prior_image_encoder
pipe = pipe.to(device, dtype=dtype)
pipe.prior_pipe = pipe.prior_pipe.to(device, dtype=dtype)
def encode_empty_prompt(
    prior_pipe,
    device,
    batch_size,
    num_images_per_prompt,
):
    text_inputs = prior_pipe.tokenizer(
        "",
        padding="max_length",
        max_length=prior_pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    # Don't use the attention mask for the empty prompt
    text_encoder_output = prior_pipe.text_encoder(
        text_inputs.input_ids.to(device), attention_mask=None, output_hidden_states=True
    )
    prompt_embeds = text_encoder_output.hidden_states[-1]
    prompt_embeds = prompt_embeds.to(dtype=prior_pipe.text_encoder.dtype, device=device)
    prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
    return prompt_embeds
prompt = "1girl, solo, looking at viewer, open mouth, blue eyes, medium breasts, blonde hair, gloves, dress, bow, hair between eyes, bare shoulders, upper body, hair bow, indoors, elbow gloves, hand on own chest, bridal gauntlets, candlestand, smile, rim lighting, from side, castle interior, looking side,"
quality_prompt = "extremely aesthetic, best quality, newest"
negative_prompt = "very displeasing, displeasing, worst quality, bad quality, low quality, realistic, monochrome, comic, sketch, oldest, early, artist name, signature, blurry, simple background, upside down,"
num_images_per_prompt = 1

# Encode the prompt and the quality prompt separately.
# encode_prompt args: device, batch_size, num_images_per_prompt, do_classifier_free_guidance, prompt
prompt_embeds, prompt_embeds_pooled, _, _ = pipe.prior_pipe.encode_prompt(device, 1, num_images_per_prompt, False, prompt=prompt)
quality_prompt_embeds, _, _, _ = pipe.prior_pipe.encode_prompt(device, 1, num_images_per_prompt, False, prompt=quality_prompt)
negative_prompt_embeds, negative_prompt_embeds_pooled, _, _ = pipe.prior_pipe.encode_prompt(device, 1, num_images_per_prompt, False, prompt=negative_prompt)
empty_prompt_embeds = encode_empty_prompt(pipe.prior_pipe, device, 1, num_images_per_prompt)
prompt_embeds = torch.cat([prompt_embeds, quality_prompt_embeds], dim=1)
negative_prompt_embeds = torch.cat([negative_prompt_embeds, empty_prompt_embeds], dim=1)
pipe.prior_pipe.maybe_free_model_hooks()
output = pipe(
width=1024,
height=1536,
decoder_guidance_scale=1.2,
prior_guidance_scale=7.0,
prior_num_inference_steps=30,
num_inference_steps=10,
output_type="pil",
prompt=prompt + " " + quality_prompt,
negative_prompt=negative_prompt,
prompt_embeds=prompt_embeds,
prompt_embeds_pooled=prompt_embeds_pooled,
negative_prompt_embeds=negative_prompt_embeds,
negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
num_images_per_prompt=num_images_per_prompt,
).images[0]
output.save("output.png")  # or display(output) in a notebook
```
## Training:
### Stage C
**Base model**: Disty0/sotediffusion-wuerstchen3
**GPU used**: 7x Nvidia H100 80GB SXM5
| parameter | value |
|---|---|
| **amp** | no |
| **weights** | fp32 |
| **save weights** | fp32 |
| **resolution** | 1024x1024 |
| **effective batch size** | 84 |
| **unet learning rate** | 2e-6 |
| **te learning rate** | 1e-7 |
| **optimizer** | AdamW 8bit |
| **images** | 6M * 2 captions per image |
| **epochs** | 1 |
### Stage B
**Base model**: Disty0/sotediffusion-wuerstchen3-decoder
**GPU used**: 1x Nvidia H100 80GB SXM5
| parameter | value |
|---|---|
| **amp** | no |
| **weights** | fp32 |
| **save weights** | fp32 |
| **resolution** | 1024x1024 |
| **effective batch size** | 8 |
| **unet learning rate** | 8e-6 |
| **te learning rate** | none |
| **optimizer** | AdamW |
| **images** | 120K |
| **epochs** | 6 |
## WD Tags:
Model is trained with this tag order:
```
aesthetic tags, quality tags, date tags, custom tags, rating tags, character, series, rest of the tags
```
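For illustration, a minimal sketch of assembling a caption in that order (the tag values below are hypothetical placeholders, not taken from the training data):
```python
# Hypothetical placeholder tags, ordered as described above.
tag_groups = [
    "very aesthetic",      # aesthetic tags
    "best quality",        # quality tags
    "newest",              # date tags
    "anime wallpaper",     # custom tags
    "general",             # rating tags
    "character name",      # character
    "series name",         # series
    "1girl, solo, smile",  # rest of the tags
]
caption = ", ".join(tag_groups) + ","
print(caption)
```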
### Date:
| tag | date |
|---|---|
| **newest** | 2022 to 2024 |
| **recent** | 2019 to 2021 |
| **mid** | 2015 to 2018 |
| **early** | 2011 to 2014 |
| **oldest** | 2005 to 2010 |
### Aesthetic Tags:
**Model used**: shadowlilac/aesthetic-shadow-v2
| score greater than | tag | count |
|---|---|---|
| **0.90** | extremely aesthetic | 125,451 |
| **0.80** | very aesthetic | 887,382 |
| **0.70** | aesthetic | 1,049,857 |
| **0.50** | slightly aesthetic | 1,643,091 |
| **0.40** | not displeasing | 569,543 |
| **0.30** | not aesthetic | 445,188 |
| **0.20** | slightly displeasing | 341,424 |
| **0.10** | displeasing | 237,660 |
| **rest of them** | very displeasing | 328,712 |
### Quality Tags:
**Model used**: https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/models/aes-B32-v0.pth
| score greater than | tag | count |
|---|---|---|
| **0.980** | best quality | 1,270,447 |
| **0.900** | high quality | 498,244 |
| **0.750** | great quality | 351,006 |
| **0.500** | medium quality | 366,448 |
| **0.250** | normal quality | 368,380 |
| **0.125** | bad quality | 279,050 |
| **0.025** | low quality | 538,958 |
| **rest of them** | worst quality | 1,955,966 |
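Both the aesthetic and quality tables bucket a score into the first tag whose threshold it clears. A minimal sketch of that mapping (thresholds copied from the tables above; the function name is ours):
```python
AESTHETIC_TAGS = [
    (0.90, "extremely aesthetic"),
    (0.80, "very aesthetic"),
    (0.70, "aesthetic"),
    (0.50, "slightly aesthetic"),
    (0.40, "not displeasing"),
    (0.30, "not aesthetic"),
    (0.20, "slightly displeasing"),
    (0.10, "displeasing"),
]

QUALITY_TAGS = [
    (0.980, "best quality"),
    (0.900, "high quality"),
    (0.750, "great quality"),
    (0.500, "medium quality"),
    (0.250, "normal quality"),
    (0.125, "bad quality"),
    (0.025, "low quality"),
]

def score_to_tag(score: float, table: list[tuple[float, str]], fallback: str) -> str:
    # Return the tag for the first threshold the score exceeds.
    for threshold, tag in table:
        if score > threshold:
            return tag
    return fallback  # the "rest of them" tag

print(score_to_tag(0.85, AESTHETIC_TAGS, "very displeasing"))  # very aesthetic
print(score_to_tag(0.01, QUALITY_TAGS, "worst quality"))       # worst quality
```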
## Rating Tags:
| tag | count |
|---|---|
| **general** | 1,416,451 |
| **sensitive** | 3,447,664 |
| **nsfw** | 427,459 |
| **explicit nsfw** | 336,925 |
## Custom Tags:
| dataset name | custom tag |
|---|---|
| **image boards** | date, |
| **text** | The text says "text", |
| **characters** | character, series, |
| **pixiv** | art by Display_Name, |
| **visual novel cg** | Full_VN_Name (short_3_letter_name), visual novel cg, |
| **anime wallpaper** | date, anime wallpaper, |
## Limitations and Bias
### Bias
- This model is intended for anime illustrations; realistic capabilities are not tested at all.
### Limitations
- Can fall back to realistic styles. Add the "realistic" tag to the negative prompt when this happens.
- Far shot eyes and hands can be bad.
- Still has room for further training.
## License
SoteDiffusion models fall under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/), which is compatible with the Stable Diffusion models' license. Key points:
1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**Notes**: Anything not covered by the Fair AI license is inherited from the Stability AI Non-Commercial License, included in this repository as LICENSE_INHERIT.