Core ML Converted Model:

- This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
- Provide the model to an app such as Mochi Diffusion (GitHub / Discord) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine. The `original` version is only compatible with the CPU & GPU option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in Core ML format.
- This model does not have the unet split into chunks.
- This model does not include a safety checker (for NSFW content).
- This model can be used with ControlNet.
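For reference, a conversion like the one described above can be sketched with Apple's `ml-stable-diffusion` package. The model ID and output path below are placeholders, and the exact flags used for this particular conversion are an assumption, not a record of the converter's actual invocation:

```shell
# Sketch of a Stable Diffusion -> Core ML conversion using Apple's
# ml-stable-diffusion package (https://github.com/apple/ml-stable-diffusion).
# <source-model-id> and <output-dir> are placeholders, not real values.
#
# SPLIT_EINSUM targets all compute units, including the Neural Engine;
# ORIGINAL targets the CPU & GPU option only -- matching the two variants
# described above. --convert-vae-encoder is what enables image2image.
# --chunk-unet is omitted here because this model is not chunked.
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version <source-model-id> \
  --convert-unet \
  --convert-text-encoder \
  --convert-vae-decoder \
  --convert-vae-encoder \
  --attention-implementation SPLIT_EINSUM \
  -o <output-dir>
```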
westernAnimation_v1_cn:
Source(s): CivitAI
Western Animation Diffusion
Comicbook and Western Animation Style Model
Do you like what I do? Consider supporting me on Patreon or feel free to buy me a coffee.
A ❤️, a kind comment or a review is greatly appreciated.
Purpose of this model
- Train character LoRAs on datasets made mostly of cartoon screencaps or comic books, with less style transfer and less overfitting.
- Add variety to mixes.
- Provide an alternative to anime models for western-style content.
- NOT to be used with style LoRAs. Also NOT for style LoRA training.
Suggested settings
- Set the ETA Noise Seed Delta (ENSD) to 31337.
- Set CLIP Skip to 2.
- DISABLE face restore. It's terrible; never use it.
- Use negative prompts and embeddings that don't ruin the style.
- Use AnimeVideo or Foolhardy as the upscaler in highres fix.
- Use ADetailer for far-away shots or full-body images to avoid blurred faces.
Brief history
This was requested by a supporter, and I also wanted to see if I was capable of doing it. It was a fun little project.