Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
- Provide the model to an app such as Mochi Diffusion (GitHub / Discord) to generate images.
- The `split_einsum` version is compatible with all compute unit options, including Neural Engine. The `original` version is only compatible with the CPU & GPU option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from the original model source.
- Not all features and/or results may be available in Core ML format.
- This model does not have the unet split into chunks.
- This model does not include a safety checker (for NSFW content).
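As a minimal sketch of how a converted model like this can be run outside an app, assuming Apple's ml-stable-diffusion reference pipeline is installed (`pip install` from https://github.com/apple/ml-stable-diffusion); the model directory name below is a placeholder for wherever you unpacked this repo:

```shell
# Hypothetical paths; --compute-unit ALL matches the split_einsum
# variant, use CPU_AND_GPU for the original variant.
python -m python_coreml_stable_diffusion.pipeline \
  --prompt "Best quality, masterpiece, ultra high res, (photorealistic:1.4), 1girl" \
  -i ./majicmixRealistic_v5Preview_split-einsum \
  -o ./outputs \
  --compute-unit ALL \
  --seed 42
```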
majicmixRealistic_v5Preview:
Source(s): CivitAI
The 5th edition is coming soon. I've posted a preview version with the face of nwsj merged in; consider it a public beta.
Use Euler as the sampler.
Please don't use Face Restoration! If the face comes out badly, use ADetailer instead:
https://github.com/Bing-su/adetailer
I usually enable Dynamic Thresholding for finer control over the CFG scale; values from 1 to 20 are all worth trying:
https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
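The linked extension is based on Imagen-style dynamic thresholding, which is what lets unusually high CFG scales stay usable. A minimal NumPy sketch of the core idea (this is my simplification, not the extension's actual code; the function name and percentile default are assumptions):

```python
import numpy as np

def cfg_dynamic_threshold(uncond, cond, cfg_scale, percentile=99.5):
    """Classifier-free guidance followed by Imagen-style dynamic
    thresholding: extreme values beyond a high percentile are clipped
    and the result rescaled, instead of hard-clipping at [-1, 1].
    Simplified sketch, not the sd-dynamic-thresholding extension."""
    # Standard CFG combination of the two model predictions.
    guided = uncond + cfg_scale * (cond - uncond)
    # Dynamic threshold: the chosen percentile of absolute values.
    s = np.percentile(np.abs(guided), percentile)
    s = max(s, 1.0)  # never tighten below the static [-1, 1] range
    # Clip to [-s, s], then rescale back into [-1, 1].
    return np.clip(guided, -s, s) / s

rng = np.random.default_rng(0)
uncond = rng.normal(size=(4, 64, 64))
cond = rng.normal(size=(4, 64, 64))
out = cfg_dynamic_threshold(uncond, cond, cfg_scale=15)
print(out.min() >= -1.0 and out.max() <= 1.0)  # True
```

Even at CFG 15 the output stays in range, which is why the extension makes scales across 1~20 usable.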
I apologize for using LoRA block weights in the previous edition's example images, which confused many of you and made the examples difficult to replicate. The new edition's example images therefore use no LoRA at all. To learn about LoRA block weights, see: https://github.com/hako-mikan/sd-webui-lora-block-weight
My LoRA block weight settings:
- Body:
  - BODY: 1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1
  - BODY0.5: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1
- Face (face shape, hairstyle, eye shape, pupil color, etc.):
  - FACE: 1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0
  - FACE0.5: 1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0
  - FACE0.2: 1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0
- Hand fixing:
  - HAND: 1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0
- Clothing (use together with tags):
  - CLOTHING: 1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0
- Pose (use together with tags):
  - POSE: 1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0
- Color palette (use together with tags):
  - PALETTE: 1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1
- Character (de-stylize):
  - KEEPCHAR: 1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0
- Background (de-stylize):
  - KEEPBG: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0
- Reduce overfitting (equivalent to OUTALL):
  - REDUCEFIT: 1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1
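Each preset is 17 comma-separated multipliers, one per UNet block in sd-webui-lora-block-weight's 17-block layout for SD 1.x; a weight of 0 disables the LoRA on that block. A small sketch of how a preset maps onto named blocks (the exact block labels are my assumption of that layout, not taken from this model card):

```python
# 17-block layout assumed from sd-webui-lora-block-weight (SD 1.x):
# BASE (text encoder), then the UNet IN/MID/OUT blocks.
BLOCKS = [
    "BASE", "IN01", "IN02", "IN04", "IN05", "IN07", "IN08", "MID",
    "OUT03", "OUT04", "OUT05", "OUT06", "OUT07", "OUT08", "OUT09",
    "OUT10", "OUT11",
]

def parse_preset(preset: str) -> dict:
    """Turn a '1,0,0,...' preset string into a per-block weight table."""
    weights = [float(w) for w in preset.split(",")]
    if len(weights) != len(BLOCKS):
        raise ValueError(f"expected {len(BLOCKS)} weights, got {len(weights)}")
    return dict(zip(BLOCKS, weights))

# The FACE preset: BASE and OUT03..OUT05 keep the LoRA at full
# strength; every other block is zeroed out.
face = parse_preset("1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0")
print(face["MID"])    # 0.0
print(face["OUT03"])  # 1.0
```

The extension multiplies each block's LoRA delta by its weight, so zeroed blocks contribute nothing at all.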
A merge of several models that produces good-looking faces and handles dark scenes well (noise offset is baked in); also suitable for NSFW. Distant faces need inpainting to achieve the best results; you can also use ADetailer.
Recommended positive prompt: Best quality, masterpiece, ultra high res, (photorealistic:1.4), 1girl
For a darker image, add: in the dark, deep shadow, low key, etc.
Negative prompt: use the ng_deepnegative_v1_75t and badhandv4 embeddings.
I've used a bug-fixed version of DPM++ 2M Karras; you can find it here: https://civitai.com/models/35966/dpm-2m-alt-karras-sampler
Recommended Parameters:
Sampler: Euler a, Euler, DPM++ 2M Karras (bug-fixed) or DPM++ SDE Karras
Steps: 20~40
Hires upscaler: R-ESRGAN 4x+ or 4x-UltraSharp
Hires upscale: 2
Hires steps: 15
Denoising strength: 0.2~0.5
CFG scale: 6~8
Clip skip: 2
To inpaint the face: Inpaint --> Only masked --> set to 512x512 --> Denoising strength: 0.2~0.5
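For intuition on why "Only masked" at 512x512 recovers distant faces: the UI crops a padded square around the mask, denoises just that crop at the target resolution, and pastes it back, so a small face gets the model's full 512x512 capacity. A rough sketch of the crop-box step (the function, its padding default, and the clamping logic are my simplification, not A1111's exact code):

```python
# Hypothetical helper mirroring the "Only masked" crop: pad the mask's
# bounding box, square it up so the 512x512 resize does not distort
# the face, and keep the box inside the image.
def only_masked_crop(mask_bbox, image_size, padding=32):
    """mask_bbox: (x0, y0, x1, y1) of the masked face region.
    image_size: (width, height). Returns a square crop box."""
    x0, y0, x1, y1 = mask_bbox
    w, h = image_size
    # Pad the mask region, clamped to the image borders.
    x0, y0 = max(0, x0 - padding), max(0, y0 - padding)
    x1, y1 = min(w, x1 + padding), min(h, y1 + padding)
    # Expand the shorter side into a square around the center.
    side = max(x1 - x0, y1 - y0)
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    nx0 = min(max(0, cx - side // 2), w - side)
    ny0 = min(max(0, cy - side // 2), h - side)
    return nx0, ny0, nx0 + side, ny0 + side

# A 120x160 face mask in a 768x1024 image yields a 224x224 crop,
# which is then resized to 512x512 for denoising.
print(only_masked_crop((300, 100, 420, 260), (768, 1024)))
# (248, 68, 472, 292)
```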
basic formula: KanPiroMix + XSMix + ChikMix
Follow my Telegram channel for more example images: https://t.me/majic_NSFW