
Civitai: https://civitai.com/models/91609/unrealibrary-mix (more previews on Civitai)

None of the preview images use embeddings or LoRAs.

My merged models:

DreaMirror: https://civitai.com/models/30294 / https://huggingface.co/Fre2C/DreaMirror-Mix

UnreaLibrary: https://civitai.com/models/91609 / https://huggingface.co/Fre2C/UnreaLibrary-Mix

The goal of this model is to follow prompts as faithfully as possible (which seems a bit difficult for 2D models), to preserve the creativity of 2D models (so I did not merge in any 3D/2.5D models, which is why most of the time went into fighting with hands), and to keep an appropriate light-dark contrast.

You can try anything with it!

I learned a lot from the following places; thank you very much.

https://huggingface.co/WarriorMama777/OrangeMixs

https://civitai.com/models/9409/or-anything-v5

https://economylife.net/u-net-marge-webui1111/

https://docs.qq.com/doc/DTkRodlJ1c1VzcFBr?u=e7c714671e694797a04f1d58aff5c8b0 (English translation: https://rentry.org/Merge_Block_Weight_-china-_v1_Beta#1-introduction)

https://docs.qq.com/doc/DQ1Vzd3VCTllFaXBv?_t=1685979317852&u=e7c714671e694797a04f1d58aff5c8b0

https://www.figma.com/file/1JYEljsTwm6qRwR665yI7w/Merging-lab%E3%80%8CHosioka-Fork%E3%80%8D?type=design&node-id=1-69

Suggestions for use:

If a face comes out broken, or you simply want better face quality, use inpainting with "Inpaint area: Only masked" (this gives the best results), or improve it with Hires. fix; trying other seeds or other tools also works well.
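
For reference, a minimal diffusers sketch that approximates the webui's "Inpaint area: Only masked" face fix by cropping the face, re-rendering the crop with img2img, and pasting it back. The checkpoint filename, face box, prompts, and strength below are placeholder assumptions, not values from this card.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load the merged checkpoint from a local single-file download (hypothetical filename).
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "UnreaLibrary-Mix-V1.safetensors", torch_dtype=torch.float16
).to("cuda")

image = Image.open("broken_face.png").convert("RGB")
box = (192, 64, 448, 320)                   # hypothetical face region (left, top, right, bottom)
face = image.crop(box).resize((512, 512))   # re-render the face at the model's native resolution

fixed = pipe(
    prompt="1girl, detailed face, looking at viewer",
    negative_prompt="lowres, bad anatomy",
    image=face,
    strength=0.4,        # low strength keeps the composition and only cleans up details
    guidance_scale=7.0,
).images[0]

# Paste the repaired crop back into the original image.
image.paste(fixed.resize((box[2] - box[0], box[3] - box[1])), box)
image.save("fixed_face.png")
```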

A base resolution a little higher than 512 x 512, combined with Hires. fix, gives better image quality (if VRAM is tight, try a lower Hires. fix upscale factor or another upscaling method).
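
For reference, a minimal Hires. fix-style two-pass sketch in diffusers (the tip above refers to the webui's built-in Hires. fix): render at a base size a little above 512, then run a low-strength img2img pass at roughly 1.5x. The filename, sizes, and prompts are placeholder assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

txt2img = StableDiffusionPipeline.from_single_file(
    "UnreaLibrary-Mix-V1.safetensors", torch_dtype=torch.float16
).to("cuda")
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)  # reuse the already-loaded weights

prompt = "1girl, library, soft lighting, detailed background"
negative = "lowres, bad anatomy, bad hands"

# First pass: base resolution slightly above 512 x 512.
base = txt2img(prompt, negative_prompt=negative, width=576, height=832).images[0]

# Second pass: upscale ~1.5x and refine; keep the factor low if VRAM is tight.
hires = img2img(
    prompt,
    negative_prompt=negative,
    image=base.resize((864, 1248)),
    strength=0.5,
    guidance_scale=7.0,
).images[0]
hires.save("hires.png")
```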

Positive quality prompts (such as best quality) are unnecessary; they reduce the variety of possible images and push everything toward a single style.

It is better to spend the weight you would have put on positive quality prompts on negative quality prompts instead.
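
As a concrete illustration of the two tips above (the specific tags are examples, not recommendations from this card): no quality tags on the positive side, and the weighted quality terms moved to the negative side.

```python
prompt = "1girl, reading in a library, window light, detailed background"
negative_prompt = "(worst quality, low quality:1.4), bad anatomy, bad hands"
```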

If the image does not feel rich enough, try describing the scene in more detail to bring it closer to what you imagine.

A prompt term's weight and its position in the prompt affect how strongly it shows up in the image.
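
For example, with the AUTOMATIC1111 attention syntax (plain diffusers does not parse this syntax; a helper library is needed there), both the weight and the position change how much a term dominates. These prompts are illustrative only.

```python
# Same terms, different emphasis: a higher weight and an earlier position make the
# umbrella more prominent; a lower weight and a later position make it recede.
prompt_strong = "(red umbrella:1.3), 1girl, rainy street"
prompt_weak   = "1girl, rainy street, (red umbrella:0.8)"
```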

If a prompt term gets no response, troubleshoot in this order: synonyms (different wordings of the same concept), prompt conflicts (positive vs. negative), model issues (check whether other models respond to the same term), and embeddings (I am not in the habit of using them, but given how they work, I list them here for reference).

If you want to roll the dice with only a few prompts, it is best to add umbrella to the negative prompt (at least in V1).

I usually switch to Clip skip 2 when the results do not meet expectations.
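
In the webui this is the "Clip skip" setting; recent versions of the diffusers Stable Diffusion pipelines expose a clip_skip argument instead. The sketch below assumes a hypothetical local checkpoint file; note that the two tools count the setting differently, so double-check which text-encoder layer you are actually using.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "UnreaLibrary-Mix-V1.safetensors", torch_dtype=torch.float16
).to("cuda")

# In diffusers, clip_skip=1 uses the penultimate CLIP layer, which is what the
# webui labels "Clip skip: 2".
image = pipe("1girl, library", negative_prompt="lowres", clip_skip=1).images[0]
image.save("clip_skip.png")
```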

Use LoRAs as you like!

I use these two VAEs: https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt and https://civitai.com/models/22354/clearvae
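
For anyone using diffusers, a standalone VAE file downloaded from either link can be swapped in roughly like this (the checkpoint and VAE filenames below are placeholders).

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the external VAE from its single .ckpt/.safetensors file.
vae = AutoencoderKL.from_single_file("kl-f8-anime2.ckpt", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_single_file(
    "UnreaLibrary-Mix-V1.safetensors", torch_dtype=torch.float16
)
pipe.vae = vae            # replace the baked-in VAE with the external one
pipe.to("cuda")
```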

V1 preview images (see the Civitai page above)

Models used

kawaiimixNijiV5Cute_v10【58f37f4736】

Counterfeit-V3.0_fp32【17277FBE68】

pikasNewGeneration_v20【6C509880A5】

breakdomainanime_A0440【1870FA10C3】

plagion_v10【0C42B21C09】

AnythingV5V3_v5PrtRE【7f96a1a9ca】

tComicV35_v35【25750140EA】

Recipe

Merged with https://github.com/hako-mikan/sd-webui-supermerger/

(kawaiimixNijiV5Cute_v10 x (1 - alpha) + Counterfeit-V3.0_fp32 x alpha) x (1 - beta) + pikasNewGeneration_v20 x beta

alpha: 0.7,1.0,0.9,0.8,0.7,0.6,0.6,0.7,0.8,0.9,0.7,0.5,0.7,0.7,0.85,0.75,0.65,0.75,0.85,0.75,0.65,0.75,0.85,0.9,0.8,0.8

beta: 0.75,0.35,0.45,0.55,0.65,0.75,0.85,0.75,0.85,0.75,0.6,0.6,0.6,0.5,0.35,0.45,0.55,0.6,0.65,0.55,0.6,0.5,0.35,0.4,0.5,0.4

Named as step1

(breakdomainanime_A0440 x (1 - alpha) + plagion_v10 x alpha) x (1 - beta) + step1 x beta

alpha: 0.25,0.35,0.45,0.55,0.65,0.55,0.45,0.55,0.4,0.6,0.7,0.75,0.8,0.4,0.4,0.5,0.6,0.7,0.8,0.6,0.5,0.4,0.5,0.4,0.7,0.7

beta: 0.7,0.85,0.75,0.65,0.55,0.7,0.6,0.5,0.4,0.5,0.6,0.5,0.4,0.6,0.8,0.7,0.6,0.8,0.7,0.6,0.5,0.4,0.5,0.6,0.5,0.4

Named as step2

(AnythingV5V3_v5PrtRE x (1 - alpha) + tComicV35_v35 x alpha) x (1 - beta) + step2 x beta

alpha: 0.65,0.75,0.65,0.75,0.65,0.75,0.65,0.75,0.85,1.0,0.85,0.75,0.85,0.4,0.65,0.75,0.65,0.45,0.3,0.15,0.3,0.45,0.65,0.75,0.8,0.8

beta: 0.75,0.25,0.35,0.45,0.55,0.75,0.85,0.75,0.85,0.75,0.85,1.0,1.0,0.7,0.35,0.45,0.55,0.75,0.65,0.75,0.65,0.55,0.45,0.35,0.75,0.85

Prune and save the final fp16 version.
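
For readers who want to see the formula spelled out, below is a rough re-implementation of one block-weighted merge step against plain safetensors state dicts. The card used sd-webui-supermerger; this sketch only mirrors the formula (A x (1-alpha) + B x alpha) x (1-beta) + C x beta with per-block weights. It assumes the usual 26-slot ordering BASE, IN00-IN11, M00, OUT00-OUT11, and block_index() is a simplified stand-in for supermerger's key-to-block mapping; filenames are placeholders.

```python
import re
import torch
from safetensors.torch import load_file, save_file

def block_index(key: str) -> int:
    """Map a checkpoint key to one of the 26 block-weight slots (simplified)."""
    m = re.search(r"input_blocks\.(\d+)\.", key)
    if m:
        return 1 + int(m.group(1))            # IN00..IN11 -> slots 1..12
    if "middle_block." in key:
        return 13                             # M00
    m = re.search(r"output_blocks\.(\d+)\.", key)
    if m:
        return 14 + int(m.group(1))           # OUT00..OUT11 -> slots 14..25
    return 0                                  # everything else -> BASE

def merge_three(a_path, b_path, c_path, out_path, alpha, beta):
    """(A x (1-alpha) + B x alpha) x (1-beta) + C x beta per block, saved as fp16."""
    a, b, c = load_file(a_path), load_file(b_path), load_file(c_path)
    merged = {}
    for key, t in a.items():
        if key in b and key in c and t.dtype.is_floating_point:
            i = block_index(key)
            ab = t * (1 - alpha[i]) + b[key] * alpha[i]
            merged[key] = (ab * (1 - beta[i]) + c[key] * beta[i]).to(torch.float16)
        else:
            merged[key] = t                   # keys missing from B or C are copied from A
    save_file(merged, out_path)

# Final step of the recipe above (step2 is the output of the previous merge).
alpha = [0.65,0.75,0.65,0.75,0.65,0.75,0.65,0.75,0.85,1.0,0.85,0.75,0.85,
         0.4,0.65,0.75,0.65,0.45,0.3,0.15,0.3,0.45,0.65,0.75,0.8,0.8]
beta  = [0.75,0.25,0.35,0.45,0.55,0.75,0.85,0.75,0.85,0.75,0.85,1.0,1.0,
         0.7,0.35,0.45,0.55,0.75,0.65,0.75,0.65,0.55,0.45,0.35,0.75,0.85]
merge_three("AnythingV5V3_v5PrtRE.safetensors", "tComicV35_v35.safetensors",
            "step2.safetensors", "UnreaLibrary-Mix-fp16.safetensors", alpha, beta)
```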
