Core ML Converted Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Provide the model to an app such as Mochi Diffusion (see its GitHub / Discord) to generate images; a minimal sketch follows this list.
  • The split_einsum version is compatible with all compute unit options, including the Neural Engine.
  • The original version is only compatible with the CPU & GPU option.
  • Custom resolution versions are tagged accordingly.
  • This model was converted with a vae-encoder for use with image2image.
  • This model is fp16.
  • Some features and/or results may not be available in Core ML format.
  • This model does not have the unet split into chunks.
  • This model does not include a safety checker (for NSFW content).
  • This model can be used with ControlNet.
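For reference, here is a minimal sketch of generating an image from a converted model with Apple's ml-stable-diffusion Python pipeline. The model and output directories and the prompt are placeholders, and the module name and flags follow that repository but may differ between versions.

```python
# Minimal sketch: run the converted Core ML model via Apple's
# ml-stable-diffusion package. Paths and prompt are placeholders;
# flag names are assumptions based on the upstream repository.
import subprocess

subprocess.run(
    [
        "python", "-m", "python_coreml_stable_diffusion.pipeline",
        "--prompt", "1girl, masterpiece, best quality",
        "-i", "./AnythingV5Ink_split-einsum",  # placeholder path to the converted model
        "-o", "./output",
        "--compute-unit", "ALL",  # use CPU_AND_GPU for the "original" variant
        "--seed", "93",
    ],
    check=True,
)
```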

AnythingV5Ink, AnythingV5PrtRE

Sources: CivitAI

ALL

The Anything series currently has five base versions: V1, V2, V3, V3.2, and V5; the rest are variants. Versions labeled "RE" are repaired versions that fix problems such as issues with the CLIP component. Prt is a specially pruned build of V5 and is the most recommended version. Also, what people call Anything ink / A-ink refers to the V3.2++ model; I have updated the labels so that nobody has trouble finding it.

NoVAE

This is the version that truly does not contain a VAE. To use it, select an external VAE in the webui's model selection dropdown.

I put this version further down the list so that people don't download it and then find it won't run.
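Outside the webui the same idea applies: load an external VAE yourself and attach it to the pipeline. A minimal sketch with diffusers, assuming a hypothetical local checkpoint file and using stabilityai/sd-vae-ft-mse purely as an example VAE:

```python
# Minimal sketch: pair the NoVAE checkpoint with an external VAE in diffusers.
# The checkpoint filename is a placeholder; the VAE repo id is just an example.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_single_file(
    "./AnythingV5-NoVAE.safetensors",  # placeholder path to the NoVAE checkpoint
    torch_dtype=torch.float16,
)
pipe.vae = vae  # swap in the external VAE before moving to the device
pipe = pipe.to("cuda")
```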

V3.2++[Ink]

The V3.2++ version was made to replace the old Anything V3; the model labeled YB in the preview images is Anything V3.2++. If you downloaded the YB build of the Txydm test version a while ago, there is no need to download Anything V3.2++, as the two models are exactly the same.

For performance reasons, Anything V3.2++ was built on a different base model, and current testing shows that it is not very compatible with LoRA models that use NAI or its derivatives as their base; forcing them will produce bad images. If you want more accurate prompt following or want to use more LoRA models, please use the V5-Prt version rather than V3.2++.

If you train LoRA models on Anything, V5 is recommended rather than V3.2++, because a LoRA trained on V3.2++ will not work on most checkpoint models.

A-Ink can now be considered outside the category of merged models, but Civitai does not allow marking an individual version as merged or trained. The base-model training used a large number of images generated by Niji, and the secondary training used images generated by Stable Diffusion-related models.
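For completeness, applying a LoRA on top of the recommended V5-Prt checkpoint looks roughly like this in diffusers; both filenames below are hypothetical placeholders.

```python
# Minimal sketch: load a LoRA on top of the V5-Prt checkpoint with diffusers.
# Both filenames are placeholders; per the note above, prefer V5-Prt over
# V3.2++ when using NAI-derived LoRA models.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./AnythingV5PrtRE.safetensors",  # placeholder checkpoint path
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights(".", weight_name="example_character_lora.safetensors")  # placeholder LoRA file
```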

V5[PRT]

Models from AnythingV5 onward do not produce seemingly good images from simple prompts alone, nor from just "1girl"; they need precise prompts to achieve the intended result.

OR

The Anything series has no 4.0 or 4.5 versions, so please do not associate them with me. I had given up on making versions after Anything V3 because I realized the various problems with merged models, and I did not expect someone to produce versions 4 and 4.5. For 2.5D models, please use AOM2 rather than something like Anything 4.5; those models are extremely poor both in use and as base models for training.

MADE

万象熔炉 (Anything) originated as a joke among the authors of 元素法典; because volume 1.5 of 元素法典 was titled 万象熔炉, that name was used for the project. Anything V1.0 merged every anime-style model that could be found at the time, while Anything 2.1 and 3.0 selectively used only some models to avoid producing bad images.

At first I did not know about platforms like Hugging Face or Civitai, so the models were only uploaded to Baidu Netdisk. One day someone in a QQ group asked me whether the model in the news was made by me, and only then did I find out that the model had been uploaded to the major platforms and had gained considerable popularity. Later I learned about Hugging Face and Civitai and uploaded the models there; of course, I do not know how to use Hugging Face, so the models there were uploaded by someone else on my behalf.

As one of the earliest anime merge models, AnythingV3 was hyped by all sorts of marketing accounts and self-media, complete with "patriotic marketing". It gradually spread beyond the community, became the so-called "best model" of the time, and for a while was used by people selling bundled packages and scamming beginners. (For that reason I strongly resent the model being mindlessly hyped by marketing accounts and self-media and being used to scam beginners.)

USE

Recommended parameters:

Anything

You can use any sampler, step count, and CFG scale you like. For example, I like the following parameters:

Sampler: Euler A

Steps: 20

CFG: 7

Clip Skip: 2

Negatives: whatever you need, not a fixed list!

However, for better results, please do not use EasyNegative.
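These settings translate directly to a diffusers call; here is a minimal sketch with a placeholder checkpoint path, prompt, and negatives (the clip_skip argument requires a reasonably recent diffusers release).

```python
# Minimal sketch: the recommended settings (Euler a, 20 steps, CFG 7, clip skip 2)
# expressed with diffusers. Checkpoint path, prompt, and negatives are placeholders.
import torch
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./AnythingV5PrtRE.safetensors",  # placeholder checkpoint path
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler A"

image = pipe(
    prompt="1girl, masterpiece, best quality",  # placeholder prompt
    negative_prompt="lowres, bad anatomy",      # whatever you need, not a fixed list
    num_inference_steps=20,
    guidance_scale=7.0,
    clip_skip=2,
).images[0]
image.save("example.png")
```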

OTHER

Hugging Face: Linaqruf/anything-v3.0

[The model on Hugging Face was not uploaded by me; it was uploaded by someone else with my permission, because I don't speak English.]

————————————————————

For questions about the model, please see the documents below:

[ZH-CN] https://docs.qq.com/doc/DQ1Vzd3VCTllFaXBv

[EN] https://civitai.com/articles/640/model-basis-theory

————————————————————

As with all models, this notice is here mainly to cover myself; use the model however you like.

The Anything model release link is: https://civitai.com/models/9409

You are free to merge this model into other works, but if you share the merged model, please credit it.

Beyond that, anyone is allowed to copy and modify the model, but please comply with the CreativeML Open RAIL-M license. You can learn more about CreativeML Open RAIL-M here: License - a Hugging Face Space by CompVis

The model can be used as freely as any other model, but please comply with the laws and regulations of your region to avoid trouble (we are not responsible).

Examples

AnythingV5Ink

AnythingV5
