
Core ML Converted Model:

  • This model was converted to Core ML for use on Apple Silicon devices. Conversion instructions can be found here.
  • Use the model with an app such as Mochi Diffusion (GitHub / Discord) to generate images.
  • The split_einsum version is compatible with all compute-unit options, including the Neural Engine.
  • The original version is only compatible with the CPU & GPU options.
  • Custom resolution versions are tagged accordingly.
  • The vae-ft-mse-840000-ema-pruned.ckpt VAE is embedded into the model.
  • This model was converted with a vae-encoder for use with image2image.
  • This model is fp16.
  • Descriptions are posted as-is from original model source.
  • Not all features and/or results may be available in CoreML format.
  • This model does not have the unet split into chunks.
  • This model does not include a safety checker (for NSFW content).
  • This model can be used with ControlNet.
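As a sketch of how a converted model like this can be run outside a GUI app, the command below uses Apple's `ml-stable-diffusion` reference pipeline. The model directory name is an assumption; point `-i` at wherever the converted `.mlpackage` files actually live:

```shell
# Sketch using Apple's ml-stable-diffusion Python pipeline (assumes that repo
# is installed and the converted Core ML files are in ./DreamShaper-v8_cn).
# CPU_AND_NE routes work to the Neural Engine, which suits split_einsum builds;
# use CPU_AND_GPU for "original" builds.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a dreamy landscape, highly detailed" \
    -i ./DreamShaper-v8_cn \
    -o ./output \
    --compute-unit CPU_AND_NE \
    --seed 42
```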

DreamShaper-v8_cn:

Source(s): CivitAI

DreamShaper - V∞!

Please check out my other base models, including SDXL ones!

Check the version description below for more info.

Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials,

or feel free to buy me a coffee ☕ 🎟️

Join my Discord Server

Available on the following websites with GPU acceleration: Mage.space, Sinkin.ai, RandomSeed, AnimeMaker.ai, https://tensor.art/u/600303455797521413

New Negative Embedding for this: Bad Dream.

Message from the author

Hello hello, my fellow AI Art lovers. Version 8 has just been released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

DreamShaper started as a model to have an alternative to MidJourney in the open source world. I didn't like how MJ was handled back when I started and how closed it was and still is, as well as the lack of freedom it gives to users compared to SD. Look at all the tools we have now from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.

With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "swiss knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users that don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.

Not before one. Last. Push.

And here it is; I hope you enjoy it. And thank you for all the support you've given me in recent months.

PS: the primary focus is still art and illustrations. Being good at everything comes second.

Suggested settings:

  • I had CLIP skip 2 on some pics, the model works with that too.
  • I have ENSD set to 31337, in case you need to reproduce some results, but it doesn't guarantee it.
  • All of the sample images used highres. fix or img2img at a higher resolution. Some even used ADetailer. Be careful with that, though, as it tends to make all faces look the same.
  • I don't use "restore faces".
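As a minimal sketch, the suggested A1111 settings above can be mapped to a diffusers call roughly as follows. The parameter values beyond CLIP skip are illustrative assumptions, and A1111's ENSD has no direct diffusers equivalent, so a fixed seed stands in for reproducibility here:

```python
# Hypothetical mapping of the suggested A1111 settings onto diffusers.
# Only clip_skip comes from the card; the rest are illustrative defaults.
settings = {
    "clip_skip": 2,              # "CLIP skip 2"
    "seed": 31337,               # ENSD value reused as a plain seed (assumption)
    "num_inference_steps": 30,   # illustrative
    "guidance_scale": 7.0,       # illustrative
}

def generate(prompt: str, model_path: str = "Lykon/DreamShaper"):
    # Imports are kept local so the settings dict above can be inspected
    # without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    # This Core ML card ships without a safety checker, mirrored here.
    pipe = StableDiffusionPipeline.from_pretrained(model_path, safety_checker=None)
    generator = torch.Generator("cpu").manual_seed(settings["seed"])
    return pipe(
        prompt,
        clip_skip=settings["clip_skip"],
        num_inference_steps=settings["num_inference_steps"],
        guidance_scale=settings["guidance_scale"],
        generator=generator,
    ).images[0]
```

A highres-fix analogue would be a second img2img pass at a larger resolution, which diffusers exposes as a separate `StableDiffusionImg2ImgPipeline`.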

For old versions:

NOTES

Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be harder to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!

Original v1 description: After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.

I hope you'll enjoy it as much as I do.

Official HF repository: https://huggingface.co/Lykon/DreamShaper
