Introduction:

  • This model has been renamed several times, so I am not sure how best to introduce it. It is an open AI-art model, free to use and fine-tune, created out of my curiosity. I hope you will like it. Have fun! (●'◡'●)

Use:

  • For 🧨Diffusers:
from diffusers import DiffusionPipeline

# Load the pipeline and move it to the GPU.
pipe = DiffusionPipeline.from_pretrained("Ojimi/anime-kawai-diffusion")
pipe = pipe.to("cuda")

# Danbooru-style tags work best as prompts.
prompt = "1girl, animal ears, long hair, solo, cat ears, choker, bare shoulders, red eyes, fang, looking at viewer, animal ear fluff, upper body, black hair, blush, closed mouth, off shoulder, bangs, bow, collarbone"
image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0]
image.save("kawai.png")
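
If GPU memory is limited, the same pipeline can also be loaded in half precision. This is a minimal sketch using the standard torch_dtype argument of from_pretrained; nothing here is specific to this model.

import torch
from diffusers import DiffusionPipeline

# Assumption: a CUDA GPU is available; float16 roughly halves VRAM usage.
pipe = DiffusionPipeline.from_pretrained(
    "Ojimi/anime-kawai-diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")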

Tips:

  • The masterpiece and best quality tags are not necessary, as they sometimes lead to contradictory results; add them only if the output is distorted or discolored.
  • The CFG scale should be 7.5 and the step count 28 for the best quality and performance (see the sketch after this list).
  • Start from a sample image for your idea: interrogate it with DeepBooru, then adjust the resulting tags to suit what you want.
  • Use it as a supportive tool for creating artwork, not something to rely on completely.
  • The Clip skip should be 2.
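
As a rough illustration of those settings with 🧨Diffusers: guidance_scale corresponds to the CFG scale and num_inference_steps to the step count, while Clip skip is a web UI setting and is not passed here.

# Assumes the pipeline from the Use section is already loaded.
image = pipe(
    prompt,
    negative_prompt="lowres, bad anatomy",
    guidance_scale=7.5,      # CFG scale
    num_inference_steps=28,  # step count
).images[0]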

Limitations:

  • The line work tends to be hard rather than soft.
  • Loss of detail, anatomical errors (such as six-fingered hands), deformation, blurring, and unclear images are inevitable.
  • ⚠️ Content may not be appropriate for all ages: because the model is trained on data that includes adult content, generated images may not be suitable for children (your country may have specific regulations about this). If you do not want adult content to appear, add extra safety measures, such as putting "nsfw" in the negative prompt (see the sketch after this list).
  • The results the model generates are impressive, but it currently only understands English; for other languages, consider using a third-party translation tool.
  • The model is trained on the Danbooru and NAI tagging systems, so long natural-language prompts may give poor results.
  • My amount of money: 0 USD =((.
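
A minimal sketch of the negative-prompt safety measure mentioned above; the exact tag list is only an example.

# Adding "nsfw" to the negative prompt to discourage adult content (not a guarantee).
image = pipe(prompt, negative_prompt="lowres, bad anatomy, nsfw").images[0]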

Desires:

Because this version was made only by me and a few associates, the model will not be perfect and may differ from what people expect. Any contribution will be respected.

Want to support me? Thank you; please help me make it better. ❤️

Special Thanks:

This wouldn't have happened if they hadn't made their breakthroughs.

  • Runwayml: Base model.
  • CompVis: VAE Trainer.
  • stabilityai: stabilityai/sd-vae-ft-mse-original (Hugging Face).
  • d8ahazard: Dreambooth.
  • Automatic1111: Web UI.
  • Mikubill: Where my ideas started.
  • ChatGPT: Helped me do crazy things I thought I would never do.
  • Novel AI, Anything Model, Abyss Orange Model: Dataset images. An AI made thousands of pictures for me without copyright worries or disputes.
  • Danbooru: Helped me write the correct tags.
  • My friends and others: Provided quality images.
  • And you 🫡❤️

Copyright:

This license allows anyone to copy and modify the model, but please follow the terms of the CreativeML Open RAIL-M. You can learn more about the CreativeML Open RAIL-M here.

If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights to the model remain valid.

All AI-generated images are yours; you can do whatever you want with them, but please obey the laws of your country. We will not be responsible for any problems you cause.

You may merge this model with another model, but if you share the merged model, don't forget to add me to the credits.

Don't forget me.

Have fun with your waifu! (●'◡'●)

Do you want to sponsor computing resources for us? Thank you. Please sponsor me on Ko-fi at https://ko-fi.com/projectk.
