
Novelai-Diffusion

Novelai-Diffusion is a latent diffusion model that generates high-quality anime images.

This is the Diffusers version of the model, provided to make Novelai-Diffusion easier for everyone to use.

Gradio & Colab Demo

A Gradio Web UI and a Colab notebook using Diffusers are available for running Novelai Diffusion:

Open In Colab

Run Novelai Diffusion on TPU (beta):

Open In Colab
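
If you prefer to host a demo yourself, a minimal Gradio wrapper around the Diffusers pipeline could look like the sketch below. This is only an illustration and not the code behind the linked demo; the interface layout and defaults are assumptions.

import gradio as gr
import torch
from diffusers import DiffusionPipeline

# load the pipeline once at startup
pipe = DiffusionPipeline.from_pretrained(
    "animelover/novelai-diffusion",
    custom_pipeline="waifu-research-department/long-prompt-weighting-pipeline",
    torch_dtype=torch.float16,
)
pipe.safety_checker = None
pipe = pipe.to("cuda")

def generate(prompt, negative_prompt):
    # generate a single 512x768 image from the given prompts
    return pipe.text2img(prompt, negative_prompt=negative_prompt, width=512, height=768).images[0]

gr.Interface(fn=generate, inputs=["text", "text"], outputs="image").launch()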

Example Code

PyTorch

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("animelover/novelai-diffusion",
                                         custom_pipeline="waifu-research-department/long-prompt-weighting-pipeline",
                                         torch_dtype=torch.float16)
pipe.safety_checker = None  # disable the safety checker; filter unwanted content via the negative prompt instead
pipe = pipe.to("cuda")

prompt = "best quality, masterpiece, 1girl, cute, looking at viewer, smiling, open mouth, white hair, red eyes, white kimono, sakura petal"
neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
# autocast is not needed here; it only slows down fp16 inference
image = pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=768, max_embeddings_multiples=5, guidance_scale=12).images[0]
image.save("test.png")
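
If you run into GPU memory limits, Diffusers' built-in attention slicing reduces peak VRAM usage, and the scheduler can be swapped for a different sampler. Both calls are standard Diffusers APIs; the Euler Ancestral choice below is only an example, not a recommendation from the model authors.

from diffusers import EulerAncestralDiscreteScheduler

# trade a little speed for lower peak VRAM usage
pipe.enable_attention_slicing()

# example: switch to the Euler Ancestral sampler (any Diffusers scheduler works here)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)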

ONNX Runtime

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("animelover/novelai-diffusion", revision="onnx16",
                                         custom_pipeline="waifu-research-department/onnx-long-prompt-weighting-pipeline",
                                         provider="CUDAExecutionProvider")
pipe.safety_checker = None  # disable the safety checker; filter unwanted content via the negative prompt instead

prompt = "best quality, masterpiece, 1girl, cute, looking at viewer, smiling, open mouth, white hair, red eyes, white kimono, sakura petal"
neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
image = pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=768, max_embeddings_multiples=5, guidance_scale=12).images[0]
image.save("test.png")
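
Without a CUDA-capable GPU, the same ONNX pipeline can be loaded with onnxruntime's CPU execution provider. Note this is an untested sketch: the onnx16 revision stores half-precision weights, so CPU inference may be slow or unsupported depending on your onnxruntime build.

from diffusers import DiffusionPipeline

# same pipeline as above, but using the CPU execution provider
pipe = DiffusionPipeline.from_pretrained("animelover/novelai-diffusion", revision="onnx16",
                                         custom_pipeline="waifu-research-department/onnx-long-prompt-weighting-pipeline",
                                         provider="CPUExecutionProvider")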

Note: the "waifu-research-department/long-prompt-weighting-pipeline" custom pipeline lets you pass prompts longer than the default token limit and adjust the weighting of individual terms (see the example below). It requires diffusers>=0.4.0.
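
The weighting syntax of the long-prompt-weighting pipeline uses parentheses to emphasize terms and square brackets to de-emphasize them, e.g. (word:1.2) or [word]. The weights below are purely illustrative and reuse the pipe and neg_prompt from the PyTorch example above.

# emphasize "masterpiece" and the kimono, de-emphasize the background
weighted_prompt = "best quality, (masterpiece:1.2), 1girl, cute, looking at viewer, (white kimono:1.1), [simple background], sakura petal"
image = pipe.text2img(weighted_prompt, negative_prompt=neg_prompt, width=512, height=768, max_embeddings_multiples=5, guidance_scale=12).images[0]
image.save("test_weighted.png")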

Acknowledgements

Thanks to NovelAI for this awesome model. Support them if you can.
