---
language:
- en
library_name: diffusers
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
license: openrail++
---

# DashAnimeXL V1

DashAnimeXL V1 is a diffusion-based text-to-image generative model. It is a finetune of [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main) by the research team at [Dashtoon](https://dashtoon.com/create). Please see our [blog](https://insiders.dashtoon.com/dashanimexl/) for more details.

## Model Description

- **Developed by:** [Dashtoon](https://dashtoon.com/create)
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** DashAnimeXL V1 is engineered to generate high-quality anime images from textual prompts. It features improved hand anatomy, stronger concept understanding, and better prompt interpretation.
- **Summary:** This model generates images from text prompts and uses the same architecture as [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
- **Finetuned from model:** [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main)

## Using the model with 🧨 Diffusers

To use DashAnimeXL V1, install the required libraries:

```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```

Example script for generating images with DashAnimeXL V1:

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# Load the fp16-fix VAE to avoid numerical issues at reduced precision
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.bfloat16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "dashtoon/DashAnimeXL-V1",
    vae=vae,
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

if torch.cuda.is_available():
    pipe.to('cuda')

# Define prompts and generate the image
prompt = "anime illustration, An ink painting with a superhot, pop art style, featuring vibrant splashes and gradient patterns merging with random signals and noise. A zoomed-in panda wearing glasses, appearing to look directly at the viewer. The piece is bathed in warm, volumetric lighting against a clear dusk sky background. The reflection in the panda's sunglasses reveals nuclear clouds, adding an element of surrealism."
negative_prompt = "nsfw, low quality, worst quality, very displeasing, 3d, watermark, signature, ugly, poorly drawn"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=20
).images[0]
```
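
If you want reproducible outputs, the pipeline call also accepts a `generator` argument. Below is a minimal sketch continuing the script above; the seed value and the `panda.png` filename are arbitrary choices for illustration, and `enable_model_cpu_offload()` is an optional memory-saving alternative to moving the whole pipeline to CUDA.

```python
# Optional: on GPUs with limited VRAM, offload submodules to CPU between steps
# instead of calling pipe.to('cuda') above.
# pipe.enable_model_cpu_offload()

# Fix the random seed for reproducible generations (seed 42 is arbitrary)
device = "cuda" if torch.cuda.is_available() else "cpu"
generator = torch.Generator(device=device).manual_seed(42)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=20,
    generator=generator,
).images[0]

# Save the resulting PIL image to disk (hypothetical output filename)
image.save("panda.png")
```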