
Style Enhancer XL LoRA

Sample outputs: sample1–sample4

Overview

Style Enhancer XL LoRA is a high-resolution LoRA (Low-Rank Adaptation) adapter designed to enhance the capabilities of Animagine XL 2.0. It refines anime-style images with improved quality and detail, integrates with the Stable Diffusion XL framework, and supports Danbooru tags for precise, controllable image generation.

Example tags include face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck.


Model Details

  • Developed by: Linaqruf
  • Model type: LoRA adapter for Stable Diffusion XL
  • Model Description: A compact yet powerful adapter designed to augment and enhance the output of Animagine XL 2.0. It improves the style and quality of anime-themed images and can also reproduce the distinct 'old-school' art style of SD 1.5, making it well suited for generating high-fidelity, anime-inspired visual content.
  • License: CreativeML Open RAIL++-M License
  • Finetuned from model: Animagine XL 2.0

🧨 Diffusers Installation

Install the latest diffusers library along with the other required packages:

pip install diffusers --upgrade
pip install transformers accelerate safetensors
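
Optionally, verify that the packages import correctly before running the pipeline (a quick sanity check; the printed version will depend on your installation):

import diffusers
import transformers
import accelerate
import safetensors

# Print the installed diffusers version to confirm the upgrade took effect
print(diffusers.__version__)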

The following Python script demonstrates how to use Style Enhancer XL LoRA with Animagine XL 2.0. The pipeline's default scheduler is EulerAncestralDiscreteScheduler, but it is set explicitly below for clarity.

import torch
from diffusers import (
    StableDiffusionXLPipeline, 
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# LoRA repository ID and weight filename
lora_model_id = "Linaqruf/style-enhancer-xl-lora"
lora_filename = "style-enhancer-xl.safetensors"

# Load the fp16-fix VAE to avoid numerical issues when decoding in float16
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", 
    torch_dtype=torch.float16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Linaqruf/animagine-xl-2.0", 
    vae=vae,
    torch_dtype=torch.float16, 
    use_safetensors=True, 
    variant="fp16"
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')

# Load and fuse LoRA weights
pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
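# lora_scale controls how strongly the adapter modifies the base weights (0.0 = no effect, 1.0 = full strength)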
pipe.fuse_lora(lora_scale=0.6)

# Define prompts and generate image
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"

image = pipe(
    prompt, 
    negative_prompt=negative_prompt, 
    width=1024,
    height=1024,
    guidance_scale=12,
    num_inference_steps=50
).images[0]

# Unfuse LoRA to restore the original base model weights, then save the image
pipe.unfuse_lora()
image.save("anime_girl.png")
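
Since unfuse_lora restores the original base weights while keeping the adapter loaded, you can fuse again at a different scale to compare how strongly the style is applied. A minimal sketch, assuming the pipeline from the script above is still in memory (the scale value and output filename are illustrative):

# Re-fuse the adapter at a stronger (illustrative) scale and generate another image
pipe.fuse_lora(lora_scale=0.8)
image_strong = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=12,
    num_inference_steps=50
).images[0]
pipe.unfuse_lora()
image_strong.save("anime_girl_strong_style.png")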