---
license: mit
language:
  - en
library_name: diffusers
---

# Stable Flash Lightning 🌩

*(Combined sample images generated by the model.)*

## Model Details

### Model Description

Stable-Flash-Lightning is a text-to-image model that combines the strengths of three diffusion models: stabilityai/stable-diffusion-xl-base-1.0, sd-community/sdxl-flash-lora, and ByteDance/SDXL-Lightning. Merging them aims to produce highly realistic and detailed images from textual descriptions, with the combined capabilities of the three models yielding high-quality output with intricate detail and vivid realism.

## Example Usage

```python
import torch
from diffusers import DiffusionPipeline

# Load the pipeline (fp16 on a GPU is recommended for SDXL-sized models)
pipeline = DiffusionPipeline.from_pretrained(
    "Chan-Y/Stable-Flash-Lightning",
    torch_dtype=torch.float16,
).to("cuda")

# Define the prompt and negative prompt
prompt = """an ultra-realistic cute little rabbit with big green eyes
that wears a hat"""
neg = "low quality, blur"

# Seed a generator for reproducibility
generator = torch.Generator("cuda").manual_seed(1521)

# Generate the image
image = pipeline(
    prompt,
    negative_prompt=neg,
    cross_attention_kwargs={"scale": 1.0},  # LoRA scale
    num_inference_steps=50,
    generator=generator,
).images[0]

# Downscale for display; in a notebook, the last expression shows the image
image = image.resize((256, 256))
image
```

![Example output](imgs/img05_256.png)

## Model Performance

The model performs exceptionally well in generating ultra-realistic images with intricate details. The merged architecture allows it to handle complex prompts and produce images with high fidelity. The negative prompt capability helps in refining the output by avoiding undesirable qualities.
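
Because the usage example above already fixes a seed, the effect of the negative prompt is easy to inspect: re-run the same seeded prompt with and without it and compare the two outputs. A minimal sketch (the prompt text and output file names below are illustrative, not taken from this card):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Chan-Y/Stable-Flash-Lightning", torch_dtype=torch.float16
).to("cuda")

prompt = "an ultra-realistic portrait of an elderly fisherman, detailed skin texture"

# The same seed with and without the negative prompt isolates its effect.
for neg in (None, "low quality, blur, deformed hands"):
    generator = torch.Generator("cuda").manual_seed(1521)
    image = pipe(
        prompt,
        negative_prompt=neg,
        num_inference_steps=50,
        generator=generator,
    ).images[0]
    image.save("with_negative.png" if neg else "baseline.png")
```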

## Merging Process

The model was created by merging the safetensors of sd-community/sdxl-flash-lora and ByteDance/SDXL-Lightning with the base model stabilityai/stable-diffusion-xl-base-1.0. No further fine-tuning was performed after the merging process. This approach combines the unique features and strengths of each model, resulting in a versatile and powerful text-to-image generation tool.
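
This card does not include the exact merge script, but a comparable merge can be sketched with diffusers' LoRA utilities: load the base pipeline, attach both LoRAs as adapters, fuse them into the base weights, and save the result. The adapter weights, the `lora_scale`, and the SDXL-Lightning weight file name below are assumptions for illustration, not the recipe used to produce this checkpoint.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Start from the SDXL base weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Attach the two LoRAs as named adapters
# (the SDXL-Lightning weight_name is an assumed file name).
pipe.load_lora_weights("sd-community/sdxl-flash-lora", adapter_name="flash")
pipe.load_lora_weights(
    "ByteDance/SDXL-Lightning",
    weight_name="sdxl_lightning_8step_lora.safetensors",  # assumption
    adapter_name="lightning",
)

# Blend the adapters, fuse them into the base weights, and drop the LoRA modules.
pipe.set_adapters(["flash", "lightning"], adapter_weights=[1.0, 1.0])
pipe.fuse_lora(adapter_names=["flash", "lightning"], lora_scale=1.0)
pipe.unload_lora_weights()

# Save the merged pipeline so it can be reloaded like any other checkpoint.
pipe.save_pretrained("Stable-Flash-Lightning-merged")
```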

## Intended Use

The model is intended for creative and artistic purposes, enabling users to generate high-quality images from textual descriptions. It can be used in various applications such as digital art, content creation, and visualization.

## Limitations

- The model may not always perfectly capture highly complex or abstract concepts.
- The quality of the output can be influenced by the specificity and clarity of the prompt.
- Ethical considerations should be taken into account when generating images to avoid misuse.

## Contact Information

For any queries or further information, please reach out via LinkedIn.