---
license: openrail++
tags:
  - text-to-image
  - PixArt-Σ
---

# 🐱 PixArt-Σ Model Card

*(Sample generations and the model/pipeline figures from the original card are omitted here.)*

PixArt-Σ consists of pure transformer blocks for latent diffusion: it can directly generate 1024px, 2K, and 4K images from text prompts within a single sampling process.

Source code is available at https://github.com/PixArt-alpha/PixArt-sigma.

## Model Description

### Model Sources

For research purposes, we recommend our PixArt-sigma GitHub repository (https://github.com/PixArt-alpha/PixArt-sigma), which is better suited to both training and inference, and to which the most advanced diffusion samplers, such as SA-Solver, will be added over time. Hugging Face provides free PixArt-Σ inference.

### 🧨 Diffusers

Make sure to upgrade `diffusers` to >= 0.28.0:

```bash
pip install -U diffusers
```

In addition, make sure to install `transformers`, `safetensors`, `sentencepiece`, and `accelerate`:

```bash
pip install transformers accelerate safetensors sentencepiece
```

For diffusers<0.28.0, check this script for help.
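
If you want to confirm that the installed version meets the requirement before running the examples below, a quick check such as this works (a minimal sketch; `packaging` normally ships as a dependency of the libraries above):

```python
# Sanity check (assumption: `packaging` is available, which it usually is
# as a dependency of diffusers/transformers).
from packaging import version
import diffusers

print(diffusers.__version__)
assert version.parse(diffusers.__version__) >= version.parse("0.28.0"), \
    "PixArtSigmaPipeline requires diffusers >= 0.28.0"
```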

To just use the base model, you can run:

```python
import torch
from diffusers import PixArtSigmaPipeline

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
weight_dtype = torch.float16

# Load the PixArt-Σ pipeline in half precision.
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=weight_dtype,
    use_safetensors=True,
)
pipe.to(device)

# Enable memory optimizations.
# pipe.enable_model_cpu_offload()

prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt).images[0]
image.save("./cactus.png")
```
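
The pipeline also accepts the usual Diffusers generation arguments. The values below are illustrative only, not recommendations from the PixArt-Σ authors:

```python
# Illustrative settings (assumptions, not official defaults): fewer steps
# trade quality for speed, and a seeded generator makes the output reproducible.
generator = torch.Generator(device=device).manual_seed(0)
image = pipe(
    prompt,
    num_inference_steps=20,
    guidance_scale=4.5,
    generator=generator,
).images[0]
```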

When using torch >= 2.0, you can improve the inference speed by 20-30% with torch.compile. Simply wrap the transformer with torch.compile before running the pipeline:

```python
pipe.transformer = torch.compile(pipe.transformer, mode="reduce-overhead", fullgraph=True)
```
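
Note that the first pipeline call after compiling pays the compilation cost. A short warm-up run (a minimal sketch, not part of the official example) keeps that cost out of your timed generations:

```python
# The first call after torch.compile triggers compilation, so run a cheap
# warm-up before measuring speed or serving requests.
_ = pipe("warm-up", num_inference_steps=2).images[0]
image = pipe(prompt).images[0]
```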

If you are limited by GPU VRAM, you can enable CPU offloading by calling `pipe.enable_model_cpu_offload()` instead of `.to("cuda")`:

```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
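
Put together, a low-VRAM variant of the example above could look like this (a sketch assuming the same checkpoint and prompt as before):

```python
import torch
from diffusers import PixArtSigmaPipeline

# Load in half precision but keep weights on the CPU; submodules are moved
# to the GPU only while they are actually needed.
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.enable_model_cpu_offload()  # instead of pipe.to("cuda")

prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt).images[0]
image.save("./cactus.png")
```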

For more information on how to use PixArt-Σ with diffusers, please have a look at the PixArt-Σ Docs.

## Uses

### Direct Use

The model is intended for research purposes only. Possible research areas and tasks include:

- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for its abilities.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model struggles with more difficult compositional tasks, such as rendering an image corresponding to "A red cube on top of a blue sphere".
- Fingers, hands, etc. in general may not be generated properly.
- The autoencoding part of the model is lossy.

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.