Image-to-Video Generation with PIA (Personalized Image Animator)
Overview
PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, and Kai Chen.
The abstract from the paper is:
Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motion to these personalized images via text poses significant challenges in preserving distinct styles and high-fidelity details while achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment and allows for a stronger focus on aligning with motion-related guidance.
Available Pipelines
| Pipeline | Tasks | Demo |
| --- | --- | --- |
| PIAPipeline | Image-to-Video Generation with PIA | |
Available checkpoints
Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5.
Usage example
PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules responsible for adding coherent motion across image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet with a 9-channel input convolution layer (a small sanity check for this is shown after the example below).
The following example demonstrates how to use PIA to generate a video from a single image.
```python
import torch
from diffusers import (
    EulerDiscreteScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

# Load the PIA motion adapter and plug it into a Stable Diffusion 1.5-based model.
adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Memory saving options
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

# Condition image to animate.
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality"

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-animation.gif")
```
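As a quick sanity check of the architecture change described above, you can inspect the input convolution of the loaded UNet. This is only a sketch: the `conv_in` attribute name follows the usual diffusers UNet layout and is an assumption here rather than part of the documented PIA API.

```python
# The UNet's input convolution is expected to accept 9 channels, per the
# 9-channel input layer described above (standard SD 1.5 UNets accept 4).
print(pipe.unet.conv_in.in_channels)
```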
If you plan on using a scheduler that can clip samples, make sure to disable it by setting `clip_sample=False` in the scheduler, as sample clipping can have an adverse effect on the generated output. Additionally, the PIA checkpoints can be sensitive to the scheduler's beta schedule; we recommend setting it to `linear`.
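For example, with a DDIM scheduler the recommendation above can be applied while recreating the scheduler from the pipeline's config. This is a minimal sketch that reuses the `pipe` object from the example above:

```python
from diffusers import DDIMScheduler

# Disable sample clipping and use the recommended linear beta schedule for PIA checkpoints.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, beta_schedule="linear"
)
```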
Using FreeInit
FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu.
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter, and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper.
The following example demonstrates the usage of FreeInit.
```python
import torch
from diffusers import (
    DDIMScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter)

# Enable FreeInit
# Refer to the enable_free_init documentation for a full list of configurable parameters
pipe.enable_free_init(method="butterworth", use_fast_sampling=True)

# Memory saving options
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality"

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-freeinit-animation.gif")
```
FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times, depending on the `num_iters` parameter that is set when enabling it. Setting the `use_fast_sampling` parameter to `True` can improve the overall performance at the cost of lower quality compared to `use_fast_sampling=False`, but still with better results than vanilla video generation models.
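As a rough illustration of this trade-off, both the number of FreeInit iterations and the fast-sampling behavior can be tuned when enabling it, and FreeInit can be switched off again afterwards. The exact defaults and full parameter list are documented in enable_free_init; treat the values below as an example rather than a recommendation:

```python
# Fewer iterations means less extra sampling cost; use_fast_sampling trades quality for speed.
pipe.enable_free_init(method="butterworth", num_iters=3, use_fast_sampling=True)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)

# Turn FreeInit off again for subsequent runs.
pipe.disable_free_init()
```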
PIAPipeline
class diffusers.PIAPipeline
< source >( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (`CLIPTextModel`) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (`CLIPTokenizer`) — A CLIPTokenizer to tokenize text.
- unet (UNet2DConditionModel) — A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents.
- motion_adapter (`MotionAdapter`) — A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
Pipeline for image-to-video generation using PIA.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods (a brief LoRA-loading sketch follows this list):
- load_textual_inversion() for loading textual inversion embeddings
- load_lora_weights() for loading LoRA weights
- save_lora_weights() for saving LoRA weights
- load_ip_adapter() for loading IP Adapters
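For example, a personalized LoRA trained for Stable Diffusion 1.5 could be layered on top of the pipeline before generation. The repository id and weight file name below are hypothetical placeholders for illustration, not real checkpoints:

```python
# Hypothetical LoRA checkpoint; replace with a real SD 1.5 LoRA repository and weight name.
pipe.load_lora_weights("your-username/your-sd15-lora", weight_name="pytorch_lora_weights.safetensors")
```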
__call__
< source >( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → PIAPipelineOutput or tuple
Parameters
- image (`PipelineImageInput`) — The input image to be used for video generation.
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- strength (`float`, optional, defaults to 1.0) — Indicates the extent to transform the reference `image`. Must be between 0 and 1.
- height (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated video.
- width (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated video.
- num_frames (`int`, optional, defaults to 16) — The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second amounts to 2 seconds of video.
- num_inference_steps (`int`, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality video at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- eta (`float`, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, optional) — A `torch.Generator` to make generation deterministic.
- latents (`torch.Tensor`, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. Latents should be of shape `(batch_size, num_channel, num_frames, height, width)`.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- ip_adapter_image (`PipelineImageInput`, optional) — Optional image input to work with IP Adapters.
- ip_adapter_image_embeds (`List[torch.Tensor]`, optional) — Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- motion_scale (`int`, optional, defaults to 0) — Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific ranges of values control the type of motion that is added. Must be between 0 and 8. Set between 0-2 to only increase the amount of motion. Set between 3-5 to create looping motion. Set between 6-8 to perform motion with image style transfer (see the sketch after the example below).
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image`, or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a PIAPipelineOutput instead of a plain tuple.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- clip_skip (`int`, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
- callback_on_step_end (`Callable`, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- callback_on_step_end_tensor_inputs (`List`, optional) — The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class.
Returns
PIAPipelineOutput or `tuple`
If `return_dict` is `True`, PIAPipelineOutput is returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.
The call function to the pipeline for generation.
Examples:
```python
>>> import torch
>>> from diffusers import (
...     EulerDiscreteScheduler,
...     MotionAdapter,
...     PIAPipeline,
... )
>>> from diffusers.utils import export_to_gif, load_image

>>> adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter)
>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
... )
>>> image = image.resize((512, 512))
>>> prompt = "cat in a hat"
>>> negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality, deformed, distorted, disfigured, bad eyes, wrong lips, weird mouth, bad teeth, mutated hands and fingers, bad anatomy, wrong anatomy, amputation, extra limb, missing limb, floating limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg"
>>> generator = torch.Generator("cpu").manual_seed(0)
>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
>>> frames = output.frames[0]
>>> export_to_gif(frames, "pia-animation.gif")
```
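As a hedged illustration of the `motion_scale` ranges documented above, a looping animation could be requested by passing a value in the 3-5 range, reusing the objects from the example:

```python
# motion_scale values of 3-5 are documented to produce looping motion.
output = pipe(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    generator=generator,
    motion_scale=3,
)
export_to_gif(output.frames[0], "pia-looping-animation.gif")
```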
encode_prompt
< source >( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
Parameters
- prompt (`str` or `List[str]`, optional) — prompt to be encoded
- device (`torch.device`) — torch device
- num_images_per_prompt (`int`) — number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`) — whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- lora_scale (`float`, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (`int`, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
Encodes the prompt into text encoder hidden states.
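As a sketch of how this could be used directly, the embeddings produced here can be passed back to the pipeline via `prompt_embeds` and `negative_prompt_embeds`. The `(prompt_embeds, negative_prompt_embeds)` return order and the device handling below are assumptions based on other Stable Diffusion-family pipelines in diffusers, not guarantees from this documentation:

```python
import torch

pipe.to("cuda")  # assumes a CUDA device and enough memory; skip if using CPU offloading

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="cat in a field",
    device=torch.device("cuda"),
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="worst quality, low quality",
)

# Precomputed embeddings replace the plain-text prompt arguments in the pipeline call.
output = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    generator=generator,
)
```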
- enable_freeu
- disable_freeu
- enable_free_init
- disable_free_init
- enable_vae_slicing
- disable_vae_slicing
- enable_vae_tiling
- disable_vae_tiling
PIAPipelineOutput
class diffusers.pipelines.pia.PIAPipelineOutput
< source >( frames: Union )
Parameters
- frames (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`) — Nested list of length `batch_size` with denoised PIL image sequences of length `num_frames`, NumPy array of shape `(batch_size, num_frames, channels, height, width)`, or Torch tensor of shape `(batch_size, num_frames, channels, height, width)`.
Output class for PIAPipeline.