Tune-A-Video

Overview

Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation is by Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou.

The abstract of the paper is the following:

To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T2V) generator. Despite their promising results, such paradigm is computationally expensive. In this work, we propose a new T2V generation setting—One-Shot Video Tuning, where only one text-video pair is presented. Our model is built on state-of-the-art T2I diffusion models pre-trained on massive image data. We make two key observations: 1) T2I models can generate still images that represent verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we introduce Tune-A-Video, which involves a tailored spatio-temporal attention mechanism and an efficient one-shot tuning strategy. At inference, we employ DDIM inversion to provide structure guidance for sampling. Extensive qualitative and numerical experiments demonstrate the remarkable ability of our method across various applications.

Resources:

  • Paper: https://arxiv.org/abs/2212.11565
  • Original codebase: https://github.com/showlab/Tune-A-Video

Available Pipelines:

Pipeline           | Tasks                    | Demo
TuneAVideoPipeline | Text-to-Video Generation | 🤗 Spaces

Usage example

Loading with a pre-existing Text2Image checkpoint

import torch
from diffusers import TuneAVideoPipeline, DDIMScheduler, UNet3DConditionModel
from diffusers.utils import export_to_video
from PIL import Image

# Use any pretrained Text2Image checkpoint based on Stable Diffusion
pretrained_model_path = "nitrosocke/mo-di-diffusion"

# Load the fine-tuned spatio-temporal UNet from the Tune-A-Video checkpoint
unet = UNet3DConditionModel.from_pretrained(
    "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Tune-A-Video samples with DDIM, so load the DDIM scheduler from the base checkpoint
scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler")

pipe = TuneAVideoPipeline.from_pretrained(
    pretrained_model_path, unet=unet, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

prompt = "A princess playing a guitar, modern disney style"
generator = torch.Generator(device="cuda").manual_seed(42)

video_frames = pipe(prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np").frames

# Saving to gif. PIL's `duration` is the display time per frame in milliseconds,
# so 1000 / 8 plays the animation back at 8 frames per second.
pil_frames = [Image.fromarray(frame) for frame in video_frames]
frame_duration_ms = 1000 / 8
pil_frames[0].save(
    "animation.gif",
    save_all=True,
    append_images=pil_frames[1:],  # append the remaining frames
    duration=frame_duration_ms,
    loop=0,  # loop the gif forever
)

# Saving to video
video_path = export_to_video(video_frames)

Loading a saved Tune-A-Video checkpoint

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", torch_dtype=torch.float16
).to("cuda")

prompt = "A princess playing a guitar, modern disney style"
generator = torch.Generator(device="cuda").manual_seed(42)

video_frames = pipe(prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np").frames

# Saving to gif. PIL's `duration` is the display time per frame in milliseconds,
# so 1000 / 8 plays the animation back at 8 frames per second.
pil_frames = [Image.fromarray(frame) for frame in video_frames]
frame_duration_ms = 1000 / 8
pil_frames[0].save(
    "animation.gif",
    save_all=True,
    append_images=pil_frames[1:],  # append the remaining frames
    duration=frame_duration_ms,
    loop=0,  # loop the gif forever
)

# Saving to video
video_path = export_to_video(video_frames)

Here are some sample outputs:

(Sample output video) A princess playing a guitar, modern disney style

Available checkpoints

Checkpoints fine-tuned with Tune-A-Video are hosted on the Hub under the Tune-A-Video-library organization, for example Tune-A-Video-library/df-cpt-mo-di-bear-guitar, which is used in the examples above.

TuneAVideoPipeline

class diffusers.TuneAVideoPipeline


( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] )

Parameters

  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode videos to and from latent representations.
  • text_encoder (CLIPTextModel) — Frozen text encoder.
  • tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
  • unet (UNet3DConditionModel) — A UNet3DConditionModel to denoise the encoded video latents.
  • scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded video latents. Can be one of DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, or DPMSolverMultistepScheduler.

Pipeline for text-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
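Because the pipeline inherits from DiffusionPipeline, those generic utilities apply directly. Below is a minimal sketch (not part of the original docs) using the checkpoint from the usage examples above:

import torch
from diffusers import DiffusionPipeline

# Load a saved Tune-A-Video checkpoint, move it to a device, and save it locally.
pipe = DiffusionPipeline.from_pretrained(
    "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # run on a particular device
pipe.save_pretrained("./my-tune-a-video")  # save all pipeline components locally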

__call__


( prompt: typing.Union[str, typing.List[str]] video_length: int = 8 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_videos_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'np' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None ) ~pipelines.stable_diffusion.TuneAVideoPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the video generation. If not defined, one has to pass prompt_embeds instead.
  • video_length (int, optional, defaults to 8) — The number of video frames that are generated. Defaults to 8 frames, which at 8 frames per second amounts to 1 second of video.
  • height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated video.
  • width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated video.
  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to higher quality videos at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2. of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate videos that are closely linked to the text prompt, usually at the expense of lower video quality.
  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the video generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
  • num_videos_per_prompt (int, optional, defaults to 1) — Number of videos to be generated for each input prompt.
  • eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
  • output_type (str, optional, defaults to "np") — The output format of the generated video. Choose between torch.FloatTensor or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a TuneAVideoPipelineOutput instead of a plain tuple.
  • callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.cross_attention.

Returns

~pipelines.stable_diffusion.TuneAVideoPipelineOutput or tuple

~pipelines.stable_diffusion.TuneAVideoPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated frames.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch

>>> from diffusers import UNet3DConditionModel, TuneAVideoPipeline, DDIMScheduler
>>> from diffusers.utils import export_to_video

>>> pretrained_model_path = "nitrosocke/mo-di-diffusion"
>>> scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler")
>>> unet = UNet3DConditionModel.from_pretrained(
...     "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", subfolder="unet", torch_dtype=torch.float16
... ).to("cuda")
>>> pipe = TuneAVideoPipeline.from_pretrained(
...     pretrained_model_path, unet=unet, scheduler=scheduler, torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "a magical princess is playing guitar, modern disney style"
>>> video_frames = pipe(
...     prompt,
...     video_length=4,
...     height=256,
...     width=256,
...     num_inference_steps=38,
...     guidance_scale=12.5,
...     output_type="np",
... ).frames

>>> video_path = export_to_video(video_frames)
>>> video_path
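The documented callback, callback_steps, and return_dict arguments can be combined as in the following minimal sketch, which assumes the pipe and prompt objects created in the example above:

def log_step(step, timestep, latents):
    # Called every `callback_steps` denoising steps with the current latents.
    print(f"step {step} (timestep {timestep}): latents {tuple(latents.shape)}")

frames = pipe(
    prompt,
    video_length=4,
    num_inference_steps=25,
    negative_prompt="blurry, low quality",
    callback=log_step,
    callback_steps=5,
    return_dict=False,  # return a plain tuple; the first element holds the generated frames
)[0]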

disable_vae_slicing


( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling


( )

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing


( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling


( )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
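The four VAE helpers above can be toggled around a generation to trade speed for memory. A minimal sketch, assuming the pipe created in the usage examples above:

# Decode in slices/tiles to reduce peak memory during the VAE decode.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

video_frames = pipe("A princess playing a guitar, modern disney style", video_length=3).frames

# Restore single-step decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()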

encode_prompt


( prompt device num_videos_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None )

Parameters

  • prompt (str or List[str], optional) — The prompt to be encoded.
  • device (torch.device) — The torch device.
  • num_videos_per_prompt (int) — The number of videos that should be generated per prompt.
  • do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
  • lora_scale (float, optional) — A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

Encodes the prompt into text encoder hidden states.
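Prompt embeddings can be pre-computed once and reused across calls. The sketch below is an assumption-laden illustration: it assumes encode_prompt returns a (prompt_embeds, negative_prompt_embeds) pair, as in other diffusers pipelines exposing this signature, and reuses the pipe from the usage examples above:

# Hypothetical usage sketch: pre-compute embeddings, then pass them to the pipeline.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "A princess playing a guitar, modern disney style",
    device="cuda",
    num_videos_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

video_frames = pipe(
    prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    video_length=3,
).frames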