Text-to-video synthesis

Text-to-video synthesis from ModelScope is structurally similar to Stable Diffusion, but extended from static images to videos. More specifically, this system generates a video that matches a natural language text prompt.

From the model summary:

This model is based on a multi-stage text-to-video generation diffusion model, which inputs a description text and returns a video that matches the text description. Only English input is supported.


Available Pipelines:

| Pipeline | Tasks | Demo |
| --- | --- | --- |
| DiffusionPipeline | Text-to-Video Generation | [Spaces] (TODO) |

Usage example

Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps):

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the fp16 variant of the ModelScope text-to-video checkpoint
pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to("cuda")

prompt = "Spiderman is surfing"
video_frames = pipe(prompt).frames
# Write the frames to an mp4 file and return its path
video_path = export_to_video(video_frames)
video_path

Diffusers supports different optimization techniques to improve the latency and memory footprint of a pipeline. Since videos are often more memory-heavy than images, we can enable CPU offloading and VAE slicing to keep the memory footprint at bay.

Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
# Offload submodules to the CPU when they are idle to reduce GPU memory usage
pipe.enable_model_cpu_offload()

# Decode the latent frames in slices to reduce peak VAE memory
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames
video_path = export_to_video(video_frames)
video_path

It takes just 7 GB of GPU memory to generate the 64 video frames using PyTorch 2.0, "fp16" precision, and the techniques mentioned above.
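To verify the footprint on your own hardware, you can read PyTorch's peak-memory counter around the generation call (a minimal sketch that continues the snippet above, assuming a CUDA device):

import torch

# Reset the peak-memory statistics before generation...
torch.cuda.reset_peak_memory_stats()

video_frames = pipe(prompt, num_frames=64).frames

# ...then read the high-water mark afterwards
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")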

We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion:

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
# Swap in the multistep DPM-Solver scheduler, reusing the current scheduler's config
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames
video_path = export_to_video(video_frames)
video_path

Here are some sample outputs:

An astronaut riding a horse.

Darth Vader surfing in waves.

Available checkpoints

  • damo-vilab/text-to-video-ms-1.7b (used in the examples above)

DiffusionPipeline

class diffusers.DiffusionPipeline


( )

Base class for all pipelines.

DiffusionPipeline takes care of storing all components (models, schedulers, processors) for diffusion pipelines and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to:

  • move all PyTorch modules to the device of your choice
  • enable or disable the progress bar for the denoising iteration

Class attributes:

  • config_name (str) — name of the config file that stores the class and module names of all components of the diffusion pipeline.
  • _optional_components (List[str]) — list of all components that are optional, so they don't have to be passed for the pipeline to function (should be overridden by subclasses).

__call__

( *args **kwargs )

Call self as a function.

disable_attention_slicing


( )

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.

disable_xformers_memory_efficient_attention


( )

Disable memory efficient attention as implemented in xformers.

download


( pretrained_model_name **kwargs )

Parameters

  • pretrained_model_name (str or os.PathLike, optional) — Should be a string, the repo id of a pretrained pipeline hosted inside a model repo on https://huggingface.co/. Valid repo ids have to be located under a user or organization name, like CompVis/ldm-text2im-large-256.

Download and cache a PyTorch diffusion pipeline from pre-trained pipeline weights.

  • custom_pipeline (str, optional) —

    This is an experimental feature and is likely to change in the future.

    Can be either the repo id of a custom pipeline hosted on the Hub, the file name of a community pipeline hosted on GitHub, or a path to a local directory containing a custom pipeline; see the custom_pipeline parameter of from_pretrained() below for the full description of each option.

    For more information on how to load and create custom pipelines, please have a look at Loading and Adding Custom Pipelines.

  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
  • use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • custom_revision (str, optional, defaults to "main" when loading from the Hub and to the local version of diffusers when loading from GitHub) — The specific model version to use. It can be a branch name, a tag name, or a commit id similar to revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a custom pipeline from GitHub.
  • mirror (str, optional) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
  • variant (str, optional) — If specified, load weights from a variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_flax.

It is required to be logged in (huggingface-cli login) when you want to use private or gated models.
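A minimal usage sketch (assuming, as in the diffusers source, that download() returns the path of the cached pipeline folder):

>>> from diffusers import DiffusionPipeline

>>> # Fetch and cache the fp16 weights without loading the pipeline into memory
>>> cached_folder = DiffusionPipeline.download("damo-vilab/text-to-video-ms-1.7b", variant="fp16")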

enable_attention_slicing


( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

  • slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
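For example, with the text-to-video pipeline from the usage section above (a minimal sketch; "auto" is the default slice size):

>>> import torch
>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
>>> pipe = pipe.to("cuda")
>>> # Split attention computation into slices to lower peak memory
>>> pipe.enable_attention_slicing()
>>> # Revert to computing attention in a single step
>>> pipe.disable_attention_slicing()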

enable_xformers_memory_efficient_attention


( attention_op: typing.Optional[typing.Callable] = None )

Parameters

  • attention_op (Callable, optional) — Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention as implemented in xformers.

When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.

Warning: when memory efficient attention and sliced attention are both enabled, memory efficient attention is used.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: the VAE's attention shapes are not accepted by the Flash Attention op, so fall back to the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

from_pretrained


( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike, optional) — Can be either:

    • A string, the repo id of a pretrained pipeline hosted inside a model repo on https://huggingface.co/. Valid repo ids have to be located under a user or organization name, like CompVis/ldm-text2im-large-256.
    • A path to a directory containing pipeline weights saved using save_pretrained(), e.g., ./my_pipeline_directory/.
  • torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype will be automatically derived from the model’s weights.
  • custom_pipeline (str, optional) —

    This is an experimental feature and is likely to change in the future.

    Can be either:

    • A string, the repo id of a custom pipeline hosted inside a model repo on https://huggingface.co/. Valid repo ids have to be located under a user or organization name, like hf-internal-testing/diffusers-dummy-pipeline.

      It is required that the model repo has a file, called pipeline.py that defines the custom pipeline.

    • A string, the file name of a community pipeline hosted on GitHub under https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to match exactly the file name without .py located under the above link, e.g. clip_guided_stable_diffusion.

      Community pipelines are always loaded from the current main branch of GitHub.

    • A path to a directory containing a custom pipeline, e.g., ./my_pipeline_directory/.

      It is required that the directory has a file, called pipeline.py that defines the custom pipeline.

    For more information on how to load and create custom pipelines, please have a look at Loading and Adding Custom Pipelines

  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
  • use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  • custom_revision (str, optional, defaults to "main" when loading from the Hub and to local version of diffusers when loading from GitHub) — The specific model version to use. It can be a branch name, a tag name, or a commit id similar to revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a custom pipeline from GitHub.
  • mirror (str, optional) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
  • device_map (str or Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device.

    To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For more information about each option see designing a device map.

  • low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by not initializing the weights and only loading the pre-trained weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, setting this argument to True will raise an error.
  • use_safetensors (bool, optional) — If set to True, the pipeline will be loaded from safetensors weights. If set to None (the default), the pipeline loads from safetensors weights if they are available and if the safetensors library is installed. If set to False, the pipeline will not use safetensors.
  • kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load- and saveable variables, i.e. the pipeline components, of the specific pipeline class. The overwritten components are then directly passed to the pipeline's __init__ method. See the example below for more information.
  • variant (str, optional) — If specified, load weights from a variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_flax.

Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights.

The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated).

The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.

The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.

It is required to be logged in (huggingface-cli login) when you want to use private or gated models, e.g. "runwayml/stable-diffusion-v1-5"

Activate the special “offline-mode” to use this method in a firewalled environment.

Examples:

>>> from diffusers import DiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")

>>> # Download pipeline that requires an authorization token
>>> # For more information on access tokens, please refer to this section
>>> # of the documentation: https://huggingface.co/docs/hub/security-tokens
>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

>>> # Use a different scheduler
>>> from diffusers import LMSDiscreteScheduler

>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.scheduler = scheduler
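
Pipeline components can also be overridden directly at load time by passing them as keyword arguments, per the kwargs parameter above (a short sketch using the scheduler component):

>>> from diffusers import DiffusionPipeline, LMSDiscreteScheduler

>>> # Load a scheduler separately, then pass it in to replace the repo's default component
>>> scheduler = LMSDiscreteScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", scheduler=scheduler)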

numpy_to_pil


( images )

Convert a numpy image or a batch of images to a PIL image.
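
A minimal sketch with the pipeline loaded above (assuming the usual layout of float arrays in [0, 1] with shape (batch, height, width, 3)):

>>> import numpy as np

>>> # Two dummy RGB images as a float array in [0, 1]
>>> images = np.random.rand(2, 64, 64, 3)
>>> pil_images = pipeline.numpy_to_pil(images)
>>> pil_images[0].save("image_0.png")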

save_pretrained


( save_directory: typing.Union[str, os.PathLike] safe_serialization: bool = False variant: typing.Optional[str] = None )

Parameters

  • save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn’t exist.
  • safe_serialization (bool, optional, defaults to False) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).
  • variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin.

Save all variables of the pipeline that can be saved and loaded as well as the pipeline's configuration file to a directory. A pipeline variable can be saved and loaded if its class implements both a save and a loading method. The pipeline can easily be re-loaded using the from_pretrained() class method.
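
For instance, to round-trip the text-to-video pipeline through a local directory (a minimal sketch; the directory name is arbitrary):

>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b")
>>> # Save all saveable components plus the pipeline configuration
>>> pipe.save_pretrained("./text-to-video-pipeline")
>>> # Re-load the pipeline from the local folder
>>> pipe = DiffusionPipeline.from_pretrained("./text-to-video-pipeline")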