
Gallery

For more demos and corresponding prompts, see the Allegro Gallery.

Key Features

  • Open Source: Full model weights and code are available to the community under the Apache 2.0 license.
  • Versatile Content Creation: Capable of generating a wide range of content, from close-ups of humans and animals to diverse dynamic scenes.
  • Text-Image-to-Video Generation: Generate videos from user-provided prompts and images. Supported input types include:
    • Generating subsequent video content from a user prompt and first frame image.
    • Generating intermediate video content from a user prompt and both first and last frame images.
  • High-Quality Output: Generate detailed 6-second videos at 15 FPS with 720x1280 resolution, which can be interpolated to 30 FPS with EMA-VFI.
  • Small and Efficient: Features a 175M parameter VideoVAE and 2.8B parameter VideoDiT model. Supports multiple precisions (FP32, BF16, FP16) and uses 9.3 GB GPU memory in BF16 mode with CPU offloading. Context length is 79.2K, equivalent to 88 frames.
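
The "79.2K context length, equivalent to 88 frames" figure can be sanity-checked with simple arithmetic. The compression factors below (4x temporal and 8x spatial in the VideoVAE, 2x2 patchification in the VideoDiT) are illustrative assumptions for this back-of-envelope check, not published Allegro internals:

```python
# Back-of-envelope check of the stated 79.2K context length for 88 frames.
# ASSUMPTIONS (not official Allegro values): the VideoVAE compresses
# 4x in time and 8x in each spatial dimension, and the VideoDiT
# patchifies latents into 2x2 patches before the transformer.

frames = 88
height, width = 720, 1280

latent_frames = frames // 4                    # 22 latent frames
latent_h, latent_w = height // 8, width // 8   # 90 x 160 latent grid

tokens_per_frame = (latent_h // 2) * (latent_w // 2)  # 45 * 80 = 3600

context_length = latent_frames * tokens_per_frame
print(context_length)  # 79200, i.e. the advertised 79.2K
```

Under these assumed factors, 88 frames at 720 x 1280 flatten to exactly 79,200 transformer tokens, matching the advertised context length.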

Model info

| Model                   | Allegro-TI2V                                                                    | Allegro                                                                         |
|-------------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
| Description             | Text-Image-to-Video Generation Model                                            | Text-to-Video Generation Model                                                  |
| Download                | Hugging Face                                                                    | Hugging Face                                                                    |
| Parameters              | VAE: 175M, DiT: 2.8B                                                            | VAE: 175M, DiT: 2.8B                                                            |
| Inference Precision     | VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32); DiT/T5: BF16/FP32/TF32            | VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32); DiT/T5: BF16/FP32/TF32            |
| Context Length          | 79.2K                                                                           | 79.2K                                                                           |
| Resolution              | 720 x 1280                                                                      | 720 x 1280                                                                      |
| Frames                  | 88                                                                              | 88                                                                              |
| Video Length            | 6 seconds @ 15 FPS                                                              | 6 seconds @ 15 FPS                                                              |
| Single-GPU Memory Usage | 9.3 GB BF16 (with cpu_offload)                                                  | 9.3 GB BF16 (with cpu_offload)                                                  |
| Inference Time          | 20 min (single H100) / 3 min (8x H100)                                          | 20 min (single H100) / 3 min (8x H100)                                          |

Quick start

  1. Download the Allegro GitHub code.

  2. Install the necessary requirements.

    • Ensure Python >= 3.10, PyTorch >= 2.4, CUDA >= 12.4. For details, see requirements.txt.
    • It is recommended to use Anaconda to create a new environment (Python >= 3.10) to run the following example.
  3. Download the Allegro-TI2V model weights.

  4. Run inference.

    python single_inference_ti2v.py \
    --user_prompt 'The car drives along the road.' \
    --first_frame your/path/to/first_frame_image.png \
    --vae your/path/to/vae \
    --dit your/path/to/transformer \
    --text_encoder your/path/to/text_encoder \
    --tokenizer your/path/to/tokenizer \
    --guidance_scale 8 \
    --num_sampling_steps 100 \
    --seed 1427329220
    

    The output video resolution is fixed at 720 × 1280. Input images with different resolutions will be automatically cropped and resized to fit.

    | Argument             | Description                                                                                                                                              |
    |----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
    | --user_prompt        | [Required] Text prompt for image-to-video generation.                                                                                                     |
    | --first_frame        | [Required] First-frame image input for image-to-video generation.                                                                                         |
    | --last_frame         | [Optional] If provided, the model generates intermediate video content between the specified first and last frame images.                                 |
    | --enable_cpu_offload | [Optional] Offload parts of the model to the CPU to reduce GPU memory usage (about 9.3 GB, versus 27.5 GB without offloading); inference time increases significantly. |
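
    The inference script crops and resizes mismatched inputs internally; the exact preprocessing is not documented here. As a plausible sketch, a typical approach is to center-crop to the target aspect ratio before resizing. The function below (a hypothetical helper, assuming "720 x 1280" means 720 high by 1280 wide) computes such a crop box:

    ```python
    # Hypothetical preprocessing sketch: compute a center-crop box that
    # matches the 1280x720 (W x H) target aspect ratio. The actual crop
    # logic inside single_inference_ti2v.py may differ in detail.

    def crop_box_for_720x1280(w, h):
        """Return (left, top, right, bottom) for a center crop of a w x h
        image to a 16:9 aspect ratio; the crop is then resized to 1280x720
        (e.g. with Pillow: img.crop(box).resize((1280, 720)))."""
        target_ratio = 1280 / 720
        if w / h > target_ratio:
            # Image too wide: trim equal amounts from left and right.
            new_w = round(h * target_ratio)
            left = (w - new_w) // 2
            return (left, 0, left + new_w, h)
        # Image too tall (or exact fit): trim equal amounts top and bottom.
        new_h = round(w / target_ratio)
        top = (h - new_h) // 2
        return (0, top, w, top + new_h)

    print(crop_box_for_720x1280(1920, 1080))  # (0, 0, 1920, 1080): already 16:9
    ```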
  5. (Optional) Interpolate the video to 30 FPS

  • It is recommended to use EMA-VFI to interpolate the video from 15 FPS to 30 FPS.
  • For better visual quality, you can use imageio to save the video.
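
EMA-VFI is a learned interpolator that predicts motion-compensated intermediate frames. The toy sketch below only illustrates what 2x interpolation does to the frame count (one blended frame inserted between each adjacent pair), not how EMA-VFI actually computes intermediates:

```python
# Conceptual 2x frame-rate doubling: insert the average of each adjacent
# pair of frames. This is a stand-in for learned interpolation (EMA-VFI),
# which predicts motion-compensated intermediates rather than blends.
import numpy as np

def naive_double_fps(frames):
    """frames: uint8 array of shape (N, H, W, C) -> (2N - 1, H, W, C)."""
    mids = (frames[:-1].astype(np.float32) + frames[1:].astype(np.float32)) / 2
    out = np.empty((2 * len(frames) - 1,) + frames.shape[1:], dtype=frames.dtype)
    out[0::2] = frames                    # original frames at even indices
    out[1::2] = mids.astype(frames.dtype) # blended frames at odd indices
    return out

video = np.zeros((88, 4, 4, 3), dtype=np.uint8)  # tiny stand-in for 720x1280
doubled = naive_double_fps(video)
print(doubled.shape)  # (175, 4, 4, 3): 88 original + 87 inserted frames
```

88 frames at 15 FPS become 175 frames, which play back at 30 FPS for roughly the same duration.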

License

This repo is released under the Apache 2.0 License.
