---
pipeline_tag: image-to-video
license: mit
datasets:
  - openai/MMMLU
language:
  - am
metrics:
  - accuracy
base_model:
  - black-forest-labs/FLUX.1-dev
new_version: black-forest-labs/FLUX.1-dev
library_name: adapter-transformers
tags:
  - chemistry
---

# AnimateLCM-I2V for Fast Image-conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model fine-tuned with AnimateLCM, following the strategy proposed in the AnimateLCM paper, without requiring teacher models.
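To illustrate why a consistency model can generate in as few as 4 steps, here is a minimal, purely illustrative sketch of few-step consistency sampling: the model predicts a clean latent in one shot, then the estimate is re-noised at the next lower noise level and refined. The `consistency_fn` below is a toy stand-in (the real model is a large video U-Net), and the noise schedule values are made up for the example; none of this is the model's actual API.

```python
import numpy as np

def consistency_fn(x_t, sigma):
    # Hypothetical stand-in for the learned consistency function f_theta:
    # given a noisy latent x_t at noise level sigma, predict the clean latent.
    # Here we simply shrink toward zero for illustration.
    return x_t / (1.0 + sigma)

def consistency_sample(shape, sigmas, rng):
    """Few-step consistency sampling: predict the clean latent in one step,
    then re-noise it at the next (lower) noise level. len(sigmas) steps total."""
    x = rng.standard_normal(shape) * sigmas[0]   # start from pure noise
    for i, sigma in enumerate(sigmas):
        x0 = consistency_fn(x, sigma)            # one-step clean estimate
        if i + 1 < len(sigmas):
            # re-noise the estimate for the next, less noisy step
            x = x0 + sigmas[i + 1] * rng.standard_normal(shape)
        else:
            x = x0                               # final step: keep the estimate
    return x

rng = np.random.default_rng(0)
# 4-step schedule (illustrative values), toy latent of shape (frames, h, w)
sample = consistency_sample((4, 8, 8), sigmas=[14.6, 3.0, 0.9, 0.3], rng=rng)
print(sample.shape)
```

Because each step yields a full clean-latent estimate rather than a small denoising increment, a short schedule like the 4-step one above already produces a usable sample.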

*AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data* by Fu-Yun Wang et al.

Example video:


For more details, please refer to our [paper] | [code] | [proj-page] | [civitai].