---
pipeline_tag: image-to-video
license: mit
datasets:
  - openai/MMMLU
language:
  - am
metrics:
  - accuracy
base_model:
  - black-forest-labs/FLUX.1-dev
new_version: black-forest-labs/FLUX.1-dev
library_name: adapter-transformers
tags:
  - chemistry
---

# AnimateLCM-I2V for Fast Image-conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model finetuned with [AnimateLCM](https://huggingface.co/wangfuyun/AnimateLCM), following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769), without requiring teacher models.

[AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.

## Example Video

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/P3rcJbtTKYVnBfufZ_OVg.png)

For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[project page](https://animatelcm.github.io/)] | [[civitai](https://civitai.com/models/310920/animatelcm-i2v-fast-image-to-video-generation)].
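
## Usage Sketch

The few-step consistency sampling that this model is built on can be illustrated with the companion text-to-video AnimateLCM checkpoint, which `diffusers` supports through `AnimateDiffPipeline` and `LCMScheduler`. The sketch below is adapted from that text-to-video setup as a minimal, hedged example of 4-step LCM sampling; the base-model and LoRA names are borrowed from the T2V example rather than this I2V repository, whose image-conditioned pipeline lives in the project [code](https://github.com/G-U-N/AnimateLCM).

```python
# Minimal sketch of few-step AnimateLCM sampling with diffusers.
# Assumption: this uses the text-to-video AnimateLCM weights to illustrate the
# 4-step consistency sampling; the I2V checkpoint in this repo is driven
# through the project's own codebase (see the GitHub link above).
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the AnimateLCM motion module and attach it to an SD1.5-based pipeline.
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism",  # any personalized SD1.5 base model should work
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)

# The LCM scheduler is what enables few-step (e.g. 4-step) sampling.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# Load the AnimateLCM LoRA that distills the denoiser into a consistency model.
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])

pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="a boat sailing on a calm sea at sunset, best quality",
    negative_prompt="bad quality, worst quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=4,  # few-step sampling is the point of AnimateLCM
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatelcm.gif")
```

Note that consistency models tolerate much lower guidance scales than ordinary diffusion sampling; values around 1 to 2 with 4 to 8 steps are the typical operating range.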