StevenZhang committed on
Commit 1cd2cc8
1 Parent(s): ca97fdf

Update README.md

Files changed (1)
  1. README.md +25 -26
README.md CHANGED
@@ -43,18 +43,18 @@ widgets:
  # I2VGen-XL高清图像生成视频大模型


- 本项目**I2VGen-XL**旨在解决根据输入图像生成高清视频任务。**I2VGen-XL**由达摩院研发的高清视频生成基础模型,其核心部分包含两个阶段,分别解决语义一致性和清晰度的问题,参数量共计约37亿,模型经过在大规模视频和图像数据混合预训练,并在少量精品数据上微调得到,该数据分布广泛、类别多样化,模型对不同的数据均有良好的泛化性。项目于现有的视频生成模型,**I2VGen-XL**在清晰度、质感、语义、时序连续性等方面均具有明显的优势。

- 此外,**I2VGen-XL**的许多设计理念继承于我们已经公开的工作**VideoComposer**,您可以参考我们的[VideoComposer](https://videocomposer.github.io)和本项目的Github代码库了解详细细节

- The **I2VGen-XL** project aims to address the task of generating high-definition videos based on input images. Developed by Alibaba Cloud, the **I2VGen-XL** is a fundamental model for generating high-definition videos. Its core components consist of two stages that address the issues of semantic consistency and clarity, totaling approximately 3.7 billion parameters. The model is pre-trained on a large-scale mix of video and image data and fine-tuned on a small number of high-quality data sets with a wide range of distributions and diverse categories. The model demonstrates good generalization capabilities for different data types. Compared to existing video generation models, **I2VGen-XL** has significant advantages in terms of clarity, texture, semantics, and temporal continuity.

- Additionally, many of the design concepts for **I2VGen-XL** are inherited from our publicly available work, **VideoComposer**. For detailed information, please refer to our [VideoComposer](https://videocomposer.github.io) and the Github code repository for this project.

  <center>
  <p align="center">
- <img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/image/Fig_twostage.png"/>
- <br/>
  Fig.1 I2VGen-XL
  <p>
  </center>
@@ -63,21 +63,21 @@ Additionally, many of the design concepts for **I2VGen-XL** are inherited from o
  ## 模型介绍 (Introduction)

- **I2VGen-XL**建立在Stable Diffusion之上,如图Fig.2所示,通过专门设计的时空UNet在隐空间中进行时空建模并通过解码器重建出最终视频。为能够生成720P视频,我们将**I2VGen-XL**分为两个阶段,第一阶段保证语义一致性但低分辨率,第二阶段通过DDIM逆运算并在新的VLDM上进行去噪以提高视频分辨率以及同时提升时间和空间上的一致性。通过在模型、训练和数据上的联合优化,本项目主要具有以下几个特点:

  - 高清&宽屏,可以直接生成720P(1280*720)分辨率的视频,且相比于现有的开源项目,不仅分辨率得到有效提高,其生成的宽屏视频可以适合更多的场景
- - 无水印,模型通过我们内部大规模无水印视频/图像训练,并在高质量数据微调得到,生成的无水印视频可适用更多视频平台,减少许多限制
  - 连续性,通过特定训练和推理策略,在视频的细节生成的稳定性上(时间和空间维度)有明显提高
- - 质感好,通过收集特定的风格的视频数据训练,使得生成的模型在质感得到明显提升,可以生成科技感、电影色、卡通风格和素描等类型视频

  以下为生成的部分案例:

- **I2VGen-XL** is built on Stable Diffusion, as shown in Fig.2, and uses a specially designed spatiotemporal UNet to perform spatiotemporal modeling in the latent space, and then reconstructs the final video through the decoder. In order to generate 720P videos, **I2VGen-XL** is divided into two stages. The first stage guarantees semantic consistency but with low resolution, while the second stage uses the DDIM inverse operation and applies denoising on a new VLDM to improve the resolution and spatiotemporal consistency of the video. Through joint optimization of the model, training, and data, this project has the following characteristics:

  - High-definition & widescreen: it can directly generate 720P (1280*720) videos; compared with existing open-source projects, not only is the resolution effectively improved, but the widescreen videos it produces are also suitable for more scenarios.
- - No watermark, the model is trained on a large-scale watermark-free video/image dataset internally and fine-tuned on high-quality data, generating watermark-free videos that can be applied to more video platforms and reducing many restrictions.
  - Continuity: through specific training and inference strategies, the stability of detail generation in videos (in both the temporal and spatial dimensions) is significantly improved.
  - Good texture: by training on videos collected in specific styles, the generated videos show a clear improvement in texture and cover sci-fi, cinematic-color, cartoon, sketch, and other styles.


  Below are some examples generated by the model:
@@ -280,9 +280,9 @@ Below are some examples generated by the model:
  ### 依赖项 (Dependency)


- 首先你需要确定你的系统安装了*ffmpeg*命令,如果没有,可以通过以下命令来安装:

- First, you need to ensure that your system has installed the ffmpeg command. If it is not installed, you can install it using the following command:

  ```bash
  sudo apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
@@ -293,7 +293,6 @@ sudo apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
  The **I2VGen-XL** project is compatible with the ModelScope codebase, and the following are some of the dependencies that need to be installed for this project.

-
  ```bash
  pip install modelscope==1.8.4
  pip install xformers==0.0.20
@@ -308,7 +307,6 @@ pip install scipy
  pip install imageio
  pip install pytorch-lightning
  pip install torchsde
- pip install easydict
  ```

@@ -359,23 +357,24 @@ Please visit <a href="https://modelscope.cn/models/damo/Video-to-Video/summary">
  本**I2VGen-XL**项目的模型在处理以下情况会存在局限性:
  - 小目标生成能力有限,在生成较小目标的时候,会存在一定的错误
- - 快速运动目标生成能力有限,当生成快速运动目标时,会存在一定的假象
  - 生成速度较慢,生成高清视频会明显导致生成速度减慢

- 此外,我们研究也发现,生成的视频空间上的质量和时序上的变化速度在一定程度上存在互斥现象,在本项目我们选择了其折中的模型,兼顾两则的平衡。

- The model of the **I2VGen-XL** project has limitations in the following scenarios:
- - Limited ability to generate small objects: There may be some errors when generating smaller objects.
- - Limited ability to generate fast-moving objects: There may be some artifacts when generating fast-moving objects.
- - Slow generation speed: Generating high-definition videos significantly slows down the generation speed.

- Additionally, our research has found that there is a trade-off between the spatial quality and temporal variability of the generated videos. In this project, we have chosen a model that strikes a balance between the two.

- *如果您正在尝试使用我们的模型,我们建议您首先在使用第一阶段得到满意的符合语义的视频之后,再尝试第二阶段的调整(因为该过程比较耗时),这样可以提高您的使用效率,更容易得到更好的结果。*

- *If you are trying to use our model, we recommend that you first focus on obtaining satisfactory semantic-consistent videos using the first stage before attempting adjustments in the second stage (as this process can be time-consuming). This approach will improve your efficiency and increase the likelihood of achieving better results.*

  ## 训练数据介绍 (Training Data)
@@ -395,9 +394,9 @@ Our training data mainly comes from various sources and has the following attrib
  - High-quality data construction: To improve the quality of the model-generated videos, we constructed approximately 200,000 high-quality data pairs for fine-tuning the pre-training model.


- 相关的技术文档正在撰写中,欢迎及时关注。

- The relevant technical report is currently being written, and we welcome you to stay tuned for updates.


  ## 相关论文以及引用信息 (Reference)
 
  # I2VGen-XL高清图像生成视频大模型


+ 本项目**I2VGen-XL**旨在解决根据输入图像生成高清视频的任务。**I2VGen-XL**是达摩院研发的高清视频生成基础模型之一,其核心部分包含两个阶段,分别解决语义一致性和清晰度的问题,参数量共计约37亿。模型经过大规模视频和图像数据混合预训练,并在少量精品数据上微调得到,该数据分布广泛、类别多样化,模型对不同的数据均有良好的泛化性。相比于现有的视频生成模型,**I2VGen-XL**在清晰度、质感、语义、时序连续性等方面均具有明显的优势。

+ 此外,**I2VGen-XL**的许多设计理念和设计细节(比如核心的UNet部分)都继承于我们已经公开的工作**VideoComposer**,您可以参考我们的[VideoComposer](https://videocomposer.github.io)和本项目的[ModelScope](https://github.com/modelscope/modelscope)代码库了解详细细节。
+

+ The **I2VGen-XL** project aims to address the task of generating high-definition videos from input images. **I2VGen-XL** is one of the high-quality video generation foundation models developed by DAMO Academy. Its core consists of two stages that address semantic consistency and video quality respectively, with approximately 3.7 billion parameters in total. The model was pre-trained on a large-scale mixture of video and image data and fine-tuned on a small amount of high-quality data; this data is broadly distributed and diverse in category, and the model generalizes well to different kinds of data. Compared with existing video generation models, **I2VGen-XL** has clear advantages in quality, texture, semantics, and temporal continuity.


+ Additionally, many design concepts and details of **I2VGen-XL** (such as the core UNet) are inherited from our publicly available work, **VideoComposer**. For detailed information, please refer to our [VideoComposer](https://videocomposer.github.io) and the [ModelScope](https://github.com/modelscope/modelscope) code repository for this project.
  <center>
  <p align="center">
+ <img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/image/Fig_twostage.png"/><br/>

  Fig.1 I2VGen-XL
  <p>
  </center>
 
  ## 模型介绍 (Introduction)

+ 如图Fig.2所示,**I2VGen-XL**是一种基于隐空间的视频扩散模型(VLDM),其通过我们专门设计的时空UNet(ST-UNet)在隐空间中进行时空建模并通过解码器重建出最终视频(具体模型结构可以参考[VideoComposer](https://videocomposer.github.io))。为能够生成720P视频,我们将**I2VGen-XL**分为两个阶段,第一阶段是在低分辨率条件下保证语义一致性,第二阶段是利用新的VLDM进行去噪以提高视频分辨率,并同时提升时间和空间上的一致性。通过在模型、数据和训练上的联合优化,**I2VGen-XL**主要具有以下几个特点:

  - 高清&宽屏,可以直接生成720P(1280*720)分辨率的视频,且相比于现有的开源项目,不仅分辨率得到有效提高,其生成的宽屏视频可以适合更多的场景
  - 连续性,通过特定训练和推理策略,在视频的细节生成的稳定性上(时间和空间维度)有明显提高
+ - 质感好,通过收集特定风格的视频数据进行训练,使得生成的视频在质感上得到明显提升,可以生成科技感、电影色、卡通风格和素描等类型视频
+ - 无水印,模型通过我们内部大规模无水印视频/图像训练,并在高质量数据微调得到,生成的无水印视频可适用更多视频平台,减少许多限制

  以下为生成的部分案例:

+ As shown in Fig.2, **I2VGen-XL** is a video latent diffusion model (VLDM). It utilizes our specially designed spatio-temporal UNet (ST-UNet; for model details, please refer to [VideoComposer](https://videocomposer.github.io)) to perform spatio-temporal modeling in the latent space and reconstructs the generated video through a decoder. To generate 720P videos, we divide **I2VGen-XL** into two stages: the first stage ensures semantic consistency at low resolution, while the second stage uses a new VLDM to denoise, increase the video resolution, and enhance both temporal and spatial consistency. Through joint optimization of the model, data, and training, **I2VGen-XL** has the following characteristics:

  - High-definition & widescreen: it can directly generate 720P (1280*720) videos; compared with existing open-source projects, not only is the resolution effectively improved, but the widescreen videos it produces are also suitable for more scenarios.
  - Continuity: through specific training and inference strategies, the stability of detail generation in videos (in both the temporal and spatial dimensions) is significantly improved.
  - Good texture: by training on videos collected in specific styles, the generated videos show a clear improvement in texture and cover sci-fi, cinematic-color, cartoon, sketch, and other styles.
+ - No watermark: the model is trained internally on large-scale watermark-free video/image data and fine-tuned on high-quality data; the watermark-free videos it generates can be used on more video platforms with fewer restrictions.


  Below are some examples generated by the model:
 
  ### 依赖项 (Dependency)


+ 首先你需要确定你的系统安装了`ffmpeg`命令,如果没有,可以通过以下命令来安装:

+ First, you need to ensure that your system has installed the `ffmpeg` command. If it is not installed, you can install it using the following command:

  ```bash
  sudo apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
 
  The **I2VGen-XL** project is compatible with the ModelScope codebase, and the following are some of the dependencies that need to be installed for this project.

  ```bash
  pip install modelscope==1.8.4
  pip install xformers==0.0.20

  pip install imageio
  pip install pytorch-lightning
  pip install torchsde
  ```
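After installing, a quick sanity check can confirm that the key Python packages and the `ffmpeg` binary are actually visible to your environment. This is a convenience sketch, not part of the official project; the helper name `check_environment` and the subset of packages checked are our own choices.

```python
import importlib.util
import shutil


def check_environment() -> dict:
    """Report which key dependencies from the install steps are available.

    Illustrative helper only: it checks a representative subset of the
    pip packages listed above, plus the ffmpeg binary on PATH.
    """
    packages = ["modelscope", "xformers", "torchsde"]
    # find_spec returns None when a top-level package is not importable.
    status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}
    # shutil.which returns None when the binary is not on PATH.
    status["ffmpeg"] = shutil.which("ffmpeg") is not None
    return status


missing = [name for name, ok in check_environment().items() if not ok]
if missing:
    print("missing dependencies:", ", ".join(missing))
```

Running this before the pipeline saves a failed (and slow) model download when a dependency is absent.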
  本**I2VGen-XL**项目的模型在处理以下情况会存在局限性:
  - 小目标生成能力有限,在生成较小目标的时候,会存在一定的错误
+ - 快速运动目标生成能力有限,当生成快速运动目标时,可能会出现一些假象和不合理的情况
  - 生成速度较慢,生成高清视频会明显导致生成速度减慢

+ 此外,我们研究也发现,生成的视频空间上的质量和时序上的变化速度在一定程度上存在互斥现象,在本项目中我们选择了折中的模型,兼顾两者间的平衡。
+

+ The model of the **I2VGen-XL** project still has the following limitations:
+ - Limited ability to generate small objects: there may be some errors when generating smaller objects.
+ - Limited ability to generate fast-moving objects: there may be some artifacts when generating fast-moving objects.
+ - Slow generation speed: generating high-definition videos noticeably slows down the generation process.

+ In addition, our research has also found that there is a certain trade-off between the spatial quality of the generated videos and the speed of their temporal variation; in this project, we chose a compromise model that balances the two.

+ **如果您正在尝试使用我们的模型,我们建议您首先在第一阶段中得到语义符合预期的视频后(离线运行的时候可以修改`configuration.json`文件中的`Seed`生成不同视频),再尝试第二阶段的视频修正(因为该过程比较耗时),这样可以提高您的使用效率,也更容易得到更好的结果。**

+ **If you are trying to use our model, we suggest that you first obtain videos whose semantics meet your expectations in the first stage (when running offline, you can modify the `Seed` in the `configuration.json` file to generate different videos), and only then try video refinement in the second stage (as that process takes more time). This will improve your efficiency and make it easier to achieve better results.**

  ## 训练数据介绍 (Training Data)

  - High-quality data construction: To improve the quality of the model-generated videos, we constructed approximately 200,000 high-quality data pairs for fine-tuning the pre-training model.


+ 更强更灵活的视频生成模型会持续发布,其背后的技术报告正在撰写中,欢迎及时关注。

+ More powerful and flexible video generation models will continue to be released, and the technical reports behind them are currently being written. Please stay tuned for updates.


  ## 相关论文以及引用信息 (Reference)