makexin2001 committed on
Commit 074046d
1 Parent(s): 9689bb5

Update README.md


![008sxhY0gy1hdkzoxxmb6j32bc2c5wvu.jpg](https://cdn-uploads.huggingface.co/production/uploads/6567b05aac03c4c741604266/XEJuqcyiOU_9Ulql07EAQ.jpeg)

Files changed (1)
  1. README.md +50 -50
README.md CHANGED
@@ -1,83 +1,83 @@
  ---
- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
- # Doc / guide: https://huggingface.co/docs/hub/model-cards
  {}
  ---

- # Stable Video Diffusion Image-to-Video Model Card

- <!-- Provide a quick summary of what the model is/does. -->
  ![row01](output_tile.gif)
- Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.

- ## Model Details

- ### Model Description

- (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning.
- This model was trained to generate 25 frames at resolution 576x1024 given a context frame of the same size, finetuned from [SVD Image-to-Video [14 frames]](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid).
- We also finetune the widely used [f8-decoder](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency.
- For convenience, we additionally provide the model with the
- standard frame-wise decoder [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt_image_decoder.safetensors).


- - **Developed by:** Stability AI
- - **Funded by:** Stability AI
- - **Model type:** Generative image-to-video model
- - **Finetuned from model:** SVD Image-to-Video [14 frames]

- ### Model Sources

- For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
- which implements the most popular diffusion frameworks (both training and inference).

- - **Repository:** https://github.com/Stability-AI/generative-models
- - **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets


- ## Evaluation
- ![comparison](comparison.png)
- The chart above evaluates user preference for SVD-Image-to-Video over [GEN-2](https://research.runwayml.com/gen2) and [PikaLabs](https://www.pika.art/).
- SVD-Image-to-Video is preferred by human voters in terms of video quality. For details on the user study, we refer to the [research paper](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets)

- ## Uses

- ### Direct Use

- The model is intended for research purposes only. Possible research areas and tasks include

- - Research on generative models.
- - Safe deployment of models which have the potential to generate harmful content.
- - Probing and understanding the limitations and biases of generative models.
- - Generation of artworks and use in design and other artistic processes.
- - Applications in educational or creative tools.

- Excluded uses are described below.

- ### Out-of-Scope Use

- The model was not trained to be factual or true representations of people or events,
- and therefore using the model to generate such content is out-of-scope for the abilities of this model.
- The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).

- ## Limitations and Bias

- ### Limitations
- - The generated videos are rather short (<= 4sec), and the model does not achieve perfect photorealism.
- - The model may generate videos without motion, or very slow camera pans.
- - The model cannot be controlled through text.
- - The model cannot render legible text.
- - Faces and people in general may not be generated properly.
- - The autoencoding part of the model is lossy.


- ### Recommendations

- The model is intended for research purposes only.

- ## How to Get Started with the Model

- Check out https://github.com/Stability-AI/generative-models


 
  ---
+ # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
+ # Doc / guide: https://huggingface.co/docs/hub/model-cards
  {}
  ---

+ # Stable Video Diffusion Image-to-Video Model Card

+ <!-- Provide a quick summary of what the model is/does. -->
  ![row01](output_tile.gif)
+ Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes a still image as a conditioning frame and generates a video from it.

+ ## Model Details

+ ### Model Description

+ (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from image conditioning.
+ This model was trained to generate 25 frames at resolution 576x1024 given a context frame of the same size, finetuned from [SVD Image-to-Video [14 frames]](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid).
+ We also finetune the widely used [f8-decoder](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency.
+ For convenience, we additionally provide the model with the
+ standard frame-wise decoder [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt_image_decoder.safetensors).

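The figures above map directly onto inference-time settings. As a quick sanity check, here is a minimal sketch, assuming the `diffusers` `StableVideoDiffusionPipeline` wrapper for this checkpoint (the class and attribute names are diffusers conventions, not something this card specifies):

```python
import torch
from diffusers import StableVideoDiffusionPipeline

# Load the checkpoint in half precision. The temporally finetuned decoder
# described above ships as the pipeline's VAE.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)

print(type(pipe.vae).__name__)      # AutoencoderKLTemporalDecoder
print(pipe.unet.config.num_frames)  # 25, the training frame count noted above
```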
+ - **Developed by:** Stability AI
+ - **Funded by:** Stability AI
+ - **Model type:** Generative image-to-video model
+ - **Finetuned from model:** SVD Image-to-Video [14 frames]

+ ### Model Sources

+ For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models),
+ which implements the most popular diffusion frameworks (both training and inference).

+ - **Repository:** https://github.com/Stability-AI/generative-models
+ - **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets


+ ## Evaluation
+ ![comparison](comparison.png)
+ The chart above evaluates user preference for SVD-Image-to-Video over [GEN-2](https://research.runwayml.com/gen2) and [PikaLabs](https://www.pika.art/).
+ SVD-Image-to-Video is preferred by human voters in terms of video quality. For details on the user study, see the [research paper](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets).

+ ## Uses

+ ### Direct Use

+ The model is intended for research purposes only. Possible research areas and tasks include

+ - Research on generative models.
+ - Safe deployment of models which have the potential to generate harmful content.
+ - Probing and understanding the limitations and biases of generative models.
+ - Generation of artworks and use in design and other artistic processes.
+ - Applications in educational or creative tools.

+ Excluded uses are described below.

+ ### Out-of-Scope Use

+ The model was not trained to be a factual or true representation of people or events,
+ and therefore using the model to generate such content is out of scope for its abilities.
+ The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).

+ ## Limitations and Bias

+ ### Limitations
+ - The generated videos are rather short (<= 4 sec), and the model does not achieve perfect photorealism.
+ - The model may generate videos without motion, or with very slow camera pans.
+ - The model cannot be controlled through text.
+ - The model cannot render legible text.
+ - Faces and people in general may not be generated properly.
+ - The autoencoding part of the model is lossy.


+ ### Recommendations

+ The model is intended for research purposes only.

+ ## How to Get Started with the Model

+ Check out https://github.com/Stability-AI/generative-models

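Beyond the research repository, a common route is the `diffusers` wrapper. A minimal sketch, assuming the `StableVideoDiffusionPipeline` API (the example image URL and seed are arbitrary placeholders):

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # trade speed for lower GPU memory use

# Conditioning frame at the model's native 576x1024 resolution (height x width).
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png"
)
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
# decode_chunk_size limits how many of the 25 latent frames the temporal
# decoder processes at once; smaller values lower peak memory.
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

Expect a multi-gigabyte download and a recent GPU at the native resolution; the `generative-models` repository above remains the reference for training and research inference.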