Doubiiu committed
Commit 1e5a3f4
1 Parent(s): 42f16af

Update README.md

Files changed (1)
  1. README.md +60 -3
README.md CHANGED
@@ -1,5 +1,62 @@
  ---
- license: other
- license_name: license
- license_link: LICENSE
+ # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
+ # Doc / guide: https://huggingface.co/docs/hub/model-cards
+ {}
  ---
+
+ # DynamiCrafter (256x256) (Text-)Image-to-Video/Image Animation Model Card
+ ![row01](DynamiCrafter-256.webp)
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ DynamiCrafter (256x256) (Text-)Image-to-Video is a video diffusion model that <br> takes in a still image as a conditioning image and a text prompt describing the dynamics,<br> and generates a short video from them.
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ DynamiCrafter, a (Text-)Image-to-Video/Image Animation approach, aims to generate <br>
+ short video clips (~2 seconds) from a conditioning image and a text prompt.
+
+ This model was trained to generate 16 video frames at a resolution of 256x256, <br>
+ given a context frame of the same resolution.
+
+ - **Developed by:** CUHK & Tencent AI Lab
+ - **Funded by:** CUHK & Tencent AI Lab
+ - **Model type:** Generative (text-)image-to-video model
+ - **Finetuned from model:** VideoCrafter1 (256x256)
+
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+ For research purposes, we recommend our GitHub repository (https://github.com/Doubiiu/DynamiCrafter), <br>
+ which includes the detailed implementation.
+ - **Repository:** https://github.com/Doubiiu/DynamiCrafter
+ - **Paper:** https://arxiv.org/abs/2310.12190
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ We developed this repository for RESEARCH purposes, so the model may only be used for personal/research/non-commercial purposes.
+
+ ## Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+ - The generated videos are relatively short (2 seconds, FPS=8).
+ - The model cannot render legible text.
+ - Faces and people in general may not be generated properly.
+ - The autoencoding part of the model is lossy, resulting in slight flickering artifacts.
+
+ ## How to Get Started with the Model
+
+ Check out https://github.com/Doubiiu/DynamiCrafter for the inference code and usage instructions.
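+
+ As a rough sketch, the released checkpoint can be fetched from the Hub with `huggingface_hub` and then passed to the inference scripts in the GitHub repository. The repo id and checkpoint filename below are assumptions, so check the Files tab of this model page for the actual names.
+
+ ```python
+ # Minimal sketch: download the 256x256 checkpoint, then point the DynamiCrafter
+ # inference scripts (from the GitHub repository above) at the downloaded path.
+ from huggingface_hub import hf_hub_download
+
+ ckpt_path = hf_hub_download(
+     repo_id="Doubiiu/DynamiCrafter",  # assumed Hub repo id for the 256x256 model
+     filename="model.ckpt",            # assumed checkpoint filename
+ )
+ print(ckpt_path)  # pass this path as the checkpoint argument of the inference script
+ ```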