---
license: cc-by-nc-4.0
---

# show-1-base

Pixel-based VDMs can generate motion accurately aligned with the textual prompt, but they typically incur high computational costs in both time and GPU memory, especially when generating high-resolution videos. Latent-based VDMs are more resource-efficient because they operate in a reduced-dimension latent space, but it is challenging for such a small latent space (e.g., 64×40 for 256×160 videos) to capture the rich visual semantic details described by the textual prompt.

To marry the strengths and alleviate the weaknesses of pixel-based and latent-based VDMs, we introduce **Show-1**, an efficient text-to-video model that generates videos with both strong video-text alignment and high visual quality.
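
To put the latent-space size quoted above in perspective, a quick back-of-the-envelope check (plain Python, illustrative only) of the spatial compression:

```python
# Spatial compression of a 256x160 frame into a 64x40 latent grid
pixel_w, pixel_h = 256, 160
latent_w, latent_h = 64, 40

factor_w = pixel_w // latent_w                         # downsampling along width
factor_h = pixel_h // latent_h                         # downsampling along height
area_ratio = (pixel_w * pixel_h) // (latent_w * latent_h)

print(factor_w, factor_h, area_ratio)  # 4 4 16
```

Each spatial axis is compressed 4×, so every latent position must summarize a 4×4 pixel neighborhood, which is why fine text-conditioned detail is hard to preserve at this scale.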

![](https://showlab.github.io/Show-1/assets/images/method.png)

## Model Details

This is the base model of Show-1, which generates videos with 8 keyframes at a resolution of 64x40. The model is fine-tuned from [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0) on the [WebVid-10M](https://maxbain.com/webvid-dataset/) dataset.

- **Developed by:** [Show Lab, National University of Singapore](https://sites.google.com/view/showlab/home?authuser=0)
- **Model type:** pixel- and latent-based cascaded text-to-video diffusion model
- **Cascade stage:** base (keyframe generation)
- **Finetuned from model:** [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0)
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Resources for more information:** [GitHub](https://github.com/showlab/Show-1), [Website](https://showlab.github.io/Show-1/), [arXiv](https://arxiv.org/abs/2309.15818)

## Usage

Clone the GitHub repository and install the requirements:

```bash
git clone https://github.com/showlab/Show-1.git
cd Show-1
pip install -r requirements.txt
```

Run the following command to generate a video from a text prompt. It will automatically download all the model weights from Hugging Face.

```bash
python run_inference.py
```

You can also download the weights manually:

```bash
git lfs install

# base
git clone https://huggingface.co/showlab/show-1-base
# interp
git clone https://huggingface.co/showlab/show-1-interpolation
# sr1
git clone https://huggingface.co/showlab/show-1-sr1
# sr2
git clone https://huggingface.co/showlab/show-1-sr2
```
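
The four checkpoints above form the Show-1 cascade, applied in order at inference time (keyframes, then temporal interpolation, then two super-resolution passes). A small illustrative helper mapping stage names to their Hub repos (the `hub_url` function is only an example, not part of the codebase):

```python
# Show-1 cascade stages and their Hugging Face Hub repos,
# in the order they are applied at inference time.
STAGES = {
    "base": "showlab/show-1-base",             # keyframe generation, 64x40
    "interp": "showlab/show-1-interpolation",  # temporal interpolation
    "sr1": "showlab/show-1-sr1",               # first super-resolution stage
    "sr2": "showlab/show-1-sr2",               # second super-resolution stage
}

def hub_url(stage: str) -> str:
    """Return the Hub URL for a cascade stage name."""
    return f"https://huggingface.co/{STAGES[stage]}"

print(hub_url("base"))  # https://huggingface.co/showlab/show-1-base
```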

## Citation

If you make use of our work, please cite our paper.

```bibtex
@misc{zhang2023show1,
      title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
      author={David Junhao Zhang and Jay Zhangjie Wu and Jia-Wei Liu and Rui Zhao and Lingmin Ran and Yuchao Gu and Difei Gao and Mike Zheng Shou},
      year={2023},
      eprint={2309.15818},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Model Card Contact

This model card is maintained by [David Junhao Zhang](https://junhaozhang98.github.io/) and [Jay Zhangjie Wu](https://jayzjwu.github.io/). For any questions, please feel free to contact us or open an issue in the repository.