patrickvonplaten committed f61a46b (parent 2402636): Update README.md
</tr>
</table>
## Long Video Generation

You can optimize for memory usage by enabling attention and VAE slicing and using Torch 2.0. This should allow you to generate videos up to 10 seconds on less than 16 GB of GPU VRAM.

```bash
$ pip install git+https://github.com/huggingface/diffusers transformers accelerate
```

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# load pipeline
pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# optimize for GPU memory
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

# generate
prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25, num_frames=80).frames

# convert to video
video_path = export_to_video(video_frames)
```
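
As a rough sanity check on the 10-second figure, the frame count and playback rate determine the clip length. A minimal sketch, assuming an 8 fps playback rate (an assumption here; match it to the fps you actually export at):

```python
# Relate num_frames to clip duration.
# ASSUMPTION: playback at 8 frames per second; adjust to your export fps.
DEFAULT_FPS = 8

def clip_seconds(num_frames: int, fps: int = DEFAULT_FPS) -> float:
    """Return the duration in seconds of a clip with num_frames frames at fps."""
    return num_frames / fps

print(clip_seconds(80))  # num_frames=80 at 8 fps -> 10.0 seconds
```

Under this assumption, `num_frames=80` in the pipeline call above corresponds to the 10-second upper bound mentioned earlier.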
## View results

The above code returns the save path of the output video. The exported video can be played with [VLC player](https://www.videolan.org/vlc/).