patrickvonplaten committed
Commit 8c80146
2 Parent(s): 1aaf8a4 a4db70b

Merge branch 'main' of https://huggingface.co/damo-vilab/text-to-video-ms-1.7b-legacy into main

Files changed (2)
  1. README.md +26 -6
  2. model_index.json +1 -1
README.md CHANGED
@@ -33,26 +33,46 @@ This model has a wide range of applications, and can reason and generate videos
 Let's first install the libraries required:
 
 ```bash
-$ pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
+$ pip install git+https://github.com/huggingface/diffusers transformers accelerate
 ```
 
 Now, generate a video:
 
 ```python
 import torch
-from diffusers import TextToVideoMSPipeline, DPMSolverMultistepScheduler
+from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
 from diffusers.utils import export_to_video
 
-pipe = TextToVideoMSPipeline.from_pretrained("diffusers/ms-text-to-video-1.7b", torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b-legacy", torch_dtype=torch.float16)
 pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe = pipe.to("cuda")
+pipe.enable_model_cpu_offload()
 
 prompt = "Spiderman is surfing"
-video_frames = pipe(prompt).frames
+video_frames = pipe(prompt, num_inference_steps=25).frames
 video_path = export_to_video(video_frames)
-print(video_path)
 ```
 
+Here are some results:
+
+<table>
+<tr>
+<td><center>
+An astronaut riding a horse.
+<br>
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astr.gif"
+alt="An astronaut riding a horse."
+style="width: 300px;" />
+</center></td>
+<td ><center>
+Darth vader surfing in waves.
+<br>
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vader.gif"
+alt="Darth vader surfing in waves."
+style="width: 300px;" />
+</center></td>
+</tr>
+</table>
+
 ## View results
 
 The above code will display the save path of the output video, and the current encoding format can be played with [VLC player](https://www.videolan.org/vlc/).
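The new README code replaces `pipe = pipe.to("cuda")` with a model CPU offload call. As a rough, self-contained sketch of why that lowers peak GPU memory (toy classes below are illustrative assumptions, not the diffusers implementation): each sub-model is moved to the accelerator only while it runs, then returned to the CPU, so only one sub-model is resident at a time.

```python
class FakeModule:
    """Stand-in for a pipeline sub-model (text encoder, UNet, VAE)."""

    def __init__(self, name):
        self.name = name
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self


class OffloadedPipeline:
    """Toy model-CPU-offload loop: load each sub-model just in time."""

    def __init__(self, *modules):
        self.modules = modules

    def run(self):
        peak_on_gpu = 0
        for m in self.modules:
            m.to("cuda")  # load only this sub-model onto the GPU
            resident = sum(x.device == "cuda" for x in self.modules)
            peak_on_gpu = max(peak_on_gpu, resident)
            m.to("cpu")   # release it before the next sub-model runs
        return peak_on_gpu


pipe = OffloadedPipeline(
    FakeModule("text_encoder"), FakeModule("unet"), FakeModule("vae")
)
print(pipe.run())  # 1: at most one sub-model resident at a time
```

With `pipe.to("cuda")` all three sub-models would be resident at once; the trade-off of offloading is extra host-to-device transfer time per step.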
model_index.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_class_name": "TextToVideoMSPipeline",
+  "_class_name": "TextToVideoSDPipeline",
   "_diffusers_version": "0.15.0.dev0",
   "scheduler": [
     "diffusers",