SeanScripts committed
Commit 4e1a9f6 • 1 Parent(s): c255582
Create README.md
README.md ADDED
@@ -0,0 +1,27 @@
---
base_model:
- rain1011/pyramid-flow-sd3
pipeline_tag: text-to-video
library_name: diffusers
---

Converted to bfloat16 from [rain1011/pyramid-flow-sd3](https://huggingface.co/rain1011/pyramid-flow-sd3). Use the text encoders and tokenizers from that repo (or from SD3); there is no point in re-uploading them unchanged.
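
If you want everything in one local folder, one option is to pull this repo's bf16 weights and the text encoders/tokenizers from the original repo with `huggingface_hub`. This is only a sketch: the repo id for this conversion, the target directory, and the subfolder patterns below are assumptions, so adjust them to your setup.

```
from huggingface_hub import snapshot_download

local_dir = "pyramid-flow-sd3-bf16"  # assumed local target directory

# bf16 weights from this repo (assumed repo id; replace with the actual one)
snapshot_download("SeanScripts/pyramid-flow-sd3-bf16", local_dir=local_dir)

# Text encoders and tokenizers from the original repo (assumed subfolder layout)
snapshot_download(
    "rain1011/pyramid-flow-sd3",
    local_dir=local_dir,
    allow_patterns=["text_encoder*", "tokenizer*"],
)
```

The resulting folder can then be used as the checkpoint path for the inference code linked below.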

Inference code is available here: [github.com/jy0205/Pyramid-Flow](https://github.com/jy0205/Pyramid-Flow/tree/main).

Both 384p and 768p work on 24 GB VRAM. For 16 steps (a 5-second video), 384p takes a little over a minute on a 3090 and 768p takes about 7 minutes. For 31 steps (a 10-second video), 384p took about 10 minutes.
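
For reference, here is a text-to-video sketch adapted from the upstream repo's README (treat it as a sketch rather than the exact current API, and see that repo for details). It assumes a merged checkpoint directory like the one above and the one-line scheduler change described below; `temp=16` corresponds to the 5-second setting and `temp=31` to the 10-second setting.

```
import torch
from pyramid_dit import PyramidDiTForVideoGeneration
from diffusers.utils import export_to_video

torch.cuda.set_device(0)

# Use 'diffusion_transformer_384p' for the 384p variant.
model = PyramidDiTForVideoGeneration(
    "pyramid-flow-sd3-bf16",  # assumed local checkpoint directory from the snippet above
    "bf16",
    model_variant="diffusion_transformer_768p",
)
model.vae.to("cuda")
model.dit.to("cuda")
model.text_encoder.to("cuda")
model.vae.enable_tiling()

prompt = "A movie trailer featuring the adventures of a 30 year old space man"

with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch.bfloat16):
    frames = model.generate(
        prompt=prompt,
        num_inference_steps=[20, 20, 20],
        video_num_inference_steps=[10, 10, 10],
        height=768,               # 384p variant: height=384, width=640
        width=1280,
        temp=16,                  # 16 -> ~5 s video, 31 -> ~10 s video
        guidance_scale=9.0,       # upstream suggests 7 for the 384p variant
        video_guidance_scale=5.0,
        output_type="pil",
        save_memory=True,
    )

export_to_video(frames, "text_to_video_sample.mp4", fps=24)
```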

In `diffusion_schedulers/scheduling_flow_matching.py`, in the function `init_sigmas_for_each_stage`, one small change needs to be made.

Change this line:
```
self.timesteps_per_stage[i_s] = torch.from_numpy(timesteps[:-1])
```
To this:
```
self.timesteps_per_stage[i_s] = timesteps[:-1]
```

This allows the model to work with newer versions of PyTorch and other libraries than those listed in the requirements.

Working with torch 2.4.1+cu124.