|
|
|
7 |
|
8 |
# zeroscope_v2 30x448x256
|
9 |
|
10 |
+
a watermark-free Modelscope-based video model optimized for producing high-quality 16:9 compositions and a smooth video output.<br />
|
11 |
+
This model was trained using 9,923 clips and 29,769 tagged frames at 30 frames, 448x256 resolution.<br />
|
12 |
+
zeroscope_v2 30x448x256 is specifically designed for upscaling with [Potat1](https://huggingface.co/camenduru/potat1) using vid2vid in the 1111 Text2Video extension by [kabachuha](https://github.com/kabachuha). <br />
|
13 |
|
14 |
+
Leveraging this model as a preliminary step allows for superior overall compositions at higher resolutions in Potat1, permitting faster exploration in 448x256 before transitioning to a high-resolution render. <br />
|
15 |
+
See an [example output](https://i.imgur.com/lj90FYP.mp4) that has been upscaled to 1152 x 640 using Potat1.<br />
|
16 |
|
|
|
17 |
|
### Using it with the 1111 Text2Video extension

1. Rename the file 'zeroscope_v2_30x448x256.pth' to 'text2video_pytorch_model.pth'.
2. Rename the file 'zeroscope_v2_30x448x256_text.bin' to 'open_clip_pytorch_model.bin'.
3. Replace the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory.
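The three steps above can be sketched as a small helper script, assuming the weight files have already been downloaded. The function name, `download_dir`, and the example paths are placeholders for your own download location and webui install; only the target file names and the `models\ModelScope\t2v` directory come from the steps above.

```python
import shutil
from pathlib import Path

# File names the 1111 Text2Video extension expects, mapped from the
# downloaded zeroscope_v2 file names (steps 1-3 above).
EXPECTED_NAMES = {
    "zeroscope_v2_30x448x256.pth": "text2video_pytorch_model.pth",
    "zeroscope_v2_30x448x256_text.bin": "open_clip_pytorch_model.bin",
}

def install_weights(download_dir: Path, t2v_dir: Path) -> None:
    """Copy the downloaded weights into models/ModelScope/t2v under the
    names the extension expects, replacing any existing files."""
    t2v_dir.mkdir(parents=True, exist_ok=True)
    for src_name, dst_name in EXPECTED_NAMES.items():
        shutil.copy2(download_dir / src_name, t2v_dir / dst_name)
```

Point `t2v_dir` at `stable-diffusion-webui/models/ModelScope/t2v` inside your webui install; performing the three steps by hand works just as well.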
### Upscaling recommendations

For upscaling, it's recommended to use Potat1 via vid2vid in the 1111 extension. Aim for a resolution of 1152x640 and a denoise strength between 0.66 and 0.85, and use the same prompt and settings that were used to generate the original clip.

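The recommended upscale pass can be summarized as a small settings sketch. The key names below are descriptive placeholders, not the extension's actual field names; the values (1152x640, denoise 0.66–0.85, reused prompt) are the recommendations above.

```python
# Illustrative summary of the recommended Potat1 vid2vid upscale pass.
# Key names are placeholders, not the 1111 extension's real fields.
upscale_pass = {
    "model": "potat1",
    "width": 1152,
    "height": 640,
    "denoising_strength": 0.75,  # pick anywhere in the 0.66-0.85 range
    "prompt": "<same prompt and settings as the original 448x256 clip>",
}

def denoise_in_range(settings: dict) -> bool:
    """Check the chosen denoise strength falls in the recommended range."""
    return 0.66 <= settings["denoising_strength"] <= 0.85
```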
### Known issues

Lower resolutions or fewer frames can lead to suboptimal output.<br />
Certain clips might appear with cuts. This will be fixed in the upcoming 2.1 version, which will incorporate a cleaner dataset.<br />
Some clips may play back too slowly, requiring prompt engineering for an increased pace.