---
tags:
- Text-to-Video
---

![model example](https://i.imgur.com/3CQlFBR.png)

# zeroscope_dark_v2 30x448x256
A watermark-free ModelScope-based video model optimized for producing high-quality 16:9 compositions with varying brightness and smooth video output. It was trained on 9,923 clips and 29,769 tagged frames at 30 frames, 448x256 resolution.<br />
zeroscope_v2 30x448x256 is specifically designed for upscaling with [Potat1](https://huggingface.co/camenduru/potat1) using vid2vid in the [1111 text2video](https://github.com/kabachuha/sd-webui-text2video) extension by [kabachuha](https://github.com/kabachuha). Using this model as a preliminary step yields superior overall compositions at higher resolutions in Potat1, letting you explore quickly at 448x256 before committing to a high-resolution render.<br />

### Using it with the 1111 text2video extension

1. Rename the file 'zeroscope_v2_dark_30x448x256.pth' to 'text2video_pytorch_model.pth'.
2. Rename the file 'zeroscope_v2_dark_30x448x256_text.bin' to 'open_clip_pytorch_model.bin'.
3. Replace the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory.
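
The three steps above can be sketched as a small Python helper. This is not part of the extension, just an illustration; the download location and webui path are assumptions you should adjust, and the demo at the bottom runs on empty placeholder files in a temporary directory:

```python
import shutil
import tempfile
from pathlib import Path

# Downloaded filename -> name the 1111 text2video extension expects.
RENAMES = {
    "zeroscope_v2_dark_30x448x256.pth": "text2video_pytorch_model.pth",
    "zeroscope_v2_dark_30x448x256_text.bin": "open_clip_pytorch_model.bin",
}

def install_weights(download_dir: Path, t2v_dir: Path) -> list[Path]:
    """Rename each downloaded weight file and move it into the t2v directory."""
    t2v_dir.mkdir(parents=True, exist_ok=True)
    installed = []
    for src_name, dst_name in RENAMES.items():
        dst = t2v_dir / dst_name
        shutil.move(str(download_dir / src_name), str(dst))
        installed.append(dst)
    return installed

# Demo on placeholder files; point these paths at your real downloads/webui.
tmp = Path(tempfile.mkdtemp())
for name in RENAMES:
    (tmp / name).touch()
installed = install_weights(
    tmp, tmp / "stable-diffusion-webui" / "models" / "ModelScope" / "t2v"
)
print([p.name for p in installed])
```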

### Upscaling recommendations

For upscaling, it's recommended to use Potat1 via vid2vid in the 1111 extension. Aim for a resolution of 1152x640 and a denoise strength between 0.66 and 0.85. Remember to use the same prompt and settings that were used to generate the original clip.
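
As a worked illustration of these recommendations, the hypothetical helper below bundles an upscale pass's settings: it carries the base clip's prompt and seed over unchanged, targets 1152x640, and rejects denoise strengths outside the recommended 0.66-0.85 range. The key names (e.g. `denoising_strength`) are assumptions for the sketch, not the extension's actual API:

```python
def upscale_settings(base: dict, denoise: float = 0.75) -> dict:
    """Derive vid2vid upscale settings from the base 448x256 render's settings."""
    if not 0.66 <= denoise <= 0.85:
        raise ValueError("recommended denoise strength is 0.66-0.85")
    return {
        **base,                      # reuse the prompt, seed, etc. from the base clip
        "width": 1152,               # recommended upscale resolution
        "height": 640,
        "denoising_strength": denoise,
        "model": "Potat1",           # the upscale pass runs through Potat1 via vid2vid
    }

base = {"prompt": "a red fox running through snow", "seed": 42,
        "width": 448, "height": 256}
print(upscale_settings(base))
```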

### Known issues

Lower resolutions or fewer frames could lead to suboptimal output. <br />
Certain clips might appear with cuts. This will be fixed in the upcoming 2.1 version, which will incorporate a cleaner dataset. <br />
Some clips may play back too slowly, requiring prompt engineering for an increased pace.
|
29 |
+
|
30 |
+
|