Replacing files in 'stable-diffusion-webui\models\ModelScope\t2v' directory.

#2
by tarunabh - opened

Hi,
After downloading the files in the zs2_576w folder and replacing the respective files in the 'stable-diffusion-webui\models\ModelScope\t2v' directory (as per the instructions), I found a similar set of files in the 'zeroscope_v2 XL' version.
My question is: should I replace the files downloaded from the zs2_576w folder with the identically named files available in the zs2_XL folder?
The instructions for both specify replacing the original files in the same target directory, i.e. 'stable-diffusion-webui\models\ModelScope\t2v'.

I'm guessing you are supposed to run the smaller model until you get a good result, then swap in the high-res model files, restart auto1111, and run it through again to get the higher resolution.
It's a bit tedious until they have a model selection tab.

Yeah, this is the suggested workflow at the moment. I'm hoping that more features like a model selection dropdown will be implemented soon.
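Since both downloads ship files with identical names into the same target directory, one way to make the back-and-forth less error-prone is a small copy script. The following is only a sketch: it assumes you keep each weight set in its own folder next to the t2v directory, and the install path, folder names, and script name (switch_model.py) are my own placeholders, not anything from the extension itself.

```python
# Hypothetical helper: keep the 576w and XL weights in separate folders and
# copy the chosen set into the single t2v directory the extension reads from.
import shutil
import sys
from pathlib import Path

WEBUI = Path(r"D:\stable-diffusion-webui")            # placeholder install location
T2V_DIR = WEBUI / "models" / "ModelScope" / "t2v"     # target directory from this thread
MODEL_SETS = {
    "576w": WEBUI / "models" / "ModelScope" / "zs2_576w",  # assumed storage folders
    "xl":   WEBUI / "models" / "ModelScope" / "zs2_XL",    # for the two weight sets
}

def switch_model(which: str) -> None:
    """Overwrite the files in t2v with the chosen model set."""
    src = MODEL_SETS[which]
    T2V_DIR.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, T2V_DIR / f.name)  # same-named file gets replaced
            print(f"copied {f.name}")

if __name__ == "__main__":
    switch_model(sys.argv[1].lower() if len(sys.argv) > 1 else "576w")
```

Usage would be something like `python switch_model.py xl` before re-running the clip through the XL weights; whether you then need to restart the webui is debated a bit further down the thread.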

IDK if it's just me, but the recommended 30 frames at the maximum supported resolution doesn't fit on my 4090 under Win10; I can only do 9 frames...

I also have issues running this - I have an A6000 with 48 GB VRAM, and I cannot run even 16 frames at 1024x576 using the auto1111 approach described on this page.

EDIT to my own comment above: I noticed a drastic improvement when running 1111 with xformers on - I can actually run things now.

Yeah, someone on Reddit also mentioned that you need xformers for it to actually work correctly.
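If you want to verify that xformers is actually being picked up (rather than the webui silently falling back), a quick check from inside the webui's venv looks something like the sketch below. This is a generic snippet, not part of auto1111, and the tensor shapes are arbitrary.

```python
# Sanity check that xformers imports and its memory-efficient attention
# kernel runs on the GPU. Execute with the venv's python
# (e.g. venv\Scripts\python.exe on Windows).
import torch
import xformers
import xformers.ops as xops

print("torch:", torch.__version__, "CUDA:", torch.version.cuda)
print("xformers:", xformers.__version__)

# (batch, seq_len, heads, head_dim) -- arbitrary small shapes
q = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)
out = xops.memory_efficient_attention(q, q, q)  # raises if no usable backend
print("memory_efficient_attention OK, output shape:", tuple(out.shape))
```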

I'm guessing you are supposed to run the smaller one until you get a good result and then swap in the high res version models and restart auto1111 and run it through again to get higher res.
It's a bit tedious until they have a model selection tab.

I've been swapping them without restarting - do you need to restart?

If anyone has any info on how to actually get xformers working on W10 Pro with a 40-series GPU, please leave a comment; I'm kind of in a loop with this error:
Launching Web UI with arguments: --xformers --disable-nan-check --no-half-vae --reinstall-xformers
No module 'xformers'. Proceeding without it.
Cannot import xformers
Traceback (most recent call last):
  File "D:\NasD\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 140, in <module>
    import xformers.ops
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\__init__.py", line 8, in <module>
    from .fmha import (
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 10, in <module>
    from . import cutlass, flash, small_k, triton
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\triton.py", line 15, in <module>
    if TYPE_CHECKING or is_triton_available():
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\__init__.py", line 33, in func_wrapper
    value = func()
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\__init__.py", line 44, in is_triton_available
    from xformers.triton.softmax import softmax as triton_softmax  # noqa
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\triton\__init__.py", line 12, in <module>
    from .dropout import FusedDropoutBias, dropout  # noqa
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\xformers\triton\dropout.py", line 13, in <module>
    import triton
  File "D:\NasD\stable-diffusion-webui\venv\lib\site-packages\triton\__init__.py", line 1, in <module>
    raise RuntimeError("Should never be installed")
RuntimeError: Should never be installed

This is with torch 2.0.1+cu118 and everything from xformers 0.0.17 to the latest dev branch; I'm really confused because I thought this was all sorted...
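For what it's worth, the last frame of that traceback shows that the `triton` package in the venv is just a stub whose `__init__.py` immediately raises, so xformers' triton check dies with a RuntimeError instead of being skipped. A small diagnostic like the one below (a generic sketch, not an auto1111 tool) shows which torch / xformers / triton builds the venv actually resolves, which makes it easier to decide whether removing or replacing that stub is the right move.

```python
# Diagnostic sketch: report the version and on-disk location of the packages
# involved, without importing them (so the stub's RuntimeError is not hit).
# Run with the same venv's python that the webui uses.
import importlib.metadata as md
import importlib.util

for name in ("torch", "xformers", "triton"):
    try:
        version = md.version(name)
    except md.PackageNotFoundError:
        version = "not installed"
    spec = importlib.util.find_spec(name)
    location = spec.origin if spec else "n/a"
    print(f"{name:10s} {version:20s} {location}")
```

On Windows, uninstalling the stub (`pip uninstall triton`) so that the import fails cleanly is a commonly suggested workaround, but treat that as an assumption to verify against your xformers version rather than a guaranteed fix.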

I have not been successful with torch 2. I recommend doing a brand new, clean install of 1111 with torch 1 (should still be the default) and xformers.

Yup, that's what I did - I made a separate venv specifically to hold the XL model.

I just wanted to stop by to state that this still seems to be an issue as of today, Friday the 29th of September. I enjoy working with Automatic and the text-to-video is great, but it seems rather bizarre that there isn't a more streamlined way to switch between models. Constantly using the CLI to swap files into different folders gets quite confusing, not to mention that all the files share the same name, which is all I see in my dropdown. This is on the most recent Automatic pull, an RTX A6000 Ada, and Linux/Ubuntu 22.04. There are also issues with tqdm version compatibility due to the strict criteria outlined in the requirements JSON.
