For the people who run this locally, how long does it take to generate a video, and what GPU are you using?

#18
by Yes69420 - opened

Also, I only have an RTX 3060 with 12 GB of VRAM. Is there any way I can run it?

This demo requires about 16 GB of CPU RAM and 16 GB of GPU RAM.
It takes about 20 seconds to generate a video on an A100 GPU.
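
If you're not sure whether your card clears that bar, here's a quick check with PyTorch (a minimal sketch, assuming a CUDA-enabled build):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # total_memory is reported in bytes; this demo wants roughly 16 GB of VRAM
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device visible to PyTorch")
```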

20 seconds is very short; it shouldn't take too long on an RTX 3090, I suppose.

I'm using a Xiaomi phone lol

locally?

That's a joke

Is there a way to let the user lower the settings so it works on an RTX 2060 Super (6GB)? In theory, I could later upscale the low-res output with an AI video enhancer such as Topaz Video AI. I used to do the same thing to generate images in Stable Diffusion on my low-end video card.

Anyway, even if what I have in mind is possible, it seems I should start planning to buy a better video card, hehehehe...

If you download it locally, can you increase the frames/total length of video?

I am also using an RTX 3060 12GB. At 256x256, a video takes ~23 seconds.

> If you download it locally, can you increase the frames/total length of video?

Yes, but there are several things you'll need to do:

  1. download the pruned weights and config file and replace the ones used here (https://huggingface.co/kabachuha/modelscope-damo-text2video-pruned-weights/tree/main) — see the sketch after this list if you'd rather script this step
  2. adjust the video output size to trade resolution for length
  • if you want more frames, you'll need something smaller than 256x256, say 256x192
  • you'll also need to adjust the fps
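
For step 1, a minimal sketch using huggingface_hub (the repo id is taken from the link above; where the files need to go depends on your install):

```python
from huggingface_hub import snapshot_download

# Fetch the pruned weights + config linked in step 1.
# local_dir is where huggingface_hub cached them; copy those files over
# the originals your text2video setup loads (path depends on your install).
local_dir = snapshot_download("kabachuha/modelscope-damo-text2video-pruned-weights")
print(local_dir)
```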

I installed the ModelScope extension in Automatic1111, replaced the weights and config files with the ones from ^, and was able to get 10 s at 8 fps at 256x128 on a 3060 laptop GPU.
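
Another route, if you'd rather skip the webui entirely: the diffusers port of this model can run in fp16 with CPU offload, which cuts VRAM use considerably. A minimal sketch, assuming the damo-vilab/text-to-video-ms-1.7b checkpoint and recent diffusers/accelerate installs (this is not this demo's exact pipeline):

```python
import torch
from diffusers import DiffusionPipeline

# fp16 weights roughly halve VRAM use versus fp32
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keep only the active sub-module on the GPU
pipe.enable_vae_slicing()        # decode the video frames in slices to save VRAM

result = pipe("a panda playing guitar", num_frames=16)
# result.frames holds the generated frames; the exact container format
# varies across diffusers versions
```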

I keep getting this error:

```
RuntimeError: TextToVideoSynthesisPipeline: TextToVideoSynthesis: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
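
That error means the PyTorch build in your environment can't see a GPU, so the checkpoint can't be deserialized onto CUDA. A quick diagnostic, assuming a standard PyTorch install:

```python
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print(torch.cuda.is_available())  # must print True for this pipeline to load

# If this prints False on a machine with an NVIDIA GPU, reinstall a
# CUDA-enabled wheel, e.g. (cu118 is just an example index, pick yours):
#   pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118
```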
