Using the upscaling method?

#28
by J450NP13 - opened

I have the first video output and now I want to run it through this upscale method. How do I do this? It isn't very clear how it's actually done.

CODE__________

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# Low-resolution base model (576x320 text-to-video)
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "prompt text"
# Generate 24 frames at the base resolution
video_frames = pipe(prompt, num_inference_steps=50, height=320, width=576, num_frames=24).frames
# Export the frames to an mp4 file
video_path = export_to_video(video_frames, "videos/txt2vid.mp4")


Do I add the upscale code into this, or is it a separate step that grabs the mp4?

I am also getting a NameError saying "Image" is not defined.
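
For reference, the upscaling step on the zeroscope_v2_XL model card is a second, video-to-video pass that works on the frames from the first pipeline, not on the exported mp4. Below is a sketch of that pass, assuming `video_frames` and `prompt` from the script above are still in memory; the output filename is just an example, and the "Image" error above is most likely a missing `from PIL import Image`.

CODE__________

import torch
from PIL import Image  # needed for Image.fromarray below
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# Second pass: zeroscope_v2_XL loaded as a video-to-video (upscaling) pipeline
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# Resize the low-res frames from the first pass up to 1024x576
video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

# strength controls how much the XL model re-diffuses the frames (0.6 per the model card)
video_frames = pipe(prompt, video=video, strength=0.6).frames
video_path = export_to_video(video_frames, "videos/txt2vid_xl.mp4")  # example output name

Passing the resized frames in through `video=` with `strength=0.6` means the XL model refines the existing clip rather than generating a new one from scratch.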

Reading other threads about swapping bin files... I guess I'll wait until that's sorted out.

Try using the UI I made for this; it has upscaling, and you can check my code:
https://huggingface.co/cerspense/zeroscope_v2_XL/discussions/26

I will definitely try this, but I am also trying to learn and understand how to set these up. I am assuming "video1" is the lower-res one and "video2" is the higher-res one?

@ADHDev Do I need the model bins for this? Or is it going to download them?

It will download them. Also, if you replace the model name with another model, like Potat1, it will download and use that model.
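
To add a bit of detail on that: `from_pretrained` pulls the weights from the Hugging Face Hub the first time and caches them locally, so there is nothing to download by hand; swapping the repo id is enough to switch models. A minimal sketch (the Potat1 repo id below is an assumption, so check the model page for the exact one):

CODE__________

import torch
from diffusers import DiffusionPipeline

# Downloads and caches the weights from the Hub on first use;
# swap the repo id to use a different text-to-video model.
# "camenduru/potat1" is assumed here -- verify the actual Potat1 repo id.
pipe = DiffusionPipeline.from_pretrained("camenduru/potat1", torch_dtype=torch.float16)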

@ADHDev, I got it working... is there an NSFW filter clamping this?

Not that I'm aware of
