Preserve input details in upscaled video
Is there a way to run the XL model such that the details of the input video are roughly preserved in the upscaled output? For example, this is what I get from zeroscope_v2_576w with the prompt "a single tree on a desert island, with a sunset in the background, in the style of van gogh"
And this is the upscaled version with the same prompt, following the README pipeline example:
I wish there were a way to preserve the animated flowing brushstrokes on the sides, for example.
It's a bit random, but you can try playing around with the denoising strength. The default value of 0.6 is already fairly low, but going lower will preserve more detail; usually this adds flicker, though. Sometimes a higher strength of 0.85 to 0.92 ends up looking better than the original input. ControlNet-like features could definitely help with this upscaling step in the future.
Got it, thank you! Looking briefly at the code, it wasn't clear to me earlier how changing the strength differed from changing the number of timesteps, but I think I see the difference now.
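To make that relationship concrete, here's a small self-contained sketch of the scheduling logic; the function name and the stand-in schedule are illustrative, modeled on the `get_timesteps` helper used by diffusers' image-to-image pipelines, not copied from the actual upscaler code:

```python
# Sketch of how diffusers-style img2img/vid2vid pipelines map `strength`
# onto the sampling schedule. Names here are illustrative assumptions.

def effective_timesteps(num_inference_steps: int, strength: float) -> list[int]:
    """Return the subset of the timestep schedule that is actually run.

    The full schedule has `num_inference_steps` timesteps. With
    strength < 1.0, the input is only partially noised and denoising
    starts partway through the schedule, so roughly
    int(num_inference_steps * strength) steps are executed.
    """
    # A stand-in linear schedule from high noise (999) down toward 0.
    timesteps = [int(999 * (1 - i / num_inference_steps))
                 for i in range(num_inference_steps)]

    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return timesteps[t_start:]

# strength=1.0 runs the full schedule; strength=0.6 runs only the last
# ~60% of it, starting from a less-noised version of the input video,
# which is why lower strength preserves more of the input's detail.
print(len(effective_timesteps(50, 1.0)))  # 50
print(len(effective_timesteps(50, 0.6)))  # 30
```

So strength and the number of inference steps interact: the step count sets the resolution of the schedule, while strength sets how far into it denoising begins (and how much of the input's structure survives).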