Spaces: Running on L40S
Upscale Error: IndexError: tuple index out of range #7
by loretoparisi - opened
Getting an upscale error in the util.upscale function, at this call:

    steps = tensor.shape[0] * get_tiled_scale_steps(
        tensor.shape[3], tensor.shape[2], tile_x=tile, tile_y=tile, overlap=overlap
    )
    IndexError: tuple index out of range
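Not from the thread, but a minimal sketch (plain tuples standing in for `tensor.shape`) of why that call breaks: `util.upscale` indexes `shape[3]` and `shape[2]`, so it assumes a 4-D (frames, C, H, W) tensor, and a 3-D shape has no index 3.

```python
# Plain tuples stand in for tensor.shape here.
shape_4d = (49, 3, 480, 720)   # 49 latent frames: shape[3] and shape[2] exist
shape_3d = (3, 480, 720)       # a single (C, H, W) frame: no index 3

print(shape_4d[3], shape_4d[2])  # 720 480

try:
    shape_3d[3]
except IndexError as e:
    print(e)  # tuple index out of range
```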
The number of latent frames (latents.shape[0]) was 49, while the log showed:
Downloading shards: 100%|██████████| 2/2 [00:00<00:00, 10292.77it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 7.44it/s]
Fetching 2 files: 100%|██████████| 2/2 [00:00<00:00, 14899.84it/s]
Fetching 2 files: 100%|██████████| 2/2 [00:00<00:00, 45.23it/s]
Loading pipeline components...: 100%|██████████| 5/5 [00:00<00:00, 31.95it/s]
100%|██████████| 2/2 [00:36<00:00, 18.35s/it]
UPScaleMemory required: 2.2830336149781942 GB
...
upscaled_latent = upscale(upscale_model, latent, inf_device, output_device)
    tensor.shape[3], tensor.shape[2], tile_x=tile, tile_y=tile, overlap=overlap
IndexError: tuple index out of range
In case it helps: I also tried disabling the upscaling and running only the interpolation, but I get a shape error there too, ValueError: not enough values to unpack (expected 4, got 3):
...
samples dtype:torch.bfloat16
samples shape:torch.Size([3, 480, 720])
...
latents = rife_inference_with_latents(frame_interpolation_model, latents)
line 146, in rife_inference_with_latents
    frames = ssim_interpolation_rife(model, latent)
in decorate_context
    return func(*args, **kwargs)
in ssim_interpolation_rife
    _, _, h, w = frame.shape
ValueError: not enough values to unpack (expected 4, got 3)
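The interpolation path fails for the same reason: ssim_interpolation_rife unpacks four values from frame.shape, and the logged samples shape torch.Size([3, 480, 720]) only has three. A quick illustration with a plain tuple standing in for the shape:

```python
# The logged samples shape was (3, 480, 720): three values, not four.
frame_shape = (3, 480, 720)

try:
    _, _, h, w = frame_shape
except ValueError as e:
    print(e)  # not enough values to unpack (expected 4, got 3)

# With the missing frames dimension present, the unpack succeeds:
_, _, h, w = (49, 3, 480, 720)
print(h, w)  # 480 720
```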
The cause was not taking the frames from the pipe output! This fixes it:
seed = 42  # the magic number
video = pipe(
    prompt=prompt,
    num_videos_per_prompt=num_videos_per_prompt,
    num_inference_steps=num_inference_steps,
    num_frames=num_frames,
    use_dynamic_cfg=True,  # used for the DPM scheduler; for the DDIM scheduler it should be False
    guidance_scale=guidance_scale,
    output_type=output_type,
    generator=torch.Generator(device=device).manual_seed(seed),
).frames  # <--- this!
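A cheap way to catch this class of bug early (a sketch, with a hypothetical helper name) is to assert the 4-D (frames, C, H, W) layout right after the pipeline call, before handing the latents to upscaling or interpolation:

```python
def check_latents_rank(shape):
    """Hypothetical guard: both upscale and RIFE interpolation expect a
    4-D (frames, C, H, W) layout; fail fast with a clear message otherwise."""
    if len(shape) != 4:
        raise ValueError(
            f"expected 4-D latents (frames, C, H, W), got {len(shape)}-D: {shape}"
        )

check_latents_rank((49, 3, 480, 720))  # ok: the post-fix .frames layout
# check_latents_rank((3, 480, 720))    # would raise: the pre-fix shape
```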
loretoparisi changed discussion status to closed