Image-to-Video

Out-of-memory error

#5
by eduardo-baena - opened

I tried to run the script "python scripts/sampling/simple_video_sample.py --input_path <path/to/image.png> --version sv3d_u" and got the error below. Could anybody help? My GPU is an RTX 3090, and my machine only has 16 GB of RAM. Thanks.

python scripts/sampling/simple_video_sample.py --input_path checkpoints/frog.jpg --version sv3d_u
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
VideoTransformerBlock is using checkpointing
Initialized embedder #0: FrozenOpenCLIPImagePredictionEmbedder with 683800065 params. Trainable: False
Initialized embedder #1: VideoPredictionEmbedderWithEncoder with 83653863 params. Trainable: False
Initialized embedder #2: ConcatTimestepEmbedderND with 0 params. Trainable: False
Restored from checkpoints/sv3d_u.safetensors with 0 missing and 0 unexpected keys
/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Traceback (most recent call last):
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/scripts/sampling/simple_video_sample.py", line 350, in
Fire(sample)
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/scripts/sampling/simple_video_sample.py", line 254, in sample
samples_x = model.decode_first_stage(samples_z)
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/sgm/models/diffusion.py", line 130, in decode_first_stage
out = self.first_stage_model.decode(
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/sgm/models/autoencoder.py", line 211, in decode
x = self.decoder(z, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/sgm/modules/diffusionmodules/model.py", line 733, in forward
h = self.up[i_level].block[i_block](h, temb, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/stability_ai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/sgm/modules/diffusionmodules/model.py", line 134, in forward
h = nonlinearity(h)
File "/mnt/ntfs/miniconda3/envs/stability_ai/generative-models/sgm/modules/diffusionmodules/model.py", line 49, in nonlinearity
return x * torch.sigmoid(x)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.21 GiB (GPU 0; 23.68 GiB total capacity; 19.29 GiB already allocated; 1.72 GiB free; 21.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I was checking to see if my 8 GB GPU could run this, lol. Definitely not if 24 GB can't do it.

Maybe I've installed the wrong packages, I'm not sure...

It seems that 24 GB is needed for Stable Zero123, so I assume this model's requirements are the same or higher: "Using Stable Zero123 to generate 3D objects requires more time and memory (24GB VRAM recommended)" - from a blog post.

ok, I will take a look, thanks...

I could get some results with Stable Zero123 using a ComfyUI custom node. Maybe this one requires a more powerful GPU.

In simple_video_sample.py, reduce the number of frames decoded simultaneously by lowering this default:

decoding_t: int = 3
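For context, a minimal sketch of what lowering decoding_t does: the latents go through the VAE decoder a few frames at a time instead of all at once, which caps peak VRAM during the decode step. The decode_in_chunks name and decode_fn argument here are hypothetical, standing in for what model.decode_first_stage does internally:

import torch

def decode_in_chunks(decode_fn, samples_z: torch.Tensor, decoding_t: int = 3) -> torch.Tensor:
    # Decode only `decoding_t` latent frames per pass, so a small slice of the
    # batch occupies the decoder's activation memory at any one time.
    frames = []
    with torch.no_grad():
        for start in range(0, samples_z.shape[0], decoding_t):
            frames.append(decode_fn(samples_z[start : start + decoding_t]))
    return torch.cat(frames, dim=0)

If it still runs out of memory after that, the error message above also suggests setting PYTORCH_CUDA_ALLOC_CONF (e.g. max_split_size_mb:512) in the environment to reduce fragmentation.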

ok, I will try. thanks.

I changed the script as you suggested and ran it again. This time it didn't hang at all, but I didn't get any good results. All I got was an unplayable mp4 video file and the source picture with its background removed. Any thoughts? Thanks for your help!

You will likely want to change this section:

vid = (
(rearrange(samples, "t c h w -> t h w c") * 255)
.cpu()
.numpy()
.astype(np.uint8)
)
video_path = os.path.join(output_folder, f"{base_count:06d}.mp4")
imageio.mimwrite(video_path, vid)

to export a video format your system can play, or to export an image sequence.
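Here is a minimal sketch of both options, reusing samples, output_folder, and base_count from the snippet above; the fps=10 and codec="libx264" values are my assumptions, not necessarily what the script ships with:

import os
import imageio
import numpy as np
from einops import rearrange

# Same conversion as in the script: (t, c, h, w) floats in [0, 1] -> (t, h, w, c) uint8.
vid = (rearrange(samples, "t c h w -> t h w c") * 255).cpu().numpy().astype(np.uint8)

# Option 1: write an H.264 mp4 with an explicit frame rate, which most players handle.
video_path = os.path.join(output_folder, f"{base_count:06d}.mp4")
imageio.mimwrite(video_path, vid, fps=10, codec="libx264")

# Option 2: dump a PNG sequence and sidestep codec issues entirely.
for i, frame in enumerate(vid):
    imageio.imwrite(os.path.join(output_folder, f"{base_count:06d}_{i:03d}.png"), frame)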

Keep in mind this model only generates 2D images. It does not seem like the code for turning it into a 3D representation has been provided.

ok, I will study the code, make some changes and see what happens. Again, thanks for your help!

eduardo-baena changed discussion status to closed

Hi,

I am still getting the out-of-memory error despite editing decoding_t: int = 3

I didn't figure out a way to make this work and ended up deleting the virtual environment. But I could test it using a ComfyUI node and it works fine. If you are interested, here is the link: https://openart.ai/workflows/civet_plush_52/sv3d-workflow/cjnSCqbtwuJWR2StGacF
