Support for fp16?

#2
by BigDeeper - opened

My GPUs do not handle bfloat16, but the LTX nodes seem to be hardcoded to bfloat16 rather than float16.
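(For reference, a quick way to confirm whether a card can run bfloat16 natively, using stock PyTorch calls; Ampere / compute capability 8.0 is the usual cutoff:)

```python
import torch

# torch.cuda.is_bf16_supported() reports whether the current CUDA device
# can run bfloat16 kernels natively.
print(torch.cuda.get_device_name(0))
print("bf16 supported:", torch.cuda.is_bf16_supported())
print("compute capability:", torch.cuda.get_device_capability(0))
```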

The script either complains when I change the dtype to float16, or I get an OOM on the Sampler or VAE nodes.

I forced torch.bfloat16 = torch.float16, which got me past the sampler, but I still got an OOM on the VAE Decode node.
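Roughly, that hack is just aliasing the dtype before the LTX nodes load their models (a fragile workaround; exactly where it has to run is an assumption here):

```python
import torch

# Alias the dtype so any code that asks for torch.bfloat16 gets float16.
# This must execute before the LTX nodes create/load their models, e.g.
# very early in ComfyUI startup or in a custom node that imports first.
torch.bfloat16 = torch.float16

# Cleaner equivalent, if you can reach the loaded module directly
# (`model` is a hypothetical reference to the diffusion model):
# model = model.to(dtype=torch.float16)
```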

The GPU has 12.2 GiB of VRAM; even with --lowvram it still blows up.

Need fp16.


What kind of GPU do you have? You can try tiled VAE and experiment with batch VAE; I think it'll work!
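Outside of ComfyUI, the same two tricks are easy to test with diffusers; a minimal sketch, assuming a VAE that exposes the standard AutoencoderKL interface (the checkpoint path and latent shape are placeholders):

```python
import torch
from diffusers import AutoencoderKL

# Placeholder checkpoint path -- substitute the VAE you actually use.
vae = AutoencoderKL.from_pretrained("path/to/vae", torch_dtype=torch.float16).to("cuda")

vae.enable_tiling()    # decode in overlapping spatial tiles to cap peak VRAM
vae.enable_slicing()   # decode the batch one sample at a time

# Dummy latent just to make the sketch self-contained.
latents = torch.randn(1, 4, 64, 64, dtype=torch.float16, device="cuda")

with torch.no_grad():
    image = vae.decode(latents).sample
```

Inside ComfyUI, the equivalent is swapping the plain VAE Decode node for a tiled VAE decode node and lowering the tile size (and, for video, how many frames are decoded per batch) until the decode fits in 12 GiB.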
