Out-of-memory error when running the example inference code with this model

#6
by yilmazay - opened

Hello,
I really appreciate your efforts in developing a Turkish model.
As feedback: the example code on the model card did not work in my environment.
I have an NVIDIA RTX 3090 with 24 GB of GPU memory.
It throws an OOM error on the line `model.to(device)`.
It only works if I set the device type to `cpu`, but in that case it becomes too slow.
I would appreciate it if anyone here could recommend a way to run this model with CUDA.
Thanks in advance.
Y.A.
Note: this is the error message I get:

return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 224.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 128.69 MiB is free. Including non-PyTorch memory, this process has 23.55 GiB memory in use. Of the allocated memory 23.30 GiB is allocated by PyTorch, and 1.17 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
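
The usual workaround for this kind of OOM is to load the weights in half precision instead of the default float32, which roughly halves the memory footprint, and to let `device_map="auto"` (from the `accelerate` package) place the weights directly on the GPU, spilling any remainder to CPU RAM rather than crashing. A minimal sketch, assuming a standard Transformers causal-LM checkpoint; the model ID and prompt below are placeholders, not values from this model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder -- substitute the actual checkpoint name from the model card.
model_id = "org/turkish-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# float16 halves memory versus the default float32; device_map="auto"
# places the weights on the GPU (overflowing to CPU RAM if needed),
# so there is no separate model.to(device) call that can OOM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Merhaba, nasılsın?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If float16 still does not fit, 8-bit quantization via the `bitsandbytes` integration (passed to `from_pretrained` either as `load_in_8bit=True` or through a `BitsAndBytesConfig`, depending on the transformers version) roughly halves the footprint again, at some cost in speed.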
