Only use one GPU?

#2 opened by jgbrblmd

$ env | grep VIS
CUDA_VISIBLE_DEVICES=1,2

$ python hqq_test.py
hqq_aten package not installed. HQQBackend.ATEN backend will not work unless you install the hqq_aten lib in hqq/kernels.
Failed to load the weights CUDA out of memory. Tried to allocate 28.00 MiB. GPU 0 has a total capacty of 21.67 GiB of which 22.75 MiB is free. Including non-PyTorch memory, this process has 21.64 GiB memory in use. Of the allocated memory 21.40 GiB is allocated by PyTorch, and 92.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "/nvme/opt/LLM/hqq_test.py", line 7, in <module>
    model = HQQModelForCausalLM.from_quantized(model_id)
  File "/nvme/opt/venv/hqq/lib/python3.10/site-packages/hqq/engine/base.py", line 71, in from_quantized
    cls._make_quantizable(model, quantized=True)
  File "/nvme/opt/venv/hqq/lib/python3.10/site-packages/hqq/engine/hf.py", line 29, in _make_quantizable
    model.hqq_quantized = quantized
AttributeError: 'NoneType' object has no attribute 'hqq_quantized'
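
For reference, a minimal sketch of what hqq_test.py around line 7 presumably looks like, reconstructed from the traceback; the model_id value is a hypothetical placeholder, not taken from the thread:

```python
# Minimal reconstruction of the failing script (hqq_test.py), based on the
# traceback above. The model_id below is a hypothetical placeholder.
from hqq.engine.hf import HQQModelForCausalLM

model_id = "some-org/some-hqq-quantized-model"  # hypothetical placeholder

# Line 7 in the traceback: loads a pre-quantized HQQ model onto a single GPU.
model = HQQModelForCausalLM.from_quantized(model_id)
```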

Mobius Labs GmbH org

Indeed, only a single GPU is supported for now. Do you want to load half of the model on gpu1 and the other half on gpu2?
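
For context, this kind of layer sharding is what device_map does for plain (unquantized) transformers models in the wider Hugging Face ecosystem; it is not hqq API, and from_quantized had no equivalent at the time of this thread. A sketch, with a hypothetical model name:

```python
# A sketch of the multi-GPU sharding being asked about, as done for plain
# (unquantized) transformers models via accelerate's device_map. This is NOT
# hqq API; hqq's from_quantized was single-GPU at the time of this thread.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",  # hypothetical placeholder
    device_map="auto",      # split layers across all visible GPUs (needs accelerate)
    torch_dtype="auto",
)
```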

Yes, my GPUs have 22 GB of VRAM each.
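
As a quick sanity check on that, the free and total VRAM of each visible GPU can be read with torch.cuda.mem_get_info; a minimal sketch:

```python
# Quick check of free/total VRAM on each visible GPU (here physical GPUs 1 and
# 2, remapped to cuda:0 and cuda:1 by CUDA_VISIBLE_DEVICES=1,2).
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)  # returned in bytes
    print(f"cuda:{i}: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB")
```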

Thank you.
