Getting an error TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'

#49
by NajiAboo - opened

I'm trying to fine-tune the model on my GPU machine (Tesla V100), but I get the error below. The same code works fine in Colab.

modelling_RW.py", line 93, in forward
return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
~~^~~~~
TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'

any help on this is highly appreciated

Getting the same error

having the same error, any help

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

It's working now, thanks. I had to set device_map="auto" as shown above.

I tried changing the device_map, but now I'm getting this error:

ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. Make sure you loaded the model on the correct device using for example device_map={'':torch.cuda.current_device()} or device_map={'':torch.xpu.current_device()}

Any suggestions, please?
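For what it's worth, the ValueError itself hints at the fix: with 8-bit loading, the whole model should be pinned to the single device you train on, rather than sharded by device_map="auto". A minimal sketch of building such a device_map (the helper name is hypothetical; pass the result to AutoModelForCausalLM.from_pretrained alongside your bnb_config):

```python
def single_device_map(device_index):
    """Build a device_map that pins the entire model (the '' root module)
    to one device, as the ValueError suggests for 8-bit training."""
    return {"": device_index}

# Usage sketch (assumes torch and a CUDA machine):
#   import torch
#   model = AutoModelForCausalLM.from_pretrained(
#       model_name,
#       quantization_config=bnb_config,
#       device_map=single_device_map(torch.cuda.current_device()),
#       trust_remote_code=True,
#   )
print(single_device_map(0))  # {'': 0}
```

The empty-string key means "the root module", so every submodule lands on that one device.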

Running into a similar issue. Any update on this?

Same error on my end. Resolved for the time being by not passing the bitsandbytes quantization config (bitsandbytes version 0.40.0).
