ValueError when using multiple GPUs for inference

#10
by aladinggit - opened

Hi, I encountered a problem when using multiple GPUs for inference.

Error message: ValueError: The device_map provided does not give any device for the following parameters: model.normalizer

CUDA_VISIBLE_DEVICES=2,3 python try.py  # if I set CUDA_VISIBLE_DEVICES to just one device, it works fine

The code in try.py is exactly the snippet provided in the usage guide:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b-it", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Google org

Hey! Sorry about that, this will fix it: https://github.com/huggingface/transformers/pull/30273/files
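
Until that fix lands in a release, a possible workaround is to build the device map yourself and assign the missing parameter explicitly. This is only a rough sketch, not tested against this model: it assumes that accelerate's infer_auto_device_map misses the model.normalizer entry (the name comes from the error message) and that pinning it to GPU 0 is acceptable.

from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "google/recurrentgemma-2b-it"

# Build the device map on an empty (meta) model so no weights are loaded yet
config = AutoConfig.from_pretrained(model_id)
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)
device_map = infer_auto_device_map(empty_model)

# Explicitly place the parameter the auto map leaves out (name taken from the error)
device_map["model.normalizer"] = 0

model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)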

Google org

Closing as fixed! Make sure to use the transformers main branch.
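
Since the fix was merged but may not yet be in a PyPI release, installing directly from the main branch should pick it up, for example:

pip install git+https://github.com/huggingface/transformers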

ybelkada changed discussion status to closed
