ValueError: The device_map provided does not give any device for the following parameters: model.normalizer

#8 opened by LaferriereJC

Trying to load this model in text-generation-webui with the latest transformers.

Hi @LaferriereJC
Thanks for the issue! Can you share a reproducible snippet?

Installed the latest transformers and attempted to run the boilerplate code on Python 3.10.9 on Oracle Linux 8.3.

I did test with the boilerplate code to make sure it wasn't just ooba (text-generation-webui).

Hi, I encountered exactly the same problem.

CUDA_VISIBLE_DEVICES=2,3 python try.py  # if I set CUDA_VISIBLE_DEVICES to just one device, it works fine

The code is exactly the snippet provided.
try.py

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b-it", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Successfully installed transformers-4.40.0.dev0
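
To be explicit, restricting the script to a single device avoids the error entirely:

CUDA_VISIBLE_DEVICES=2 python try.py  # loads and generates without the ValueError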

(textgen) [root@pve0 data]# cat recurrentgemma.py
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("/data/text-generation-webui/models/recurrentgemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("/data/text-generation-webui/models/recurrentgemma-2b-it", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

(textgen) [root@pve0 data]# python recurrentgemma.py
Loading checkpoint shards: 100%|███████████████████████████████████████| 2/2 [00:12<00:00, 6.25s/it]
Some weights of RecurrentGemmaForCausalLM were not initialized from the model checkpoint at /data/text-generation-webui/models/recurrentgemma-2b-it and are newly initialized: ['model.normalizer']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "/data/recurrentgemma.py", line 4, in <module>
    model = AutoModelForCausalLM.from_pretrained("/data/text-generation-webui/models/recurrentgemma-2b-it", device_map="auto")
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3735, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/big_modeling.py", line 349, in dispatch_model
    check_device_map(model, device_map)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1296, in check_device_map
    raise ValueError(
ValueError: The device_map provided does not give any device for the following parameters: model.normalizer
(textgen) [root@pve0 data]#
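
The warning right before the traceback hints at the cause: model.normalizer is reported as newly initialized rather than loaded from the checkpoint, so the auto-inferred device map presumably never assigns it a device. Until there is a proper fix, one workaround (a minimal sketch, assuming a single GPU is acceptable) is to skip auto-dispatch and pin the whole model to one device:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b-it")
# {"": 0} maps every module to device 0, sidestepping the auto map
# that leaves model.normalizer without a device
model = AutoModelForCausalLM.from_pretrained(
    "google/recurrentgemma-2b-it",
    device_map={"": 0},
)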

I also have the same issue.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b-it", 
                                             device_map="auto",
                                            #  max_memory = {
                                            #      1:"10000MB",
                                            #      2:"10000MB",
                                            #      3:"15000MB"
                                            #  }
                                             )

$nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

$pip list | grep transformers

transformers              4.40.0.dev0
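
If multi-GPU sharding is needed, another possible workaround (a sketch, untested; the right device for the missing key depends on your hardware and memory headroom) is to compute the device map yourself and assign model.normalizer explicitly before loading:

from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("google/recurrentgemma-2b-it")
with init_empty_weights():
    # build a meta-device copy just to derive a device map
    empty_model = AutoModelForCausalLM.from_config(config)

device_map = infer_auto_device_map(empty_model)
# give the parameter the auto map misses an explicit home
# (device 0 here is an assumption; pick any device with room)
device_map["model.normalizer"] = 0

model = AutoModelForCausalLM.from_pretrained(
    "google/recurrentgemma-2b-it", device_map=device_map
)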
Google org

We just merged the fix.
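
Until it ships in a release, installing transformers from source should pick it up:

pip install --upgrade git+https://github.com/huggingface/transformers.git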

The repo still says it was last updated 9 days ago.

Google org

Just ran this with 4 devices; make sure you are on the 4.40 release.

[screenshot of the successful multi-device run]
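
To double-check which version is actually in your environment:

python -c "import transformers; print(transformers.__version__)"  # should print 4.40.0 or newer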
