CPU bound when loaded on GPU?

#6
by RecViking

What would cause this model to end up CPU bound while running inference? It's loaded onto the GPU but seems to be doing some portion of the inference on the CPU. I have the same issue whether I load it with AutoModelForCausalLM.from_pretrained or with pipeline. Inference is SUPER slow, and GPU utilization won't go much above 30%.
[screenshot, 2023-05-15: GPU utilization stuck around 30% during inference]
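In case it helps with debugging, here's a minimal profiler sketch (the model and inputs names are placeholders for whatever you've already loaded onto the GPU) to confirm where the time actually goes:

import torch
from torch.profiler import profile, ProfilerActivity

# `model` and `inputs` are placeholders for an already-loaded GPU model
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    model.generate(**inputs, max_new_tokens=32)

# if most of the wall time lands in CPU ops, generation really is CPU bound
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))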

I've snipped the relevant code (minus imports) in case I'm doing something wrong when loading these.

pipe workflow:
path = self.settings['model_string']
pipe = pipeline("text-generation", model=path, torch_dtype=torch.bfloat16, device=0)
self.pipe = pipe
return self.pipe(inputs, **parameters)
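A quick sanity check (my own addition, not part of the snippet above) to confirm the pipeline actually landed on the GPU:

print(pipe.device)        # should report cuda:0 given device=0 above
print(pipe.model.device)  # the underlying model should report the same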

AutoModel workflow:
path = self.settings['model_string']
self.tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, return_dict=True, load_in_8bit=True, device_map=self.device, torch_dtype=torch.float16)
self.model = model
inputs = self.tokenizer(inputs, return_tensors="pt").to("cuda")
outputs = self.model.generate(**inputs, **parameters)
return self.tokenizer.decode(outputs[0], skip_special_tokens=False)
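One thing worth checking after loading with a device_map: accelerate can silently place some layers on CPU or disk, and those layers will then run off-GPU. A minimal check, assuming the model loaded above:

# hf_device_map is set by accelerate when a device_map is used; entries
# mapped to "cpu" or "disk" mean those modules execute off the GPU
print(getattr(self.model, "hf_device_map", None))
print(next(self.model.parameters()).device)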

I have the same issue with other LLMs too; I suspect it's coming from the bitsandbytes library used for 8-bit loading:
https://github.com/TimDettmers/bitsandbytes/issues/388
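To isolate whether bitsandbytes is the culprit, a rough timing comparison between the 8-bit and fp16 loads of the same model might help (just a sketch; the prompt and token counts are arbitrary):

import time
import torch

def time_generate(model, inputs, n_tokens=64):
    torch.cuda.synchronize()  # flush pending GPU work before timing
    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=n_tokens, do_sample=False)
    torch.cuda.synchronize()  # wait for generation to finish on the GPU
    return time.perf_counter() - start

# run the same prompt through both loads; if the 8-bit model is much slower
# per token, the overhead is likely in the bitsandbytes kernels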

It runs CPU bound regardless of the precision you run it in. (Here's fp16.)
[screenshot: fp16 run, still CPU bound]
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")  # also tried: path
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/starchat-alpha",
    return_dict=True,
    # load_in_8bit=True,  # disabled for this fp16 run
    device_map="auto",  # also tried {"": 2}
    torch_dtype=torch.float16,
    trust_remote_code=True,
    local_files_only=True,
)
model.resize_token_embeddings(len(tokenizer))
# model = PeftModel.from_pretrained(model, path)
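For reference, a quick tokens-per-second measurement on that fp16 load (the prompt here is just a placeholder):

import time

enc = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
torch.cuda.synchronize()
t0 = time.perf_counter()
out = model.generate(**enc, max_new_tokens=128, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - t0
new_tokens = out.shape[-1] - enc["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")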

Still running badly. (Here's the default dtype; from_pretrained loads fp32 unless torch_dtype is set.)

[screenshot: default-dtype run, still CPU bound]

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")  # also tried: path
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/starchat-alpha",
    return_dict=True,
    # load_in_8bit=True,
    device_map="auto",  # also tried {"": 2}
    # torch_dtype=torch.float16,  # omitted here, so it loads in the default dtype
    trust_remote_code=True,
    local_files_only=True,
)
model.resize_token_embeddings(len(tokenizer))
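To see what actually got loaded when torch_dtype is omitted:

print(model.dtype)                   # transformers defaults to torch.float32
print(model.get_memory_footprint())  # bytes; fp32 is roughly 2x the fp16 size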
