
Failing to 4-bit quantize with BitsAndBytes

#16 by simsim314 - opened

My goal is to run a 4-bit quant of the model.

The usual provided code runs fine. It takes forever to generate a single token on CPU, but it completes without errors:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
# Full-precision (bfloat16) load on CPU: slow, but it works.
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True)

input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(**input_ids, max_new_tokens=2)
print(tokenizer.decode(outputs[0]))

This code, with a BitsAndBytesConfig using nf4, throws an error:

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Same CPU load as above, but with the 4-bit NF4 quantization config.
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="cpu", quantization_config=quantization_config, trust_remote_code=True)

input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))

Error:

RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
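
For what it's worth, PyTorch's CPU LayerNorm kernel has no float16 ("Half") implementation, so some module has probably been loaded in half precision. A quick diagnostic sketch (plain PyTorch introspection, reusing the model loaded above, nothing DBRX-specific) to see which parameters ended up as float16:

import torch

# List every parameter that loaded as float16; the CPU LayerNorm kernel
# has no Half implementation, so these are the likely culprits.
for name, param in model.named_parameters():
    if param.dtype == torch.float16:
        print(name, tuple(param.shape))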

I am guessing it's some new layer implemented in the custom modeling code. While the llama.cpp folks are wrestling with their own bugs around the 16 experts, why not add support for basic 4-bit quantization via BitsAndBytes? I am guessing it's much less work, but I don't know how to do it myself; it's probably something in the custom code provided by Databricks.
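
A possible workaround, which I have not verified on DBRX: as far as I know the bitsandbytes 4-bit kernels only run on GPU anyway, so assuming a CUDA device is available, load with device_map="auto" and pass torch_dtype=torch.bfloat16 alongside the quantization config, so the non-quantized modules (like the norm layers) don't default to float16:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" places the quantized weights on the GPU, where the
# bitsandbytes kernels actually run; torch_dtype keeps the non-quantized
# modules (e.g. LayerNorm) in bfloat16 instead of the float16 default.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
    trust_remote_code=True,
)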

Look at this discussion here: #10

srowen changed discussion status to closed
