Unable to load GPTQ model, despite utilizing autoGPTQ

#11 · opened by DarvinDelray

Hello,

I was attempting to load the model on Colab, but I ran into a problem. After a little searching, I found another way of loading the GPTQ model, shown below.

[Screenshot: issue6-8-2023-3.png]

However, I still end up with a similar problem:
[Screenshot: issue6-8-2023-3b.png]

I'm nearly at a loss as to what to do, as I've also attempted another method that was mentioned:

from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse

parser = argparse.ArgumentParser(description='Simple AutoGPTQ example')
parser.add_argument('model_name_or_path', type=str, help='Model folder or repo')
parser.add_argument('--model_basename', type=str, help='Model file basename if model is not named gptq_model-Xb-Ygr')
parser.add_argument('--use_slow', action="store_true", help='Use slow tokenizer')
parser.add_argument('--use_safetensors', action="store_true", help='Load the model from a .safetensors file')
parser.add_argument('--use_triton', action="store_true", help='Use Triton for inference?')
parser.add_argument('--bits', type=int, default=4, help='Specify GPTQ bits. Only needed if no quantize_config.json is provided')
parser.add_argument('--group_size', type=int, default=128, help='Specify GPTQ group_size. Only needed if no quantize_config.json is provided')
parser.add_argument('--desc_act', action="store_true", help='Specify GPTQ desc_act. Only needed if no quantize_config.json is provided')

args = parser.parse_args()

quantized_model_dir = args.model_name_or_path

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=not args.use_slow)

try:
    quantize_config = BaseQuantizeConfig.from_pretrained(quantized_model_dir)
except:
    quantize_config = BaseQuantizeConfig(
        bits=args.bits,
        group_size=args.group_size,
        desc_act=args.desc_act
    )

model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir,
                                           use_safetensors=True,
                                           model_basename=args.model_basename,
                                           device="cuda:0",
                                           use_triton=args.use_triton,
                                           quantize_config=quantize_config)

logging.set_verbosity(logging.CRITICAL)

prompt = "Tell me about AI"
prompt_template=f'''
Human: {prompt}
Assistant: .'''

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

and end up with this:

[Screenshot: issue6-8-2023-3c.png]

Hi!

Try this:

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name = "TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantize_config = BaseQuantizeConfig.from_pretrained(model_name)
model = AutoGPTQForCausalLM.from_quantized(model_name,
                                           use_safetensors=True,
                                           model_basename="Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g.compat.no-act-order",
                                           device="cuda:0",
                                           use_triton=False, # True or False
                                           quantize_config=quantize_config)

You should then be able to use the model like this:

prompt = "Ask Something here"
prompt_model = f'''### Human: {prompt}
### Assistant:'''
input_ids = tokenizer(prompt_model, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)

print("*** Output ***")
print(tokenizer.decode(output[0]))
print("**************")

Hey, I am getting an error: NameError: name 'autogptq_cuda_256' is not defined

Any idea what is happening?
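
For what it's worth, that NameError usually shows up when the compiled auto-gptq CUDA extension is missing (for example a CPU-only install or a CUDA/PyTorch version mismatch), since the kernels ship as an extension module named like the one in the error. A minimal check, assuming the module name matches the error message; if the import fails, reinstalling auto-gptq with a wheel built for your CUDA/PyTorch version (or building it from source) should help:

# Check whether the compiled auto-gptq CUDA kernels are importable.
# If this fails, reinstall auto-gptq with a CUDA-enabled wheel or build it from source.
try:
    import autogptq_cuda_256
    print("auto-gptq CUDA kernels found")
except ImportError:
    print("auto-gptq CUDA kernels missing")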
