Error when loading with text gen webui

#2
by freakontrol - opened

Hi, first of all thank you for your work, it really helps people experimenting with LLMs...
I get an error when trying to run this model in text-generation-webui:

Traceback (most recent call last):
File "/app/server.py", line 70, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name)
File "/app/modules/models.py", line 94, in load_model
output = load_func(model_name)
File "/app/modules/models.py", line 296, in AutoGPTQ_loader
return modules.AutoGPTQ_loader.load_quantized(model_name)
File "/app/modules/AutoGPTQ_loader.py", line 60, in load_quantized
model.embed_tokens = model.model.model.embed_tokens
File "/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTBigCodeForCausalLM' object has no attribute 'model'

Other gptq models load fine.
Am I missing something? Thank you again

Never mind, I think it is related to this issue in the oobabooga webui.

Yes it is - update to the latest text-generation-webui and it should work now.
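For anyone on an older checkout: the failing line in modules/AutoGPTQ_loader.py assumes a LLaMA-style module tree (model.model.model.embed_tokens), which GPTBigCodeForCausalLM doesn't have. A minimal sketch of the kind of guard that avoids the crash (an illustration only, not necessarily the actual upstream patch):

# Sketch only: guard the LLaMA-specific attribute access so architectures
# like GPTBigCode don't crash the loader.
try:
    model.embed_tokens = model.model.model.embed_tokens
except AttributeError:
    pass  # this architecture doesn't nest embeddings under model.model.model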

It works, but I had to set wbits to 4 and groupsize to 128.

Now I'm getting errors. What settings should I use? Nothing was set by default, as described in the instructions.
RuntimeError: expected scalar type Float but found Half
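For context, that error is a float32/float16 mismatch inside a single op. A minimal repro outside the webui (illustrative only; the exact message wording varies by PyTorch version):

import torch

layer = torch.nn.Linear(4, 4).half()  # float16 weights, as a half/quantized model would have
x = torch.randn(1, 4)                 # float32 input
layer(x)  # raises a dtype-mismatch RuntimeError, e.g. "expected scalar type Float but found Half"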

Found my problem: gptq-for-llama was checked for some reason. Unchecked that and everything works now. Be sure to set the Instruction Template on the Chat tab to "Alpaca", and on the Parameters tab, set temperature to 1 and top_p to 0.95.
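If you're scripting outside the webui, those sampling settings map to the standard transformers generate() arguments (a sketch, assuming model and input_ids are already prepared from an Alpaca-formatted prompt):

output_ids = model.generate(
    input_ids,
    do_sample=True,      # enable sampling so temperature/top_p take effect
    temperature=1.0,
    top_p=0.95,
    max_new_tokens=256,  # arbitrary illustrative limit
)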

The instruction template mentioned by the original Hugging Face repo is:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:

In oobabooga, that's almost identical to the Wizard-Mega WizardLM template, except it should have no \n between bot and bot-message:

"user: ""### Instruction:""
bot: ""### Response:""
turn_template: ""<|user|>\n<|user-message|>\n\n<|bot|><|bot-message|>\n\n""
context: ""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"""

Did you face problems with this?

I've always written Alpaca-style prompts as:

### Instruction: prompt

and not

### Instruction:
prompt

And I haven't noticed any problems, so I'm pretty sure it's fine. However, I haven't specifically tested the two against each other. If you want to be 100% accurate to their suggestion, you should be able to edit the turn template in ooba to match theirs exactly.

Traceback (most recent call last):
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 484, in load_state_dict
return torch.load(checkpoint_file, map_location=map_location)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\torch\serialization.py", line 993, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\torch\serialization.py", line 447, in init
super().init(torch._C.PyTorchFileReader(name_or_buffer))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 488, in load_state_dict
if f.read(7) == "version":
^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 1284: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\AI\TextGen\text-generation-webui\modules\ui_model_menu.py", line 206, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\modules\models.py", line 84, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\modules\models.py", line 141, in huggingface_loader
model = LoaderClass.from_pretrained(path_to_model, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 3019, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\TextGen\text-generation-webui\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 500, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'models\databricks_dolly-v2-12b\pytorch_model.bin' at 'models\databricks_dolly-v2-12b\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

Looks like an incomplete download; please try downloading it again.
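One quick way to confirm that: a modern pytorch_model.bin is a zip archive, and a truncated download fails the central-directory check (a sketch using the path from the traceback above):

import zipfile

path = r"models\databricks_dolly-v2-12b\pytorch_model.bin"
# A complete torch zip checkpoint has a valid end-of-central-directory
# record; a truncated file does not.
print(zipfile.is_zipfile(path))  # False -> re-download the file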
