error loading model: unrecognized tensor type 13

#3
by CR2022 - opened

llama.cpp: loading model from models/TheBloke_WizardLM-30B-Uncensored-GGML/wizardlm-30b-uncensored.ggmlv3.q5_K_S.bin
error loading model: unrecognized tensor type 13

llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/root/text-generation-webui/server.py", line 1079, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "/root/text-generation-webui/modules/models.py", line 94, in load_model
    output = load_func(model_name)
  File "/root/text-generation-webui/modules/models.py", line 271, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "/root/text-generation-webui/modules/llamacpp_model.py", line 49, in from_pretrained
    self.model = Llama(**params)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/llama_cpp/llama.py", line 197, in __init__
    assert self.ctx is not None
AssertionError
Exception ignored in: <function LlamaCppModel.__del__ at 0x7f354e6704c0>
Traceback (most recent call last):
  File "/root/text-generation-webui/modules/llamacpp_model.py", line 23, in __del__
    self.model.__del__()
AttributeError: 'LlamaCppModel' object has no attribute 'model'
(textgen) root@DESKTOP:~/text-generation-webui#

Yeah, as mentioned in the README, the new k-quant types don't yet work with text-generation-webui. Use q5_0 for now.

Ok thank you.

CR2022 changed discussion status to closed
