Model cannot load

#6
by aifirst - opened

I updated oobabooga 1 hour ago. My system: CPU: 8 cores/16 threads, GPU: 4070 12 GB.

I downloaded all the files from "Files and versions" and copied them to the directory containing Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors.

```
Traceback (most recent call last):
  File "E:\AI\Vicuna\text-generation-webui\server.py", line 1087, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\AI\Vicuna\text-generation-webui\modules\models.py", line 95, in load_model
    output = load_func(model_name)
  File "E:\AI\Vicuna\text-generation-webui\modules\models.py", line 223, in huggingface_loader
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "E:\AI\Vicuna\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "E:\AI\Vicuna\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ.
```

This usually happens because the GPTQ parameters have not been saved for the model. Please check the README for the steps on setting and saving the GPTQ parameters for the model, then reload the model.
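For reference, the parameters implied by the filename (4-bit quantization, group size 128) can also be passed on the command line when launching the webui. This is a sketch based on flags present in text-generation-webui builds from around this time; the exact flag names may differ in your version, so confirm with `python server.py --help` and the model's README:

```
# Hypothetical launch example -- flag names assume an older
# text-generation-webui build with the GPTQ-for-LLaMa loader.
python server.py --model TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ \
  --wbits 4 --groupsize 128 --model_type llama
```

Setting the same values (wbits, groupsize, model_type) in the Model tab and saving them for the model achieves the same result without command-line flags.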

It works. Thank you very much!

aifirst changed discussion status to closed
