[SOLVED] FileNotFoundError(f"Could not find model in {model_name_or_path}")
#5 by Ayenem
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

LLM_ID = "TheBloke/WizardLM-13B-V1.2-GPTQ"
model_basename = "gptq_model-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(
    LLM_ID,
    use_fast=True,
)

model = AutoGPTQForCausalLM.from_quantized(  # FileNotFoundError
    LLM_ID,
    revision="gptq-8bit-64g-actorder_True",
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=False,
    device="cuda:0",
    quantize_config=None,
)
Traceback (most recent call last):
File ".../demo.py", line 91, in <module>
model = AutoGPTQForCausalLM.from_quantized(
File ".../auto_gptq/modeling/auto.py", line 105, in from_quantized
return quant_func(
File ".../auto_gptq/modeling/_base.py", line 768, in from_quantized
raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-13B-V1.2-GPTQ
What could I be doing wrong?
It works when I match the model_basename to the number of bits and group size of the requested revision, i.e. model_basename = "gptq_model-8bit-64g".
Seems obvious in retrospect, but I wish the revision example in https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GPTQ#how-to-use-this-gptq-model-from-python-code were clearer.
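For reference, a minimal sketch of the working call, assuming the gptq-8bit-64g-actorder_True branch of the repo ships a gptq_model-8bit-64g safetensors file (as described above):

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

LLM_ID = "TheBloke/WizardLM-13B-V1.2-GPTQ"
model_basename = "gptq_model-8bit-64g"  # matches the 8-bit / 64g revision

tokenizer = AutoTokenizer.from_pretrained(LLM_ID, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    LLM_ID,
    revision="gptq-8bit-64g-actorder_True",
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=False,
    device="cuda:0",
    quantize_config=None,
)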