Re: Could not find model at TheBloke/koala-7B-GPTQ/koala-7b-GPTQ-4bit-128g.no-act.order.safetensors

#5 opened by Andyrasika

Hi @TheBloke ,
While running your model (for reference, here is the Colab notebook: https://colab.research.google.com/drive/1DbMcJtY8m7-9MFrH6t6tgs10OTgbuOtG?usp=sharing), I got the following error:
```
WARNING:auto_gptq.modeling._base:use_triton will force moving the whole model to GPU, make sure you have enough VRAM.

FileNotFoundError Traceback (most recent call last)
in <cell line: 11>()
9 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
10
---> 11 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
12 model_basename=model_basename,
13 use_safetensors=True,

1 frames
/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/_base.py in from_quantized(cls, save_dir, device, use_safetensors, use_triton, max_memory, device_map, quantize_config, model_basename, trust_remote_code)
512
513 if not isfile(model_save_name):
--> 514 raise FileNotFoundError(f"Could not find model at {model_save_name}")
515
516 def skip(*args, **kwargs):

FileNotFoundError: Could not find model at TheBloke/koala-7B-GPTQ/koala-7b-GPTQ-4bit-128g.no-act.order.safetensors
```

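For context, the loading cell looks roughly like this (trimmed to the relevant lines; the `use_triton` and `quantize_config` arguments are inferred from the warning and the 4bit-128g file name, so treat them as assumptions):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/koala-7B-GPTQ"
# from_quantized resolves the weights as
#   <model_name_or_path>/<model_basename>.safetensors,
# so model_basename must match the weights file in the repo exactly.
model_basename = "koala-7b-GPTQ-4bit-128g.no-act.order"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    use_triton=True,  # this is what emits the VRAM warning above
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),  # assumed from the file name
)
```
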
If I instead change the call to pass `quantize_config=None`, I get the following error:

```
WARNING:auto_gptq.modeling._base:use_triton will force moving the whole model to GPU, make sure you have enough VRAM.

FileNotFoundError Traceback (most recent call last)
in <cell line: 11>()
9 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
10
---> 11 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
12 model_basename=model_basename,
13 use_safetensors=True,

2 frames
/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/_base.py in from_pretrained(cls, save_dir)
49 @classmethod
50 def from_pretrained(cls, save_dir: str):
---> 51 with open(join(save_dir, "quantize_config.json"), "r", encoding="utf-8") as f:
52 return cls(**json.load(f))
53

FileNotFoundError: [Errno 2] No such file or directory: 'TheBloke/koala-7B-GPTQ/quantize_config.json'
```

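To double-check what basename the repo actually expects, the repo contents can be listed like this (a small sketch using `huggingface_hub`, assuming it is installed):

```python
from huggingface_hub import list_repo_files

# Print the repo contents to see the exact .safetensors file name,
# since model_basename has to match it (minus the extension).
for filename in list_repo_files("TheBloke/koala-7B-GPTQ"):
    print(filename)
```
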
Looking forward to hearing from you.
Thanks,
Andy 
