exllama can't load.

#1
by peterzhu - opened

```
Traceback (most recent call last):
  File "F:\AI-RWKV\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "F:\AI-RWKV\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model
    output = load_func_map[loader](model_name)
  File "F:\AI-RWKV\oobabooga_windows\text-generation-webui\modules\models.py", line 305, in ExLlama_HF_loader
    return ExllamaHF.from_pretrained(model_name)
  File "F:\AI-RWKV\oobabooga_windows\text-generation-webui\modules\exllama_hf.py", line 83, in from_pretrained
    config = ExLlamaConfig(pretrained_model_name_or_path / 'config.json')
  File "F:\AI-RWKV\oobabooga_windows\installer_files\env\lib\site-packages\exllama\model.py", line 44, in __init__
    self.bos_token_id = read_config["bos_token_id"]  # Note that the HF LlamaTokenizer doesn't seem to recognize these automatically
KeyError: 'bos_token_id'
```

That's because chatglm2-6b is not a LLaMA-based model. ExLlama only supports the LLaMA architecture, and chatglm2-6b's config.json doesn't contain the LLaMA-style bos_token_id field that ExLlamaConfig reads, hence the KeyError.
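If you want a clearer failure before the directory ever reaches ExLlama, you can pre-check config.json yourself. A minimal sketch, assuming the usual model directory layout; the is_llama_like helper is hypothetical, not part of text-generation-webui:

```python
import json
from pathlib import Path

def is_llama_like(model_dir: str) -> bool:
    """Rough check that a model directory looks LLaMA-family before
    handing it to ExLlama. Hypothetical helper for illustration."""
    config = json.loads((Path(model_dir) / "config.json").read_text())
    # ExLlamaConfig reads LLaMA-style fields such as bos_token_id directly;
    # that is exactly the key missing from chatglm2-6b's config.
    return "bos_token_id" in config and "LlamaForCausalLM" in config.get("architectures", [])
```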

@yfshi123 Does AutoGPTQ support it? It should, right?
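If AutoGPTQ does handle the ChatGLM architecture (I haven't verified chatglm2-6b specifically), loading a quantized checkpoint would look roughly like this sketch; the model path is a placeholder:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder path to a GPTQ-quantized chatglm2-6b checkpoint.
model_dir = "path/to/chatglm2-6b-gptq"

# trust_remote_code is needed because chatglm2-6b ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0", trust_remote_code=True)
```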
