text-generation-webui issue
When I try to load the model I get:

[...] env\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 699, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class CodeGen25Tokenizer does not exist or is not currently imported.

Did I miss something?
I get the same error
You need to install tiktoken.

I included the install command in the README. If you're using the text-generation-webui one-click installer, tiktoken needs to be installed inside the conda environment that installer creates for text-generation-webui.
You also need "Trust Remote Code" ticked, which I forgot to mention in the README; I'll add that now.
I've never tested this in text-generation-webui, as it's a code-generation model and not really suited to UI use, but I believe it works once the above is done.
I've updated the README to make the instructions clearer.
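For anyone who wants to verify their environment before launching the UI, the two requirements above can be sketched as a small preflight check. This is illustrative only: the `check_requirements` helper is mine, not part of text-generation-webui, and it can only detect the missing tiktoken package; the "Trust Remote Code" setting has to be ticked in the UI itself (or passed as `trust_remote_code=True` to `from_pretrained`), which lets transformers import the custom CodeGen25Tokenizer class shipped with the model repo.

```python
# Sketch of a preflight check for the two requirements above.
# The function name and messages are illustrative, not part of
# text-generation-webui or the model repo.
import importlib.util

def check_requirements():
    """Return a list of problems; an empty list means loading should work."""
    problems = []
    # The custom CodeGen25Tokenizer depends on tiktoken, so the package
    # must be installed in the same environment the UI runs in.
    if importlib.util.find_spec("tiktoken") is None:
        problems.append(
            "tiktoken is missing: run `pip install tiktoken` "
            "inside the text-generation-webui conda environment"
        )
    return problems

if __name__ == "__main__":
    for problem in check_requirements():
        print("WARNING:", problem)
```

Running this in the same conda environment as the one-click installer will tell you whether the tokenizer's tiktoken dependency is satisfied before you hit the ValueError in the UI.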
Ticking "Trust Remote Code" solved it for me. However, I got some warnings:
2023-07-24 17:54:50 INFO:Loading TheBloke_Codegen25-7B-mono-GPTQ...
2023-07-24 17:54:50 INFO:The AutoGPTQ params are: {'model_basename': 'gptq_model-4bit-128g', 'device': 'cuda:0', 'use_triton': False, 'inject_fused_attention': True, 'inject_fused_mlp': True, 'use_safetensors': True, 'trust_remote_code': True, 'max_memory': {0: '7390MiB', 'cpu': '99GiB'}, 'quantize_config': None, 'use_cuda_fp16': True}
2023-07-24 17:54:52 WARNING:The safetensors archive passed at models\TheBloke_Codegen25-7B-mono-GPTQ\gptq_model-4bit-128g.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
2023-07-24 17:54:56 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
Using unk_token, but it is not set yet.
Using unk_token, but it is not set yet.
2023-07-24 17:54:56 INFO:Loaded the model in 5.81 seconds.