
Loading this model with AutoGPTQ fails:

    with open(merges_file, encoding="utf-8") as merges_handle:

This text file is part of the unusual tokenizer setup this model requires.
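The traceback above is the slow tokenizer trying to open a merges.txt that isn't in the repo. A minimal sketch of that failure, using a hypothetical path rather than the actual repo layout:

```python
import os
import tempfile

# Hypothetical path standing in for the missing merges.txt in the model repo.
merges_file = os.path.join(tempfile.mkdtemp(), "merges.txt")  # does not exist

try:
    # Mirrors the line in the traceback above.
    with open(merges_file, encoding="utf-8") as merges_handle:
        merges_handle.read()
    error_name = None
except FileNotFoundError as exc:
    error_name = type(exc).__name__  # the error surfaced when loading the model

print("missing merges file raises:", error_name)
```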

Oh, thank you. I tested this with AutoGPTQ and didn't get this error.

TheBloke changed pull request status to merged

When you tested, did you use the use_fast=False tokenizer or the use_fast=True one?

This file is only required for use_fast=False as far as I can tell. I can confirm adding it makes everything happy.
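The reason only use_fast=False needs this file: the slow (pure-Python) BPE tokenizer parses merges.txt into ranked merge rules at load time, while the fast tokenizer reads everything from tokenizer.json. A rough sketch of how such a merges file is consumed, with made-up inline data in place of the real file:

```python
# Hypothetical merges.txt-style content; real files list one merge per line
# after an optional "#version" header.
merges_text = """#version: 0.2
h e
he l
hel l
hell o
"""

def load_merge_ranks(text):
    """Parse merges.txt-style content into {(left, right): rank}."""
    lines = text.splitlines()
    if lines and lines[0].startswith("#"):
        lines = lines[1:]  # skip the version header
    return {tuple(line.split()): rank
            for rank, line in enumerate(lines) if line.strip()}

def bpe(word, ranks):
    """Greedily apply the lowest-ranked merge until none applies."""
    symbols = list(word)
    while len(symbols) > 1:
        pairs = [(ranks.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(symbols, symbols[1:]))]
        best_rank, i = min(pairs)
        if best_rank == float("inf"):
            break  # no known merge left
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols

ranks = load_merge_ranks(merges_text)
print(bpe("hello", ranks))
```

Without the file, the slow tokenizer has no merge table to build, which is why it errors out while the fast path is unaffected.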

I honestly can't remember - does the issue only occur with one or the other?

Yes, it seems this file is only read with use_fast=False.

Both should now work.

Great, thanks for the PR. I will remember to add this file for future models of this type.
