
fix load tokenizer error

#1
by mzbac - opened

An incorrect tokenizer configuration caused the fast tokenizer to fail with a "maximum recursion depth exceeded" error when loading.
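The mechanics of the error can be illustrated with a small sketch. This is not the actual fix from this PR: the config fields and the `$key` reference syntax below are hypothetical, invented only to show how a self-referential entry in a JSON config sends a naive resolver into infinite recursion, which Python stops with "maximum recursion depth exceeded" unless cycles are detected explicitly.

```python
# Hypothetical illustration (not the real tokenizer_config.json from this
# repo): two entries that point at each other loop forever under a naive
# recursive lookup; tracking visited keys turns that into a clear error.

def resolve(value, config, seen=None):
    """Follow string references of the form '$key' inside a config dict.

    Without the `seen` set, a circular pair of references would recurse
    until Python raises RecursionError ("maximum recursion depth exceeded").
    """
    seen = set() if seen is None else seen
    if isinstance(value, str) and value.startswith("$"):
        key = value[1:]
        if key in seen:
            raise ValueError(f"circular reference through {key!r}")
        seen.add(key)
        return resolve(config[key], config, seen)
    return value

broken = {"bos_token": "$eos_token", "eos_token": "$bos_token"}
fixed = {"bos_token": "<s>", "eos_token": "</s>"}

try:
    resolve(broken["bos_token"], broken)
except ValueError as e:
    print("detected:", e)

print("eos_token:", resolve(fixed["eos_token"], fixed))
```

The corrected `tokenizer_config.json` in this PR plays the role of `fixed` here: once every field resolves to a concrete value, loading terminates normally.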

mzbac changed pull request title from "Update tokenizer_config.json" to "fix load tokenizer error"

Thanks

TheBloke changed pull request status to merged
