Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · gptq

Fails with the latest text-generation-webui

#1
by Dietmar2020 - opened

RuntimeError: Failed to import transformers.models.llama.tokenization_llama_fast because of the following error (look up to see its traceback):
tokenizers>=0.13.3 is required for a normal functioning of this module, but found tokenizers==0.13.2.
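This error means the installed `tokenizers` package (0.13.2) is older than the minimum version `transformers` requires (0.13.3). Assuming a pip-based environment, a typical fix is to upgrade `tokenizers` in the same environment that text-generation-webui uses:

```shell
# Upgrade tokenizers to satisfy the transformers requirement (>=0.13.3)
pip install --upgrade "tokenizers>=0.13.3"

# Verify the installed version
python -c "import tokenizers; print(tokenizers.__version__)"
```

If webui runs inside its own virtual environment or conda environment, activate that environment first; otherwise the upgrade lands in a different Python installation and the error persists.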

Dietmar2020 changed discussion status to closed

Hey, how did you solve this?
