tokenizer error

#2
by amgadhasan - opened

Code:

import torch
import transformers
from transformers import AutoTokenizer

model = "NousResearch/Llama-2-13b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model)
chatbot = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,  # pass the tokenizer explicitly instead of leaving it unused
    torch_dtype=torch.float16,
    device_map="auto",
)

Error:

ValueError: Couldn't instantiate the backend tokenizer from one of: 
(1) a `tokenizers` library serialization file, 
(2) a slow tokenizer instance to convert or 
(3) an equivalent slow tokenizer class to instantiate and convert. 
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

@amgadhasan Have you installed SentencePiece?
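If not, it can usually be installed with:

pip install sentencepiece

(Restart the Python process or notebook kernel afterwards so the newly installed package is picked up.)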

Yes, installing SentencePiece fixed the error.

Thank you!

amgadhasan changed discussion status to closed
