Several issues with "get started" sample.

#2 by dg-kalle - opened

There is no chat template defined for this tokenizer, yet the sample calls apply_chat_template anyway:

>>> inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")

No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
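For reference, a minimal sketch of silencing that warning by assigning a template explicitly. The Jinja string below is a made-up placeholder; the correct template for this model would have to come from its authors:

>>> # Hypothetical placeholder template, NOT this model's real prompt format.
>>> tokenizer.chat_template = (
...     "{% for message in messages %}"
...     "{{ message['role'] }}: {{ message['content'] }}\n"
...     "{% endfor %}"
... )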

And the model.generate call fails, apparently because apply_chat_template returns a bare tensor of token IDs rather than a mapping, so the sample's ** unpacking is invalid:

>>> output_ids = model.generate(**inputs.to(device))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: transformers.generation.utils.GenerationMixin.generate() argument after ** must be a mapping, not Tensor
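As a workaround, here is a sketch that seems to run, assuming the returned tensor is just the tokenized prompt. Either pass the tensor positionally, or request a mapping with return_dict=True (which needs a recent transformers version) so that the ** unpacking works:

>>> # Option 1: pass the token-ID tensor positionally instead of unpacking it.
>>> output_ids = model.generate(inputs.to(device))

>>> # Option 2: request a dict so the sample's ** unpacking is valid.
>>> inputs = tokenizer.apply_chat_template(messages, return_dict=True, return_tensors="pt")
>>> output_ids = model.generate(**inputs.to(device))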
