
added chat_template in tokenizer_config.json

#1

Added the following to tokenizer_config.json:
"chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|im_start|>user\n' + message['content'] + '<|im_end|>' }}\n{% elif message['role'] == 'system' %}\n{{ '<|im_start|>system\n' + message['content'] + '<|im_end|>' }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|im_start|>assistant\n' + message['content'] + '<|im_end|>' }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|im_start|>assistant' }}\n{% endif %}\n{% endfor %}",
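A minimal sketch of how this template turns a message list into ChatML-formatted prompt text. Rendering with `jinja2` directly here is just for illustration; in practice, `tokenizer.apply_chat_template()` in `transformers` applies the template from tokenizer_config.json automatically. The example messages are made up.

```python
# Illustrative only: render the chat_template above with jinja2 to see the
# ChatML prompt it produces. Normally tokenizer.apply_chat_template() does this.
from jinja2 import Template

chat_template = (
    "{% for message in messages %}\n"
    "{% if message['role'] == 'user' %}\n"
    "{{ '<|im_start|>user\n' + message['content'] + '<|im_end|>' }}\n"
    "{% elif message['role'] == 'system' %}\n"
    "{{ '<|im_start|>system\n' + message['content'] + '<|im_end|>' }}\n"
    "{% elif message['role'] == 'assistant' %}\n"
    "{{ '<|im_start|>assistant\n' + message['content'] + '<|im_end|>' }}\n"
    "{% endif %}\n"
    "{% if loop.last and add_generation_prompt %}\n"
    "{{ '<|im_start|>assistant' }}\n"
    "{% endif %}\n"
    "{% endfor %}"
)

# Hypothetical conversation for demonstration.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt=True appends an opening assistant tag so the model
# knows to continue the conversation as the assistant.
prompt = Template(chat_template).render(
    messages=messages, add_generation_prompt=True
)
print(prompt)
```

Each message is wrapped in `<|im_start|>{role} ... <|im_end|>` markers, and the final `<|im_start|>assistant` (emitted only when `add_generation_prompt` is true) cues the model to generate the assistant reply.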

SolidRusT Networks org

Thank you.

Suparious changed pull request status to merged
SolidRusT Networks org

@pankajmathur - I made another commit to reverse the change to token 29000. Any reason that you changed it, or just a commit error?

https://huggingface.co/solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ/commit/251963c48254631b145a4a6c7e8291a41fc2dd4d

Sorry, just saw this. It looks like a commit error. Thanks for keeping an eye out; my bad :(
