
Chat template - use with Ollama?

#1
by smcleod - opened

This model seems pretty strong for coding, and it's quite quick for its size.

One thing I struggled with was its weird chat template (thanks IBM πŸ™„). After messing around quite a bit with Ollama's TEMPLATE and PARAMETER stop settings, I've found the following to be the most effective, although I'm certain it could be improved further if anyone has ideas:

SYSTEM "You are an AI assistant and expert coder. Carefully consider the user's question and complete their request, making sure you meet all requirements. Output your responses with markdown formatting unless requested otherwise."

PARAMETER temperature 0.7
PARAMETER num_keep -1

TEMPLATE """{{ if .System }}System: {{ .System }}{{ end }}

<|start_header_id|>Question: <|end_header_id|>
{{ .Prompt }}<|eot_id|>

<|start_header_id|>Answer: <|end_header_id|>
{{ .Response }}<|endoftext|>"""

PARAMETER stop "<|endoftext|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<end of code>"
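For anyone curious what this template actually expands to, here's a minimal Python sketch that mimics the Go template substitution above. The `render_prompt` helper is hypothetical and for illustration only; the real expansion is done by Ollama's Go template engine, and Ollama handles `{{ .Response }}` specially at generation time (it stops rendering there and lets the model continue).

```python
def render_prompt(system: str, prompt: str, response: str = "") -> str:
    """Sketch of the TEMPLATE above with .System/.Prompt/.Response filled in."""
    parts = []
    if system:  # mirrors {{ if .System }}System: {{ .System }}{{ end }}
        parts.append(f"System: {system}")
    parts.append("")
    parts.append("<|start_header_id|>Question: <|end_header_id|>")
    parts.append(f"{prompt}<|eot_id|>")
    parts.append("")
    parts.append("<|start_header_id|>Answer: <|end_header_id|>")
    parts.append(f"{response}<|endoftext|>")
    return "\n".join(parts)

rendered = render_prompt(
    system="You are an AI assistant and expert coder.",
    prompt="Write a hello-world in Go.",
)
print(rendered)
```

This makes it easy to eyeball exactly which special tokens surround the question and answer, and why `<|eot_id|>` and `<|endoftext|>` both need to be stop tokens.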

As always - thanks for the great GGUFs!

Interesting, you find that it needs the start and end header IDs? Their reference prompt doesn't use them, so that surprises me. I'll have to do a bit of testing on my own to see, but thanks for the heads-up for others!

Good catch! I actually thought it did, but you're quite right that the docs don't mention it. Not sure what I was thinking when I added that.
