
Orca prompt template?

#6 opened by nikjohn7

On the OpenOrcaxOpenChat-Preview2-13B model card, the following is described as the prompt template:

# Single-turn V1 Llama 2
tokenize("User: Hello<|end_of_turn|>Assistant:")
# Result: [1, 4911, 29901, 15043, 32000, 4007, 22137, 29901]
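
For reference, the same tokenization can be reproduced with a standard transformers tokenizer. A minimal sketch, assuming the model is hosted at Open-Orca/OpenOrcaxOpenChat-Preview2-13B (the org prefix is my assumption) and that <|end_of_turn|> is registered as a special token (id 32000):

from transformers import AutoTokenizer

# Assumed repo id; adjust if the model is hosted under a different org/name.
tokenizer = AutoTokenizer.from_pretrained("Open-Orca/OpenOrcaxOpenChat-Preview2-13B")

ids = tokenizer("User: Hello<|end_of_turn|>Assistant:").input_ids
print(ids)  # per the model card: [1, 4911, 29901, 15043, 32000, 4007, 22137, 29901]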

So, if I want to fine-tune this model on a QA dataset, is this the appropriate way to prompt it?

User: You will be provided with a multiple-choice question followed by three choices: A, B, and C. Give the letter of the option that correctly answers the given question. For example, if the correct option is B, then your answer should be B.
Question: {prompt}
A) {a}
B) {b}
C) {c}
<|end_of_turn|>Assistant: {answer}

Or am I supposed to frame it in a different way?
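
In case it helps clarify what I mean, here is a minimal sketch of how I would render each QA example into a single training string under that template (the helper and field names are just illustrative):

def build_prompt(question: str, a: str, b: str, c: str, answer: str) -> str:
    # Illustrative helper: renders one multiple-choice example into the
    # single-turn User/Assistant template proposed above.
    instruction = (
        "You will be provided with a multiple-choice question followed by "
        "three choices: A, B, and C. Give the letter of the option that "
        "correctly answers the given question. For example, if the correct "
        "option is B, then your answer should be B."
    )
    return (
        f"User: {instruction}\n"
        f"Question: {question}\n"
        f"A) {a}\n"
        f"B) {b}\n"
        f"C) {c}\n"
        f"<|end_of_turn|>Assistant: {answer}"
    )

print(build_prompt("Which planet is closest to the Sun?", "Venus", "Mercury", "Earth", "B"))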
