
What is the prompt format for chaining previous conversation turns?

#1 opened by najasev

I tried chaining previous turns with the formats below:
<|prompter|>Q1<|endoftext|><|assistant|>A1<|endoftext|><|prompter|>Q2<|endoftext|><|assistant|>

<|prompter|>Q1<|assistant|>A1<|prompter|>Q2<|endoftext|><|assistant|>

<|system|><|endoftext|><|prompter|>Q1<|endoftext|><|assistant|>A1<|endoftext|><|prompter|>Q2<|endoftext|><|assistant|>

The format below works for the first few turns, but it starts giving weird answers after 3-4 questions:
<|system|>reply from Context only<|endoftext|><|context|>User: Q1\nAnswer: A1\nUser: Q2\nAnswer: A2<|endoftext|><|prompter|>Q3<|endoftext|><|assistant|>

Does this model not support conversation chains?
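For reference, this is a minimal Python sketch of how I assemble the first chained format above. The <|prompter|>/<|assistant|>/<|endoftext|> tokens are just copied from my attempts; whether <|endoftext|> or this model's own EOS token is the right turn separator is an assumption I still need to verify.

```python
# Sketch: build a multi-turn prompt in the <|prompter|>/<|assistant|> style tried above.
# The separator token is configurable because it may need to be the model's real EOS token.

def build_prompt(history, new_question, eos="<|endoftext|>"):
    """history: list of (question, answer) pairs from earlier turns."""
    parts = []
    for q, a in history:
        # Each completed turn: user question, then the model's answer, each closed with eos.
        parts.append(f"<|prompter|>{q}{eos}<|assistant|>{a}{eos}")
    # End with the new question and an open <|assistant|> tag so the model continues with its answer.
    parts.append(f"<|prompter|>{new_question}{eos}<|assistant|>")
    return "".join(parts)

# Example: two earlier turns plus a new question.
history = [("Q1", "A1"), ("Q2", "A2")]
print(build_prompt(history, "Q3"))
```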
