How to handle context/history in the Phi-2 model

#91
by yeniceriSGK - opened

Hey, I want to know how to handle context/history with this model, the way we do with the GPT API:

```python
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
```
But how do I do this with the Phi-2 model? I am currently using it like this:

```python
from transformers import pipeline

sample = {"topic": "", "context": "", "rephrased_query": ""}
sample_prompt = (
    "Question: What is financial planning? "
    "Task: Your task is to return a JSON that will analyze the topic, context "
    f"and rephrased query of the question, in a format like this: {sample}"
)

# Wrap the prompt using the chat template: [INST] Question:\n {prompt} [/INST]
instruction = f"[INST] Question:\n {sample_prompt} [/INST] \n\nResponse:\n"

# %%time  (Jupyter cell magic, if running in a notebook)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(instruction)
# Trim the response: remove the instruction prefix manually
```
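Since Phi-2 only does text completion, one common workaround for multi-turn history is to flatten a GPT-style message list into a single speaker-prefixed prompt. This is a minimal sketch, not an official format; the `build_prompt` helper and the `User:`/`Assistant:` convention are assumptions:

```python
# Sketch: carrying chat history in a plain text-completion prompt.
# Phi-2 has no official chat template, so the speaker-prefixed layout
# below is just one common convention (hypothetical helper).

def build_prompt(messages):
    """Flatten a GPT-style message list into a single completion prompt."""
    lines = []
    for m in messages:
        if m["role"] == "system":
            lines.append(m["content"])           # system instructions go first
        elif m["role"] == "user":
            lines.append(f"User: {m['content']}")
        else:
            lines.append(f"Assistant: {m['content']}")
    lines.append("Assistant:")                   # cue the model to answer next
    return "\n".join(lines)

messages = [
    {"role": "system", "content": "You are a friendly pirate chatbot."},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = build_prompt(messages)
# Feed `prompt` to the pipeline, then append the generated reply to
# `messages` before the next turn so the history keeps growing.
```

After each generation you would append `{"role": "assistant", "content": reply}` to `messages` and rebuild the prompt, truncating the oldest turns once you approach the context limit.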

I'd like to know this too. I currently do something like the above, and the responses are all over the place.

Microsoft org

Phi-2 is a base model, i.e., it was pre-trained only with the text-completion objective in mind.

To support a more chat-like conversation/history, I would suggest fine-tuning it on some instruction data, or checking out alternatives from people who have already fine-tuned the model.

For example: https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2.
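Chat fine-tunes like the one linked above typically ship a chat template, and dolphin's model card describes a ChatML-style layout (`<|im_start|>`/`<|im_end|>` markers). As a sketch, this is roughly what such a template expands a message list into; the `to_chatml` helper is an assumption for illustration (a recent `transformers` can do this for you via `tokenizer.apply_chat_template`):

```python
# Sketch: ChatML-style formatting as used by many chat fine-tunes.
# Each turn is wrapped in <|im_start|>role ... <|im_end|> markers;
# `to_chatml` is a hypothetical helper mimicking what
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# would produce for a ChatML model.

def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")  # generation prompt for the reply
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a friendly pirate chatbot."},
    {"role": "user", "content": "Ahoy! What be a helicopter?"},
]
print(to_chatml(messages))
```

With a model that was fine-tuned on this format, the history problem from the original question is handled by simply appending each new turn to `messages` and re-applying the template.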

Regards,
Gustavo.

gugarosa changed discussion status to closed
