Prompt to use for sequence of messages

#13
by kjhamilton - opened

In a sequence of messages, should the tools be included once at the top, followed by each message?

{tools}
User Query: {question}
Bot Response: {response}
User Query: {question2}
...
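i.e., something like this sketch (the tool text and turns below are placeholder strings, just to show the layout I mean, not the model's exact format):

```python
# Hypothetical sketch: tool definitions once at the top, then alternating
# turns appended below. All strings here are made-up placeholders.
tools = 'Function:\ndef call_search(query):\n  """Get google search results."""'

turns = [
    ("User Query", "best pizza in Oakland?"),
    ("Bot Response", "Call: call_search(query='best pizza in Oakland')"),
    ("User Query", "make it vegan?"),
]

# Tools block first, blank line, then one line per turn.
prompt = tools + "\n\n" + "\n".join(f"{role}: {text}" for role, text in turns)
print(prompt)
```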

Also, is there a proper place to put a system message or instruction? For example, to guide responses to be brief.

Hi @kjhamilton ,

Apologies for the delay.

For multi-turn, you can attach the history to the user query field. For example, consider these turns:

"Get me best vegetarian chinese restaurant in Tennyson Park in Oakland?",
"Actually, let's do Vegan?",
"Hmm, now let's change it to the San Francisco downtown instead."

These are the Raven Calls:

Call: call_search(query='vegetarian chinese restaurant in Tennyson Park in Oakland')
Call: call_search(query='vegan chinese restaurant in Tennyson Park in Oakland')
Call: call_search(query='vegan chinese restaurant in San Francisco downtown')
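Concretely, the history (user questions and model outputs interleaved) gets joined with newlines into the user query field. Here is a small sketch of what the joined history looks like going into the second turn; the `Call:` line is pasted in by hand for illustration, whereas in practice it comes back from the endpoint:

```python
# Sketch: how the history accumulates across turns. The second entry is the
# model's first call, copied in manually here for illustration.
history = []
history.append("Get me best vegetarian chinese restaurant in Tennyson Park in Oakland?")
history.append("Call: call_search(query='vegetarian chinese restaurant in Tennyson Park in Oakland')")
history.append("Actually, let's do Vegan?")

# This joined string is what fills the {query} slot of the prompt.
joined = "\n".join(history)
print(joined)
```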

Here's how to do it:

def chat(my_question):
    history.append(my_question)
    # Join the running history into the single user-query slot of the prompt.
    inner_prompt = prompt.format(query="\n".join(history))
    output = query({
        "inputs": inner_prompt,
        "parameters": {"do_sample": False, "temperature": 0.001, "max_new_tokens": 2048, "stop": ["Thought:"]}
    })
    # Strip the "Thought:" stop token, keeping only the generated call.
    output = output[0]["generated_text"].replace("Thought:", "").strip()
    print(output)
    history.append(output)  # the model's output becomes part of the history
    return output

Here's the full file:

import requests

API_URL = "https://rjmy54al17scvxjr.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

prompt = '''
Function:
def call_search(query):
  """
  Get google search results for a given query.
  """

User Query: {query}<human_end>

'''

history = []
def chat(my_question):
    history.append(my_question)
    # Join the running history into the single user-query slot of the prompt.
    inner_prompt = prompt.format(query="\n".join(history))
    output = query({
        "inputs": inner_prompt,
        "parameters": {"do_sample": False, "temperature": 0.001, "max_new_tokens": 2048, "stop": ["Thought:"]}
    })
    # Strip the "Thought:" stop token, keeping only the generated call.
    output = output[0]["generated_text"].replace("Thought:", "").strip()
    print(output)
    history.append(output)  # the model's output becomes part of the history
    return output

MULTITURN = [
    "Get me best vegetarian chinese restaurant in Tennyson Park in Oakland?",
    "Actually, let's do Vegan?",
    "Hmm, now let's change it to the San Francisco downtown instead.",
]

for turn in MULTITURN:
    chat(turn)

Thank you for this very detailed answer!

It seems like this model always responds with a function call; is that expected? Will it do a chat sequence like:

User: Get me best vegetarian chinese restaurant in Tennyson Park in Oakland?

Assistant: Function call

Append function call response

Assistant: The best vegetarian chinese restaurant is .... continue interpreted response

User: how much wood would a woodchuck chuck, if a woodchuck could chuck wood?

Assistant (no function call): however much wood a woodchuck could chuck, if a woodchuck could chuck wood.
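That flow I'm after could be sketched roughly like this (untested; `model` and `execute_call` are hypothetical stand-ins for the endpoint and a local call executor, not real APIs):

```python
# Rough sketch of the call-then-interpret flow described above.
# `model` and `execute_call` are hypothetical stand-ins, not real APIs.
def chat_turn(history, user_message, model, execute_call):
    history.append(f"User Query: {user_message}")
    output = model("\n".join(history))           # may or may not be a call
    if output.startswith("Call:"):
        result = execute_call(output)            # actually run the function
        history.append(f"Bot Response: {output}")
        history.append(f"Function Result: {result}")
        output = model("\n".join(history))       # ask for an interpreted answer
    history.append(f"Bot Response: {output}")
    return output
```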
