What is the prompt template?

#1
by tjtanaa

I would like to host the model myself. May I know what the prompt template is?

I don't know if this is correct, but I am getting decent results with:

###USER: {prompt}
###FUNCTIONS: {func_spec}
###ASSISTANT:

Here is what I am doing in Python (it's a bit rough):

    functions = request.functions
    prompt = " ".join([msg.content for msg in request.messages])
    # Add the functions definitions to the prompt, serialize them to JSON on a new line
    func_json = "\n" + "\n".join([function.json() for function in functions])
    prompt = "###USER: " + prompt + "\n" + "###FUNCTIONS: " + func_json + "\n" + "###ASSISTANT: \n"
    response = get_response(prompt, model, tokenizer, device='cuda')
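
For reference, with the Uber query and function spec from the README example further down this thread, that snippet assembles a prompt along these lines (illustrative only, with the function JSON truncated):

    ###USER: Call me an Uber ride type "Plus" in Berkeley at zipcode 94704 in 10 minutes
    ###FUNCTIONS: 
    {"name": "Uber Carpool", "api_name": "uber.ride", "description": "...", "parameters": [...]}
    ###ASSISTANT: 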

(screenshot attached)
I saw some clue about the possible prompt in the screenshot, but it is not enough to get the model to continue the answer.

Are you able to reproduce the expected results of the example case in the README?

    query = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
    functions = [
        {
            "name": "Uber Carpool",
            "api_name": "uber.ride",
            "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
            "parameters": [{"name": "loc", "description": "location of the starting place of the uber ride"}, {"name": "type", "enum": ["plus", "comfort", "black"], "description": "types of uber ride user is ordering"}, {"name": "time", "description": "the amount of time in minutes the customer is willing to wait"}]
        }
    ]
    get_gorilla_response(query, functions=functions)

Expected output:

    uber.ride(loc="berkeley", type="plus", time=10)

I always get:

    uber.ride(loc="94704", type="plus", time=10)

I get uber.ride(loc="Berkeley", type="plus", time=10), though I have also seen the zipcode response when testing. Hopefully the actual prompt format will resolve some of the inconsistencies.

You can do it as below:

  1. Deploy using the vLLM OpenAI-compatible server:

    VLLM_USE_MODELSCOPE=True python -m vllm.entrypoints.openai.api_server --model "gorilla-llm/gorilla-openfunctions-v1" --revision "v1.1.8" --trust-remote-code

  2. Then just use the same OpenAI client code:

    import openai

    def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v1", functions=[]):
        openai.api_key = "EMPTY"
        # Point this at your own server if you are self-hosting with vLLM, e.g. "http://localhost:8000/v1"
        openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
        try:
            completion = openai.ChatCompletion.create(
                model=model,
                temperature=0.0,
                messages=[{"role": "user", "content": prompt}],
                functions=functions,
            )
            return completion.choices[0].message.content
        except Exception as e:
            print(e, model, prompt)

Then just call it the same way.
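
For example, reusing the query and function spec from the README example earlier in the thread (assuming the server from step 1 is running):

    query = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
    functions = [
        {
            "name": "Uber Carpool",
            "api_name": "uber.ride",
            "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
            "parameters": [{"name": "loc", "description": "location of the starting place of the uber ride"}, {"name": "type", "enum": ["plus", "comfort", "black"], "description": "types of uber ride user is ordering"}, {"name": "time", "description": "the amount of time in minutes the customer is willing to wait"}]
        }
    ]
    print(get_gorilla_response(query, functions=functions))
    # expected: uber.ride(loc="berkeley", type="plus", time=10)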

Gorilla LLM (UC Berkeley) org

Thanks for trying it! Just updated it here: https://github.com/ShishirPatil/gorilla/tree/main/openfunctions#running-openfunctions-locally
Let me know if you run into any other issues!
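
(For anyone who finds this thread later: from memory, the prompt construction in that README is roughly of the form sketched below; treat the linked repo as the authoritative reference and double-check the exact tags there.)

    import json

    def get_prompt(user_query, functions=[]):
        # Rough sketch from memory; verify the exact format against the linked README.
        if len(functions) == 0:
            return f"USER: <<question>> {user_query}\nASSISTANT: "
        functions_string = json.dumps(functions)
        return f"USER: <<question>> {user_query} <<function>> {functions_string}\nASSISTANT: "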
