Will chatglm3-6b support parallel function calling? #53
opened by simonwei97
Will chatglm3-6b support parallel function calling?
Reference: https://platform.openai.com/docs/guides/function-calling
What I expected
Q: What's the weather like in San Francisco, Tokyo, and Paris?
chatglm3-6b's output should be as follows:
# LLM response
tool_call: [
{'name': 'get_current_weather', 'parameters': {'location': 'San Francisco'}},
{'name': 'get_current_weather', 'parameters': {'location': 'Tokyo'}},
{'name': 'get_current_weather', 'parameters': {'location': 'Paris'}}
]
Then the user executes each function call and sends all of the function responses back to the model in a single follow-up turn; there is no need to call the LLM three separate times. A sketch of this flow is shown below.
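For reference, here is roughly what that single round trip looks like against an OpenAI-compatible chat completions endpoint (a sketch following the linked guide, not chatglm3-6b's own API; the get_current_weather implementation and the returned weather values are placeholders):

import json
from openai import OpenAI

client = OpenAI()

def get_current_weather(location: str) -> str:
    # Placeholder: a real implementation would query a weather service.
    return json.dumps({"location": location, "temperature": "22", "unit": "celsius"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user",
             "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106", messages=messages, tools=tools
)
assistant_msg = response.choices[0].message

# A single model response carries all three tool calls.
messages.append(assistant_msg)
for tool_call in assistant_msg.tool_calls:
    args = json.loads(tool_call.function.arguments)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": get_current_weather(**args),
    })

# All tool results go back to the model in one follow-up call, not three.
final = client.chat.completions.create(model="gpt-3.5-turbo-1106", messages=messages)
print(final.choices[0].message.content)

The point is that the three weather lookups are resolved locally and returned together, so only two LLM calls are needed in total.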
More examples like this:
Different GPT-3.5 versions
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")
agent = OpenAIAgent.from_tools([weather_tool], llm=llm, verbose=True)
response = agent.chat(
    "What's the weather like in San Francisco, Tokyo, and Paris?"
)
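For context, weather_tool in the snippet above can be a simple FunctionTool wrapper around the same get_current_weather function (a sketch; the implementation body is a placeholder):

from llama_index.tools import FunctionTool

def get_current_weather(location: str) -> str:
    """Get the current weather for a location (placeholder)."""
    return f"The weather in {location} is sunny, 22 degrees Celsius."

weather_tool = FunctionTool.from_defaults(fn=get_current_weather)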
- The issue will be solved within a single turn of dialogue for gpt-3.5-turbo-1106. (Will chatglm3-6b run like this?)
- The issue will be solved within 3 separate turns for gpt-3.5-turbo-0613, which is not as advanced.
No, this is not supported.
zRzRzRzRzRzRzR changed discussion status to closed