katanemo/Arch-Function-Chat-3B

Overview

The Arch-Function-Chat collection builds upon Katanemo's Arch-Function collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function-calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.

In addition to function calling capabilities, this collection now offers:

  • Clarify & refine: Generates natural follow-up questions to collect missing information for function calling
  • Interpret & respond: Provides human-friendly responses based on function execution results
  • Context management: Maintains context in complex multi-turn interactions

Note: Arch-Function-Chat is now the primary LLM used in the open-source Arch Gateway, an AI-native proxy for agents. For more details about the project, check out the GitHub README.

Requirements

Arch-Function-Chat-3B is supported by the Hugging Face transformers library, and we advise you to install the latest version (the Quickstart below also relies on the accelerate package for device_map="auto"):

pip install "transformers>=4.37.0"

How to use

The following example illustrates how to use our model to perform function calling tasks. Please note that our model works best with the prompt format provided below, which lets us extract JSON output similar to OpenAI's function calling.

Quickstart

import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "katanemo/Arch-Function-Chat-3B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided prompt for best performance
TASK_PROMPT = (
    "You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
    "\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
    "\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)

FORMAT_PROMPT = (
    "\n\nBased on your analysis, provide your response in one of the following JSON formats:"
    '\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
    '\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
    '\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)

# Define available tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "str",
                        "description": "The city and state, e.g. San Francisco, New York",
                    },
                    "unit": {
                        "type": "str",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The unit of temperature to return",
                    },
                },
                "required": ["location"],
            },
        },
    }
]


# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT


system_prompt = format_prompt(tools)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the weather in Seattle?"},
]

# return_dict=True returns input_ids and attention_mask together so they can be passed to generate()
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
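
The model's reply follows FORMAT_PROMPT, so a JSON object can be extracted from the output and, when tool calls are requested, dispatched to local implementations. Below is a minimal sketch that continues the Quickstart; parse_model_response, TOOL_REGISTRY, and the get_weather stub are illustrative names introduced here, not part of the model or the transformers library.

import re

# Hypothetical local implementation of the get_weather tool declared above.
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"The weather in {location} is 18 degrees {unit}."

TOOL_REGISTRY = {"get_weather": get_weather}


def parse_model_response(text: str) -> Dict[str, Any]:
    # Per FORMAT_PROMPT, the model answers inside a ```json ... ``` block,
    # so strip the fence if present before parsing.
    match = re.search(r"```json\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


parsed = parse_model_response(response)

if "tool_calls" in parsed:
    # All required parameters were available: execute each requested function.
    results = [
        TOOL_REGISTRY[call["name"]](**call["arguments"])
        for call in parsed["tool_calls"]
    ]
    print(results)
elif "clarification" in parsed:
    # Missing parameters: the model asks a follow-up question (clarify & refine).
    print(parsed["clarification"])
else:
    # No functions needed: plain text response.
    print(parsed["response"])

To exercise the interpret & respond capability, the execution results can be appended to messages as a follow-up turn and the model queried again for a human-friendly summary; the exact message format for returning tool results is not covered in this Quickstart.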

License

The Katanemo Arch-Function collection is distributed under the Katanemo license.
