---
license: other
language:
- en
library_name: transformers
tags:
- RLHF
- Nexusflow
- Athene
- Function Calling
- Agent
- Extraction
base_model:
- Qwen/Qwen2.5-72B-Instruct
---
# Athene-V2-Agent: Surpassing GPT-4o for Tool Use and Agentic Use Cases
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="https://nexusflow.ai/blogs/athene-v2" target="_blank">Athene-V2 Blogpost</a>
</p>
<p align="center" width="100%">
<a><img src="agent.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Introducing Athene-V2-Agent
Athene-V2-Agent is an open-source Agent LLM that surpasses the state-of-the-art in function calling and agentic capabilities.
<p align="center" width="100%">
<a><img src="benchmark.png" alt="Benchmark" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
💪 **Versatile Agent Capability**: Athene-V2-Agent is an agent model capable of operating in environments with deeply nested tool dependencies. It can reason about and plan trajectories that require many tool calls to answer a single query.
📊 **Performance Highlights**: Athene-V2-Agent surpasses GPT-4o by 18% in function calling success rate on single-FC tasks and by 17% in agentic success rate.
🔧 **Generalization to the Unseen**: Athene-V2-Agent has never been trained on the functions or agentic settings used in evaluation.
- **Developed by:** The Nexusflow Team
- **Model type:** Agent Model
- **Finetuned from model:** [Qwen-2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- **License**: [Nexusflow Research License](https://huggingface.co/Nexusflow/Athene-V2-Agent/blob/main/Nexusflow_Research_License_.pdf)
- **Blog**: https://nexusflow.ai/blogs/athene-v2
## Athene-V2-Agent Model Usage
### OpenAI-Compatible FC
Athene-V2-Agent is usable in any OpenAI API-compatible environment using our vLLM docker image, making it a simple "drop-in" replacement for any agentic or tool-use setting.
```bash
docker run --name athene-v2-agent \
--runtime nvidia --gpus '"device=0,1,2,3,4,5,6,7"' \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=<secret>" \
-p <port>:8000 \
--ipc=host \
ghcr.io/nexusflowai/athene-v2-vllm:latest \
--model Nexusflow/Athene-V2-Agent \
--dtype=auto \
--tensor-parallel-size=8 \
--enable-auto-tool-choice \
--tool-call-parser Athene-V2-Agent
```
You can now submit OpenAI-compatible tool-use requests to the model by hitting the vLLM endpoint. Athene-V2-Agent will issue tool calls that you can execute and return results for.
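For example, a minimal request might look like the sketch below, which uses the `openai` Python client against the server started above. The port, the `get_current_temperature` tool, and the query are placeholder assumptions for illustration; substitute your own deployment details and tool schemas.
```python
# Minimal sketch of an OpenAI-compatible tool-use request against the vLLM server
# started above. The base_url port and the example tool are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # replace 8000 with the <port> you exposed
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_temperature",  # hypothetical example tool
            "description": "Get the current temperature for a city, in Celsius.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "Name of the city."}
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Nexusflow/Athene-V2-Agent",
    messages=[{"role": "user", "content": "What is the temperature in Paris right now?"}],
    tools=tools,
    temperature=0.0,  # greedy decoding, as recommended below
)

# The model responds with tool calls that you execute and feed back as "tool" messages.
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.function.name, tool_call.function.arguments)
```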
**WARNING**: Athene-V2-Agent uses a *CUSTOM* prompting style that is baked into the custom docker image, as the executable calls are extracted from the model's generated planning. For best performance, please make sure to use the docker image above for Athene-V2-Agent, including when benchmarking the model. Using the HuggingFace tokenizer's chat template will yield suboptimal results for agent use cases. Please reach out to us on Discord if you run into any issues!
### Examples
An example weather agent can be found here: [Link](example/vllm_v2_weather_agent.py#L186-L193). This example shows how to handle queries that are answerable by the current tools as well as queries that are not.
An example extraction and RAG agent can be found here: [Link](example/vllm_v2_extraction_agent.py#L270-L284). This example shows how to handle RAG-based queries with a Wikipedia tool.
### Prompting Tricks
1. When giving docstrings to Athene-V2-Agent, please provide well-indented, detailed, and well-written docstrings, as this can help accuracy (see the sketch after this list).
2. We strongly recommend using the docker image to interact with Athene-V2-Agent.
3. We strongly recommend setting sampling to False when prompting Athene-V2-Agent.
4. We strongly recommend a temperature of zero.
5. Athene-V2-Agent is designed to work within systems, so it is tuned to be very controllable through the instructions specified in the tools, including for broad behaviors (like rejecting queries or chatting).
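As a sketch of what a detail-rich tool description might look like, the hypothetical `get_stock_price` schema below documents each parameter's type, meaning, and accepted values; it is purely illustrative and not part of the model's API.
```python
# Hypothetical example of a well-documented tool schema (point 1 above).
# Detailed, well-structured descriptions tend to help the model pick the right
# function and fill in its arguments correctly.
{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": (
            "Retrieve the latest trading price for a stock. "
            "Use this only for publicly listed equities, not for crypto or funds."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The exchange ticker symbol, e.g. 'AAPL' or 'NVDA'.",
                },
                "exchange": {
                    "type": "string",
                    "description": "The exchange to query. Defaults to 'NASDAQ' if omitted.",
                    "enum": ["NASDAQ", "NYSE", "LSE"],
                },
            },
            "required": ["ticker"],
        },
    },
}
```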
#### Handling Irrelevant Queries
The Athene-V2-Agent model is strongly tuned so that its behavior can be controlled through the tools you provide, which makes it easy to integrate into systems.
Therefore, the model will not reject out-of-domain queries by default; instead, it will try its best to issue the most relevant call.
However, if you expect irrelevant user queries and want the model to reject them, you can provide a no-op function. For example, something like this would work:
```python
{
"type": "function",
"function" : {
"name": "no_relevant_function",
"description": "Call this when no other provided function can be called to answer the user query.",
"parameters": {
"type": "object",
"properties": {
"user_query_span": {
"type": "string",
"description": "The part of the user_query that cannot be answered by any other function calls."
}
},
"required": ["user_query_span"]
}
}
}
```
Please see the weather agent example [Link](example/vllm_v2_weather_agent.py) for a demo of this.
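A minimal sketch of how a client might act on this no-op call is shown below; it assumes the `client` from the vLLM example above and a `tools` list that includes the `no_relevant_function` schema, and the rejection message itself is up to your application.
```python
# Sketch: detect the no-op function and reject the query instead of executing a tool.
# Assumes `client` from the earlier example and `tools` containing "no_relevant_function".
response = client.chat.completions.create(
    model="Nexusflow/Athene-V2-Agent",
    messages=[{"role": "user", "content": "Write me a poem about the ocean."}],
    tools=tools,
    temperature=0.0,
)

for tool_call in response.choices[0].message.tool_calls or []:
    if tool_call.function.name == "no_relevant_function":
        # The model decided none of the provided tools apply to this query.
        print("Sorry, I can't help with that request.")
    else:
        # Otherwise, execute the requested tool as usual.
        print("Execute:", tool_call.function.name, tool_call.function.arguments)
```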
#### Handling Chat With FC
Since the Athene-V2-Agent model is strongly tuned to be controllable, we wanted to ensure that it does not chat unless explicitly instructed to do so.
You can enable chatting by adding a `chat` tool and allowing the model to use it in the system prompt:
```python
{
"type": "function",
"function": {
"name": "chat",
"description": "Call this tool when you want to chat with the user. The user won't see anything except for whatever you pass into this function. You can use this tool to ask for more information when insufficient information is presented, and to send the final results back to the user.",
"parameters": {
"type": "object",
"properties": {
"chat_string": {
"type": "string",
"description": "The chat message to send to the user to chat back to them.",
}
},
"required": ["chat_string"],
},
},
}
```
And then use a system prompt such as the following (feel free to experiment to make Athene-V2-Agent behave the way you want it to!):
```python
{"role" : "system", "content" : "You can use the chat tool to ask the user for more information, and to send the final results."},
```
Please see the chat-enabled weather example [Link](example/weather_with_chat.py) for a demo of this.
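As a rough sketch of how this can fit into a tool-execution loop, the snippet below routes `chat` calls to the user instead of executing them; the `client` object, the `tools` list (including the `chat` schema above), and the `run_tool` executor are assumptions for illustration.
```python
import json

# Sketch: route the "chat" tool to the user, execute everything else as a real tool.
# `client`, `tools` (including the "chat" schema above), and `run_tool` are assumed.
messages = [
    {"role": "system", "content": "You can use the chat tool to ask the user for more information, and to send the final results."},
    {"role": "user", "content": "Book me a table for two tonight."},
]

response = client.chat.completions.create(
    model="Nexusflow/Athene-V2-Agent",
    messages=messages,
    tools=tools,
    temperature=0.0,
)

# Keep the assistant turn (with its tool calls) in the conversation history.
messages.append(response.choices[0].message)

for tool_call in response.choices[0].message.tool_calls or []:
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "chat":
        # The model is talking to the user rather than calling a real tool.
        print("Assistant:", args["chat_string"])
    else:
        # Execute the tool and return its result so the model can continue the trajectory.
        result = run_tool(tool_call.function.name, args)  # hypothetical executor
        messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)})
```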
## Contact
Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out with any issues and comments!