LLM Name | ZS | ZST | ReAct | PlanAct | PlanReAct |
---- | ---- | ---- | ---- | ---- | ---- |
Mixtral-8x7B-Instruct-v0.1 | 0.3912 | 0.3971 | 0.3714 | 0.3195 | 0.3039 |
GPT-3.5-Turbo | 0.4196 | 0.3937 | 0.3868 | 0.4182 | 0.3960 |
GPT-4-0613 | 0.5801 | 0.5709 | 0.6129 | 0.5778 | 0.5716 |
xLAM-v0.1-r | 0.5492 | 0.4776 | 0.5020 | 0.5583 | 0.5030 |
## [AgentLite](https://github.com/SalesforceAIResearch/AgentLite/tree/main)
**Please note:** All prompts provided by AgentLite are treated as "unseen prompts" for xLAM-v0.1-r, i.e., the model was not trained on any data related to these prompts.
#### Webshop
LLM Name | Act | ReAct | BOLAA |
---- | ---- | ---- | ---- |
GPT-3.5-Turbo-16k | 0.6158 | 0.6005 | 0.6652 |
GPT-4-0613 | 0.6989 | 0.6732 | 0.7154 |
xLAM-v0.1-r | 0.6563 | 0.6640 | 0.6854 |
#### HotpotQA
LLM Name | Easy F1 Score | Easy Accuracy | Medium F1 Score | Medium Accuracy | Hard F1 Score | Hard Accuracy |
---- | ---- | ---- | ---- | ---- | ---- | ---- |
GPT-3.5-Turbo-16k-0613 | 0.410 | 0.350 | 0.330 | 0.250 | 0.283 | 0.200 |
GPT-4-0613 | 0.611 | 0.470 | 0.610 | 0.480 | 0.527 | 0.380 |
xLAM-v0.1-r | 0.532 | 0.450 | 0.547 | 0.460 | 0.455 | 0.360 |
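
The F1 scores above are token-level overlap scores between the predicted and gold answers, as in the standard HotpotQA evaluation. Below is a minimal sketch of that computation for illustration only; the exact text normalization used by the AgentLite harness may differ.

```python
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Count tokens shared between prediction and gold answer.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("the Golden State Warriors", "Golden State Warriors"))  # ~0.857
```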
## ToolBench
LLM Name | Unseen Instructions & Same Tool Set | Unseen Tools & Seen Category | Unseen Tools & Unseen Category |
---- | ---- | ---- | ---- |
ToolLLaMA V2 | 0.4385 | 0.4300 | 0.4350 |
GPT-3.5-Turbo-0125 | 0.5000 | 0.5150 | 0.4900 |
GPT-4-0125-preview | 0.5462 | 0.5450 | 0.5050 |
xLAM-v0.1-r | 0.5077 | 0.5650 | 0.5200 |
## [MINT-BENCH](https://github.com/xingyaoww/mint-bench)
LLM Name | 1-step | 2-step | 3-step | 4-step | 5-step |
---- | ---- | ---- | ---- | ---- | ---- |
GPT-4-0613 | - | - | - | - | 69.45 |
Claude-Instant-1 | 12.12 | 32.25 | 39.25 | 44.37 | 45.90 |
xLAM-v0.1-r | 4.10 | 28.50 | 36.01 | 42.66 | 43.96 |
Claude-2 | 26.45 | 35.49 | 36.01 | 39.76 | 39.93 |
Lemur-70b-Chat-v1 | 3.75 | 26.96 | 35.67 | 37.54 | 37.03 |
GPT-3.5-Turbo-0613 | 2.73 | 16.89 | 24.06 | 31.74 | 36.18 |
AgentLM-70b | 6.48 | 17.75 | 24.91 | 28.16 | 28.67 |
CodeLlama-34b | 0.17 | 16.21 | 23.04 | 25.94 | 28.16 |
Llama-2-70b-chat | 4.27 | 14.33 | 15.70 | 16.55 | 17.92 |
## [Tool-Query](https://github.com/hkust-nlp/AgentBoard)
LLM Name | Success Rate | Progress Rate |
---- | ---- | ---- |
xLAM-v0.1-r | 0.533 | 0.766 |
DeepSeek-67B | 0.400 | 0.714 |
GPT-3.5-Turbo-0613 | 0.367 | 0.627 |
GPT-3.5-Turbo-16k | 0.317 | 0.591 |
Lemur-70B | 0.283 | 0.720 |
CodeLlama-13B | 0.250 | 0.525 |
CodeLlama-34B | 0.133 | 0.600 |
Mistral-7B | 0.033 | 0.510 |
Vicuna-13B-16K | 0.033 | 0.343 |
Llama-2-70B | 0.000 | 0.483 |