---
license: apache-2.0
datasets:
- xingyaoww/code-act
language:
- en
---

**NOTE: This repo serves a quantized GGUF model of the original [CodeActAgent-Mistral-7b-v0.1](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1).**
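
For a quick local test, here is a minimal sketch of loading a GGUF quant with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename below is a placeholder; substitute whichever quant file you download from this repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python`.
# The GGUF filename is hypothetical -- use the actual file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="codeactagent-mistral-7b-v0.1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,  # the Mistral-7b-v0.1 base supports a 32k context window
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write Python code that prints 3 + 4."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```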

---

<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>

<p align="center">
<a href="https://github.com/xingyaoww/code-act">💻 Code</a>
•
<a href="https://arxiv.org/abs/2402.01030">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/xingyaoww/code-act">🤗 Data (CodeActInstruct)</a>
•
<a href="https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1">🤗 Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="https://chat.xwang.dev/">🤖 Chat with CodeActAgent!</a>
</p>

We propose to use executable Python **code** to consolidate LLM agents' **act**ions into a unified action space (**CodeAct**).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions (check out [this example!](https://chat.xwang.dev/r/Vqn108G)).
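
To make the loop concrete, here is a toy sketch of the CodeAct interaction pattern under simplifying assumptions: the model's reply is treated as Python source, executed, and the captured output is fed back as the next observation. The `generate` callable is a hypothetical stand-in for any LLM call, and `exec` is no substitute for the sandboxed interpreter a real deployment would use.

```python
import contextlib
import io

def execute_code_action(code: str) -> str:
    """Run a code action and capture stdout as the observation."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # toy execution; real agents isolate this step
    except Exception as exc:
        return f"Error: {exc}"
    return buffer.getvalue()

def codeact_loop(generate, task: str, max_turns: int = 5) -> list:
    """Multi-turn loop: code action -> execution observation -> revised action."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        code = generate(history)  # hypothetical LLM call returning Python code
        history.append({"role": "assistant", "content": code})
        observation = execute_code_action(code)
        history.append({"role": "user", "content": f"Observation:\n{observation}"})
    return history
```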

![Overview](https://github.com/xingyaoww/code-act/blob/main/figures/overview.png?raw=true)

## Why CodeAct?

Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark [M<sup>3</sup>ToolEval](https://github.com/xingyaoww/code-act/blob/main/docs/EVALUATION.md) shows that CodeAct outperforms widely used alternatives like Text and JSON (up to a 20% higher success rate). Please check our paper for a more detailed analysis!

![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-table.png?raw=true)
*Comparison between CodeAct and Text / JSON as action.*

![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-perf.png?raw=true)
*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*

## 📝 CodeActInstruct

We collect an instruction-tuning dataset, CodeActInstruct, that consists of 7k multi-turn interactions using CodeAct. The dataset is released at [huggingface dataset 🤗](https://huggingface.co/datasets/xingyaoww/code-act). Please refer to the paper and [the data generation section of our repository](https://github.com/xingyaoww/code-act#-data-generation-optional) for details of data collection.
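
As a usage example, the dataset can be pulled with the 🤗 `datasets` library; split names and the exact field layout are best checked on the dataset card.

```python
# Assumes `pip install datasets`; inspect the dataset card for the actual schema.
from datasets import load_dataset

ds = load_dataset("xingyaoww/code-act")
print(ds)  # list available splits and features

first_split = next(iter(ds.values()))
print(first_split[0])  # inspect one multi-turn CodeAct interaction
```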

![Data Statistics](https://github.com/xingyaoww/code-act/blob/main/figures/data-stats.png?raw=true)
*Dataset statistics. Token statistics are computed using the Llama-2 tokenizer.*

## 🪄 CodeActAgent

Trained on **CodeActInstruct** and general conversations, **CodeActAgent** excels at out-of-domain agent tasks compared to open-source models of the same size, without sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- **CodeActAgent-Mistral-7b-v0.1** (recommended, [model link](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)): uses Mistral-7b-v0.1 as the base model, with a 32k context window.
- **CodeActAgent-Llama-7b** ([model link](https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b)): uses Llama-2-7b as the base model, with a 4k context window.
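
If you want the original full-precision checkpoint rather than this GGUF quant, a standard 🤗 Transformers loading sketch might look like the following; the generation settings are illustrative, and whether the repo ships a chat template should be verified on the model card.

```python
# Sketch for the unquantized checkpoint; assumes `pip install transformers torch accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xingyaoww/CodeActAgent-Mistral-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Use Python to check whether 1234567 is prime."}]
# Assumption: the tokenizer config includes a chat template; otherwise build the prompt manually.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```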

![Model Performance](https://github.com/xingyaoww/code-act/blob/main/figures/model-performance.png?raw=true)
*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation, respectively. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*

Please check out [our paper](https://arxiv.org/abs/2402.01030) and [code](https://github.com/xingyaoww/code-act) for more details about data collection, model training, and evaluation.

## 📚 Citation

```bibtex
@misc{wang2024executable,
      title={Executable Code Actions Elicit Better LLM Agents},
      author={Xingyao Wang and Yangyi Chen and Lifan Yuan and Yizhe Zhang and Yunzhu Li and Hao Peng and Heng Ji},
      year={2024},
      eprint={2402.01030},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```