YANG-Cheng committed 27da063 (1 parent: f9b8172): Update README.md

Files changed (1): README.md (+152 −1)

README.md CHANGED
@@ -58,4 +58,155 @@ size_categories:
task_categories:
- text-generation
pretty_name: AgentBoard
---


[Logo of AgentBoard]()

# Dataset Card for AgentBoard
This is the official dataset repository of [AgentBoard]().

## 1. Data Overview
AgentBoard is composed of 9 diverse tasks, which fall into 4 types: **Embodied AI**, **Game**, **Web**, and **Tool**:


<table align="center">
<tbody>
<tr align="center" valign="bottom">
<td>
<b>Embodied AI</b>
</td>
<td>
<b>Game</b>
</td>
<td>
<b>Web</b>
</td>
<td>
<b>Tool</b>
</td>
</tr>
<tr valign="top">
<td>

- AlfWorld
- ScienceWorld
- BabyAI

</td>

<td>

- Jericho
- PDDL

</td>

<td>

- WebShop
- WebArena

</td>

<td>

- Tool-Query
- Tool-Operation

</td>

</tr>
</tbody>
</table>


Statistics of the evaluation data for the 9 tasks are as follows:

| | AlfWorld | ScienceWorld | BabyAI | Jericho | PDDL | WebShop | WebArena | Tool-Query | Tool-Operation |
|-------|----------|--------------|--------|---------|------|---------|----------|------------|----------------|
| **Metric** | PR/SR | PR/SR | PR/SR | PR/SR | PR/SR | Matching | Matching | PR/SR | PR/SR/Matching |
| **\#Avg. Turn** | 6 | 15 | 10 | 20 | 20 | 3 | 25 | 5 | 6 |
| **\#Avg. Action Space** | 13 | 21 | 8 | 150 | 8 | 2 | 12 | 15 | 16 |
| **\#Avg. Context Length** | 900 | 2800 | 1800 | 1500 | 2700 | 1200 | 15000 | 2100 | 4300 |
| **\#Avg. Sub-goals** | 3 | 5 | 4 | 6 | 6 | 4 | 6 | 5 | 5 |
| **\#Environment** | 134 | 90 | 112 | 20 | 60 | 251 | 245 | 60 | 40 |


To help researchers quickly understand the evaluation data for each task, we provide a **Dataset Viewer** on the Hugging Face Hub: [🤗 AgentBoard](https://huggingface.co/datasets/YANG-Cheng/alpaca).

> Note: Please download the dataset from the link provided below, because the data in the Dataset Viewer is not complete.

## 2. Download Link
You can download the whole evaluation dataset by running the following command:
```shell
wget https://huggingface.co/datasets/YANG-Cheng/alpaca/resolve/main/data.tar.gz
```
Please uncompress the file and move the data to `agentboard/data`:
```shell
cd agentboard
mkdir data
tar -zxvf data.tar.gz -C ./data
```

The file structure of the evaluation data is as follows:
<details>
<summary>
Click to expand the file structure
</summary>

```
data
├── alfworld
│   ├── alfred.pddl            # additional data for alfworld
│   ├── alfred.twl2            # additional data for alfworld
│   ├── json_2.1.1             # additional data for alfworld
│   └── test.jsonl
├── babyai
│   └── test.jsonl
├── jericho
│   ├── test.jsonl
│   └── z-machine-games-master # additional data for jericho
├── pddl
│   └── test.jsonl
├── scienceworld
│   └── test.jsonl
├── tool-operation
│   └── test.jsonl
├── tool-query
│   ├── academia               # additional data for academia tool
│   └── test.jsonl
├── webarena
│   └── test.jsonl
└── webshop
    └── test.jsonl
```

</details>
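After extraction, every task directory contains a `test.jsonl` file with one JSON object per line. A minimal loading sketch in Python (the `load_task` helper is our illustration, not part of AgentBoard):

```python
import json
from pathlib import Path


def load_task(data_dir, task):
    """Read all evaluation examples for one task from its test.jsonl file."""
    path = Path(data_dir) / task / "test.jsonl"
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# Example usage after extracting data.tar.gz into ./data:
# for task in ["alfworld", "babyai", "scienceworld"]:
#     print(task, len(load_task("data", task)))
```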

## 3. Data Fields
We take an instance from the `ScienceWorld` task as an example to illustrate the data fields of the evaluation data:
```json
{
    "task": "scienceworld",
    "id": 0,
    "goal": "Your task is to find the animal with the longest life span. The animals are in the 'outside' location. Focus on the animal with the longest life span.",
    "subgoals": ["You move to the outside.", "You focus on the crocodile egg."],
    "difficulty": "easy",
    "additional_info": {"var": 5, "env_name": "lifespan-longest-lived"}
}
```
Details of the data fields are as follows:

| Field Name | Description |
|------------|-------------|
| `task` | The task name of the example: one of `alfworld`, `babyai`, `jericho`, `pddl`, `scienceworld`, `tool-operation`, `tool-query`, `webarena`, `webshop`. |
| `id` | The ID of the example. |
| `subgoals` | The subgoals of the example. |
| `goal` | The goal of the example. |
| `difficulty` | The difficulty of the example: `easy` or `hard`. |
| `additional_info` | Task-specific additional information; its contents vary from task to task. |

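Parsing one line of `test.jsonl` and accessing the documented fields is straightforward; the snippet below uses an abridged version of the ScienceWorld instance shown above:

```python
import json

# A raw line from scienceworld/test.jsonl (abridged from the sample above)
line = ('{"task": "scienceworld", "id": 0, '
        '"goal": "Find the animal with the longest life span.", '
        '"subgoals": ["You move to the outside.", "You focus on the crocodile egg."], '
        '"difficulty": "easy", '
        '"additional_info": {"var": 5, "env_name": "lifespan-longest-lived"}}')

example = json.loads(line)
print(example["task"])                         # scienceworld
print(len(example["subgoals"]))                # 2 sub-goals for progress tracking
print(example["additional_info"]["env_name"])  # lifespan-longest-lived
```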

## 4. Citation
```bibtex
```