Commit 6aaa9e2 (parent 9eb17fa) by shulin16: Update README.md
# MMInA: Benchmarking Multihop Multimodal Internet Agents Dataset

## Dataset Card

MMInA is a multihop, multimodal benchmark for evaluating embodied agents on compositional Internet tasks.
## Dataset Details

The dataset consists of 6 folders covering 1,050 multihop tasks across 14 evolving websites.
For each task, the JSON file provides the intent, a reference answer, and the other required information.
### Dataset Subfolders

- **normal:** 176 tasks. All of them are 2-hop or 3-hop tasks.
- **wikipedia:** 308 tasks. All tasks here are limited to Wikipedia. Some are comparison tasks and the others are simple ones. (108 of these tasks are filtered from [WebQA].)
***"task_id"*** indicates the position of the task within the current folder.

***"start_url"*** is the webpage provided to the agent for initial access.
 
***"procedure"*** refers to the evaluation method used in multi-hop tasks (as described in our paper).

For single-hop tasks, the evaluation method is specified in ***"eval_types"***, and a reference answer is provided.
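As a concrete illustration, a per-task record can be read as below. The field names (`task_id`, `start_url`, `procedure`, `eval_types`) follow the descriptions above, but the exact JSON layout and the sample values are assumptions, not taken from the dataset itself:

```python
import json

# Hypothetical example of one task record. Field names follow the README's
# descriptions; the overall layout and values here are illustrative only.
task_json = """
{
  "task_id": 12,
  "start_url": "https://en.wikipedia.org/wiki/Main_Page",
  "intent": "Find the capital of France, then look up its population.",
  "procedure": "url_match",
  "eval_types": ["string_match"],
  "reference_answer": "Paris"
}
"""

task = json.loads(task_json)
print(task["task_id"], task["start_url"])     # position in folder, initial webpage
print(task["procedure"], task["eval_types"])  # multi-hop vs. single-hop evaluation
```

For single-hop tasks the `eval_types` entry and `reference_answer` are the relevant fields; for multi-hop tasks the `procedure` field drives evaluation.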
### Dataset Sources

- **Repository:** [https://github.com/shulin16/MMInA](https://github.com/shulin16/MMInA)
- **Paper:** [https://arxiv.org/abs/2404.09992](https://arxiv.org/abs/2404.09992)
## Uses

The intended use of this dataset is to evaluate large language model (LLM) agents on their tool-use abilities in multihop multimodal tasks.
### Direct Use

To use this dataset, first select your model (either an LLM or a VLM) and implement it as an agent within the [pre-setup environment](https://github.com/shulin16/MMInA).
You can then use this dataset together with the environment to evaluate your agent's ability on compositional multihop Internet tasks.
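The workflow above can be sketched as follows. The `Agent` class and `run_task` function are hypothetical stand-ins for illustration; the actual interface is defined by the pre-setup environment in the MMInA repository:

```python
# Minimal sketch of the evaluation workflow, under assumed names.
# `Agent` and `run_task` are NOT the real MMInA API; wrap your own
# LLM/VLM call in Agent.act and adapt to the environment's interface.
from dataclasses import dataclass


@dataclass
class Task:
    task_id: int
    start_url: str
    intent: str
    reference_answer: str


class Agent:
    """Stand-in agent: replace `act` with a real LLM/VLM call."""

    def act(self, url: str, intent: str) -> str:
        return "Paris"  # toy answer for demonstration


def run_task(agent: Agent, task: Task) -> bool:
    """Run one task and score it by exact string match, one of the
    simpler single-hop evaluation types; multi-hop tasks instead use
    the 'procedure' field."""
    answer = agent.act(task.start_url, task.intent)
    return answer.strip().lower() == task.reference_answer.strip().lower()


task = Task(1, "https://en.wikipedia.org/wiki/France",
            "What is the capital of France?", "Paris")
print(run_task(Agent(), task))  # True for this toy agent
```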
### Out-of-Scope Use

This is a benchmark dataset intended for evaluation purposes only; it is not suitable for training agent models.
## Bias, Risks, and Limitations

This dataset has the following limitations:

- The evaluation of Wikipedia-related questions is not fully well-defined, owing to the open-ended nature of LLM/VLM outputs.
## Citation

**BibTeX:**

```bibtex
@misc{zhang2024mmina,
  title={MMInA: Benchmarking Multihop Multimodal Internet Agents},
  author={Ziniu Zhang and Shulin Tian and Liangyu Chen and Ziwei Liu},
  year={2024},
  eprint={2404.09992},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```