---
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

# MMInA Dataset Card

## Dataset Details

This dataset consists of six folders containing 1,050 multihop tasks over 14 evolving websites. For each task, the JSON file provides the intent with a reference answer, as well as the required information.

### Dataset Subfolders

- **normal:** 176 tasks. All are 2-hop or 3-hop tasks.
- **multi567:** 180 tasks. All 5-hop, 6-hop, and 7-hop tasks are here.
- **compare:** 100 tasks. Some 2-hop, 3-hop, and 4-hop tasks are here. Every task in this folder first requires answering a comparison question.
- **multipro:** 86 tasks. All 8-hop, 9-hop, and 10-hop tasks are here.
- **shopping:** 200 tasks. All tasks here concern items in OneStopMarket.
- **wikipedia:** 308 tasks. All tasks here are restricted to Wikipedia; some are comparison tasks and the rest are simple. (108 intents are filtered from [WebQA](https://webqna.github.io/).)

Task fields:

- ***"task_id"*** indicates the position of the task within its folder.
- ***"start_url"*** is the webpage the agent is given for initial access.
- ***"intent"*** and ***"intent_template"*** are the core of each task: the first part tells the agent the final state of each hop, the second part lists reference URLs for solving the task, and the third part is the question itself.
- ***"procedure"*** refers to the evaluation method used in multi-hop tasks (as described in our paper). For single-hop tasks, the evaluation method is given in ***"eval_types"***, and a reference answer is provided.

### Dataset Sources

- **Repository:** [https://github.com/shulin16/MMInA](https://github.com/shulin16/MMInA)
- **Paper:** [https://arxiv.org/abs/2404.09992](https://arxiv.org/abs/2404.09992)

## Uses

The intended use of this dataset is to evaluate large language model (LLM) agents on their tool-use abilities in multihop multimodal tasks.
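The task fields described above can be read with plain JSON tooling. The sketch below is a minimal example, assuming each task is stored as a standalone JSON object with the documented keys; the exact file layout is not specified here, so check the repository for the real paths.

```python
import json


def load_task(path):
    """Load one MMInA task description from a JSON file."""
    with open(path) as f:
        return json.load(f)


def summarize_task(task):
    """Pick out the key fields documented in this card."""
    return {
        "task_id": task.get("task_id"),      # position of the task within its folder
        "start_url": task.get("start_url"),  # webpage given to the agent for initial access
        "intent": task.get("intent"),        # the multihop instruction / question
    }
```

For example, `summarize_task(load_task("normal/1.json"))` would return the identifying fields of one task (the filename here is hypothetical).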
### Direct Use

To use this dataset, first select your model (an LLM or a VLM) to serve as the agent and implement it within the [pre-setup environment](https://github.com/shulin16/MMInA). You can then use this dataset together with the environment to evaluate your agent's ability on compositional multihop Internet tasks.

### Out-of-Scope Use

This is a benchmark dataset for evaluation purposes only; it is not suitable for training agent models.

## Bias, Risks, and Limitations

This dataset has the following limitation:

- The evaluation of Wikipedia-related questions is not well-defined, owing to the limited explainability of LLM/VLM outputs.

## Citation

**BibTeX:**

```
@misc{zhang2024mmina,
      title={MMInA: Benchmarking Multihop Multimodal Internet Agents},
      author={Ziniu Zhang and Shulin Tian and Liangyu Chen and Ziwei Liu},
      year={2024},
      eprint={2404.09992},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```