---
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 1K<n<10K
---

# MMInA: Benchmarking Multihop Multimodal Internet Agents Dataset

Dataset card for [MMInA](https://arxiv.org/abs/2404.09992).

MMInA is a multihop, multimodal benchmark for evaluating embodied agents on compositional Internet tasks.

## Dataset Details

This dataset consists of six folders containing 1,050 multihop tasks over 14 evolving websites. For each task, a JSON file provides the intent with a reference answer, as well as the information required to solve it.

### Dataset Subfolders

- `normal`: 176 tasks, all of them 2-hop or 3-hop.

- `multi567`: 180 tasks, covering all 5-hop, 6-hop, and 7-hop tasks.

- `compare`: 100 tasks, drawn from the 2-hop, 3-hop, and 4-hop tasks. Every task in this folder first requires answering a comparison question.

- `multipro`: 86 tasks, covering all 8-hop, 9-hop, and 10-hop tasks.

- `shopping`: 200 tasks, all concerning items in OneStopMarket.

- `wikipedia`: 308 tasks, all confined to Wikipedia. Some are comparison tasks and others are simple lookups. (108 of them are filtered from [WebQA].)

"task_id" indicates the position of this task within the current folder.

"start_url" is the webpage provided to the agent for initial access.

"intent" and "intent_template" are the core of our tasks. The first part is telling agent the final state of each hop. The second part are some reference URLs to solve the task. The third part is our question.

"procedure" refers to the evaluation method used in multi-hop tasks. (as mentioned in our paper)

For single-hop tasks, its evaluation method is reflected in 'eval_types', and a reference answer is provided.
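
For illustration, here is a minimal sketch of what a task entry might contain. All values below are hypothetical placeholders, not drawn from the dataset; only the field names follow the description above, and the exact schema may vary across subfolders.

```python
# Hypothetical task entry; values are illustrative, not taken from the dataset.
task = {
    "task_id": 12,                       # position of the task within its folder
    "start_url": "https://example.com",  # webpage the agent accesses first
    "intent": "...",                     # hop goals, reference URLs, and the question
    "intent_template": "...",            # templated form of the intent
    "procedure": "...",                  # evaluation method for multi-hop tasks
    "eval_types": ["string_match"],      # evaluation method for single-hop tasks (value assumed)
    "reference_answer": "...",           # provided reference answer (field name assumed)
}
```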

## Dataset Sources

- Paper: [MMInA: Benchmarking Multihop Multimodal Internet Agents](https://arxiv.org/abs/2404.09992)

## Uses

The intended use of this dataset is to evaluate large language model (LLM) and vision-language model (VLM) agents on their tool-use abilities in multihop multimodal tasks.

### Direct Use

To use this dataset, first select your model (either an LLM or a VLM) as the agent and implement it within the pre-set-up environment. You can then use this dataset together with the environment to evaluate your agent's ability on compositional multihop Internet tasks.
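
As a minimal sketch of this evaluation loop, assuming the dataset has been downloaded to a local `mmina/` directory, one might iterate over the task files in a subfolder and hand each task to an agent. The `run_agent` stub is a hypothetical placeholder for your own policy, not part of the dataset or an official API.

```python
import json
from pathlib import Path

def run_agent(start_url: str, intent: str) -> str:
    """Hypothetical agent hook; replace with your LLM/VLM policy."""
    return "TODO"

def evaluate_subfolder(folder: Path) -> None:
    """Iterate over the task JSON files in one subfolder and run the agent on each."""
    for task_file in sorted(folder.glob("*.json")):
        task = json.loads(task_file.read_text())
        # Each task provides the initial webpage and the intent describing the hops.
        answer = run_agent(task["start_url"], task["intent"])
        print(task["task_id"], answer)

# Example: run over the 2-/3-hop tasks in the `normal` subfolder.
evaluate_subfolder(Path("mmina/normal"))
```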

### Out-of-Scope Use

This is a benchmark dataset intended for evaluation only; it is not suitable for training agent models.

## Bias, Risks, and Limitations

This dataset has the following limitation:

- The evaluation of Wikipedia-related questions is not well defined, due to the limited explainability of LLMs/VLMs.

## Citation

BibTeX:

```bibtex
@misc{zhang2024mmina,
  title={MMInA: Benchmarking Multihop Multimodal Internet Agents},
  author={Ziniu Zhang and Shulin Tian and Liangyu Chen and Ziwei Liu},
  year={2024},
  eprint={2404.09992},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```