---
task_categories:
- question-answering
- visual-question-answering
language:
- en
tags:
- Multimodal Search
- Multimodal Long Context
size_categories:
- n<1K
configs:
- config_name: end2end
  data_files:
  - split: end2end
    path: end2end.parquet
- config_name: rerank
  data_files:
  - split: rerank
    path: rerank.parquet
- config_name: summarization
  data_files:
  - split: summarization
    path: summarization.parquet
dataset_info:
- config_name: end2end
  features:
  - name: sample_id
    dtype: string
  - name: query
    dtype: string
  - name: query_image
    dtype: image
  - name: image_search_result
    dtype: image
  - name: area
    dtype: string
  - name: subfield
    dtype: string
  - name: timestamp
    dtype: string
  - name: gt_requery
    dtype: string
  - name: gt_answer
    dtype: string
  - name: alternative_gt_answers
    sequence: string
  splits:
  - name: end2end
    num_examples: 300
- config_name: rerank
  features:
  - name: sample_id
    dtype: string
  - name: query
    dtype: string
  - name: query_image
    dtype: image
  - name: image_search_result
    dtype: image
  - name: area
    dtype: string
  - name: subfield
    dtype: string
  - name: timestamp
    dtype: string
  - name: valid
    sequence: int32
  - name: not_sure
    sequence: int32
  - name: invalid
    sequence: int32
  - name: gt_answer
    dtype: string
  - name: website0_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website1_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website2_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website3_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website4_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website5_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website6_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website7_info
    struct:
    - name: title
      dtype: string
    - name: snippet
      dtype: string
    - name: url
      dtype: string
  - name: website0_head_screenshot
    dtype: image
  - name: website1_head_screenshot
    dtype: image
  - name: website2_head_screenshot
    dtype: image
  - name: website3_head_screenshot
    dtype: image
  - name: website4_head_screenshot
    dtype: image
  - name: website5_head_screenshot
    dtype: image
  - name: website6_head_screenshot
    dtype: image
  - name: website7_head_screenshot
    dtype: image
  splits:
  - name: rerank
    num_examples: 300
- config_name: summarization
  features:
  - name: sample_id
    dtype: string
  - name: query
    dtype: string
  - name: query_image
    dtype: image
  - name: image_search_result
    dtype: image
  - name: area
    dtype: string
  - name: subfield
    dtype: string
  - name: timestamp
    dtype: string
  - name: website_title
    dtype: string
  - name: website_snippet
    dtype: string
  - name: website_url
    dtype: string
  - name: website_original_content
    dtype: string
  - name: website_retrieved_content
    dtype: string
  - name: website_fullpage_screenshot
    dtype: image
  - name: gt_requery
    dtype: string
  - name: gt_answer
    dtype: string
  - name: alternative_gt_answers
    sequence: string
  splits:
  - name: summarization
    num_examples: 300
---

# MMSearch 🔥: Benchmarking the Potential of Large Models as Multi-modal Search Engines

Official repository for the paper "[MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines](https://huggingface.co/papers/2409.12959)".
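The three task configurations declared in the YAML header above can be loaded directly with the 🤗 `datasets` library. A minimal loading sketch (config and split names follow the header; the rest is illustrative):

```python
from datasets import load_dataset

# Each task has its own config with a single split of the same name.
end2end = load_dataset("CaraJ/MMSearch", "end2end", split="end2end")
rerank = load_dataset("CaraJ/MMSearch", "rerank", split="rerank")
summarization = load_dataset("CaraJ/MMSearch", "summarization", split="summarization")

sample = end2end[0]
print(sample["query"], "->", sample["gt_answer"])  # text fields
sample["query_image"]  # image fields are decoded as PIL images
```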
🌟 For more details, please refer to the project page with dataset exploration and visualization tools: [https://mmsearch.github.io/](https://mmsearch.github.io).

[[🌐 Webpage](https://mmsearch.github.io/)] [[📖 Paper](https://arxiv.org/pdf/2409.12959)] [[🤗 Huggingface Dataset](https://huggingface.co/datasets/CaraJ/MMSearch)] [[🏆 Leaderboard](https://mmsearch.github.io/#leaderboard)] [[🔍 Visualization](https://huggingface.co/datasets/CaraJ/MMSearch/viewer)]

## 💥 News

- **[2024.09.25]** 🌟 The [evaluation code](https://github.com/CaraJ7/MMSearch#-evaluation-by-yourself) now supports directly using models implemented in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)!
- **[2024.09.22]** 🔥 We release the [evaluation code](https://github.com/CaraJ7/MMSearch#-evaluation-by-yourself); you only need to add an inference API for your LMM!
- **[2024.09.20]** 🚀 We release the [arXiv paper](https://arxiv.org/abs/2409.12959) and all MMSearch data samples in the [Huggingface dataset](https://huggingface.co/datasets/CaraJ/MMSearch).

## 📌 ToDo

- Coming soon: *MMSearch-Engine (for new queries)*

## 👀 About MMSearch

The capabilities of **Large Multi-modal Models (LMMs)** in **multimodal search** remain insufficiently explored and evaluated. Since no existing framework allows LMMs to act as multimodal AI search engines, we first design a delicate pipeline, **MMSearch-Engine**, which enables **any LMM** to function as a multimodal AI search engine.


The overview of MMSearch-Engine.

To further evaluate the potential of LMMs in the multimodal search domain, we introduce **MMSearch**, an all-around benchmark designed to assess multimodal search performance. The benchmark contains 300 manually collected instances spanning 14 subfields, with no overlap with current LMMs' training data, ensuring that the correct answers can only be obtained through search.


The overview of MMSearch.
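Each sample carries `area` and `subfield` annotations (see the feature list in the YAML header), so the subfield coverage described above can be inspected directly; a small sketch:

```python
from collections import Counter

from datasets import load_dataset

# Tally the area / subfield annotations across the 300 end2end samples.
end2end = load_dataset("CaraJ/MMSearch", "end2end", split="end2end")
print(Counter(end2end["area"]).most_common())
print(Counter(end2end["subfield"]).most_common())
```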

In addition, we propose a **step-wise evaluation strategy** to better understand LMMs' searching capability. The models are evaluated on **three individual tasks (requery, rerank, and summarization)** and **one challenging end-to-end task** covering the complete searching process. The final score is a weighted average over the four tasks.


Outline of Evaluation Tasks, Inputs, and Outputs.
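How the four task scores are combined is defined by the official [evaluation code](https://github.com/CaraJ7/MMSearch#-evaluation-by-yourself); the sketch below only illustrates the weighted-average idea, and the weights shown are placeholders, not the official ones:

```python
# Placeholder weights for illustration only; consult the paper / evaluation
# code for the official weighting of the four tasks.
TASK_WEIGHTS = {"end2end": 0.4, "requery": 0.2, "rerank": 0.2, "summarization": 0.2}

def final_score(task_scores: dict) -> float:
    """Weighted average of per-task scores, each assumed to lie in [0, 1]."""
    assert set(task_scores) == set(TASK_WEIGHTS), "expect exactly the four task scores"
    return sum(TASK_WEIGHTS[task] * score for task, score in task_scores.items())

print(final_score({"end2end": 0.35, "requery": 0.60, "rerank": 0.50, "summarization": 0.55}))
```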

An example of LMM input, output, and ground truth for the four evaluation tasks is shown [here](figs/fig4.png).

## 🏆 Leaderboard

### Contributing to the Leaderboard

🚨 The [Leaderboard](https://mmsearch.github.io/#leaderboard) is continuously being updated, welcoming the contribution of your excellent LMMs!

## :white_check_mark: Citation

If you find **MMSearch** useful for your research and applications, please kindly cite using this BibTeX:

```latex
@article{jiang2024mmsearch,
  title={MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines},
  author={Jiang, Dongzhi and Zhang, Renrui and Guo, Ziyu and Wu, Yanmin and Lei, Jiayi and Qiu, Pengshuo and Lu, Pan and Chen, Zehui and Song, Guanglu and Gao, Peng and others},
  journal={arXiv preprint arXiv:2409.12959},
  year={2024}
}
```

## 🧠 Related Work

Explore our additional research on **Vision-Language Large Models**:

- **[MathVerse]** [MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?](https://mathverse-cuhk.github.io/)
- **[MathVista]** [MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts](https://github.com/lupantech/MathVista)
- **[LLaMA-Adapter]** [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://github.com/OpenGVLab/LLaMA-Adapter)
- **[LLaMA-Adapter V2]** [LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model](https://github.com/OpenGVLab/LLaMA-Adapter)
- **[ImageBind-LLM]** [ImageBind-LLM: Multi-modality Instruction Tuning](https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM)
- **[SPHINX]** [The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal LLMs](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX)
- **[SPHINX-X]** [Scaling Data and Parameters for a Family of Multi-modal Large Language Models](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX)
- **[Point-Bind & Point-LLM]** [Multi-modality 3D Understanding, Generation, and Instruction Following](https://github.com/ZiyuGuo99/Point-Bind_Point-LLM)
- **[PerSAM]** [Personalize Segment Anything Model with One Shot](https://github.com/ZrrSkywalker/Personalize-SAM)
- **[CoMat]** [CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching](https://caraj7.github.io/comat/)