---
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: image_source
      dtype: string
    - name: capability
      dtype: string
  splits:
    - name: test
      num_bytes: 77298608.0
      num_examples: 218
  download_size: 67180444
  dataset_size: 77298608.0
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)

# This Dataset

This is a formatted version of [MM-Vet](https://github.com/yuweihao/MM-Vet). It is used in our `lmms-eval` pipeline for one-click evaluations of large multi-modality models.

```
@misc{yu2023mmvet,
      title={MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities},
      author={Weihao Yu and Zhengyuan Yang and Linjie Li and Jianfeng Wang and Kevin Lin and Zicheng Liu and Xinchao Wang and Lijuan Wang},
      year={2023},
      eprint={2308.02490},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```
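As a rough illustration of the record schema declared in this card (`question_id`, `question`, `answer`, `image_source`, `capability`), the sketch below builds a couple of placeholder rows and filters them by capability tag. The rows and capability values are invented for illustration; real rows also carry an `image` field (decoded to an image object when loaded with the Hugging Face `datasets` library), omitted here to keep the sketch self-contained.

```python
# Illustrative placeholder rows following this card's schema; the `image`
# field is omitted so the sketch runs without downloading anything.
records = [
    {"question_id": "v1_0", "question": "What is in the image?",
     "answer": "a cat", "image_source": "example", "capability": "rec"},
    {"question_id": "v1_1", "question": "What does the sign say?",
     "answer": "stop", "image_source": "example", "capability": "ocr,rec"},
]

def by_capability(rows, cap):
    """Keep rows whose comma-separated `capability` field includes `cap`."""
    return [r for r in rows if cap in r["capability"].split(",")]

print([r["question_id"] for r in by_capability(records, "rec")])
# → ['v1_0', 'v1_1']
```

In practice the test split would be loaded with `datasets.load_dataset(...)` pointed at this repository and the same kind of per-capability filtering applied to its `test` split.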