---
license: apache-2.0
---
# [ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://vip-llava.github.io/)
ViP-Bench is a region-level multimodal model evaluation benchmark curated by the University of Wisconsin-Madison. It provides two kinds of visual prompts: (1) bounding boxes, and (2) diverse human-drawn visual prompts.
Please refer to [https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator](https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator) to use our evaluation server.
## Source annotation
In `source_image`, we provide the source plain images along with the bounding box/mask annotations. Researchers can use this grounding information to resolve the special tokens such as `` in the `"question"` entries of `vip-bench-meta-data.json`. For example, `` can be replaced by textual coordinates to evaluate region-level multimodal models.
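As a minimal sketch of this substitution step, the snippet below replaces a placeholder token in a question with the textual coordinates of its bounding box. The entry structure, field names (`question`, `bbox`), the token name `<obj>`, and the `[x1, y1, x2, y2]` coordinate format are all assumptions for illustration, not the benchmark's actual schema:

```python
# Hypothetical example entry; field names, the "<obj>" token,
# and the bbox format are assumptions, not ViP-Bench's real schema.
entry = {
    "question": "What is the object in <obj> doing?",
    "bbox": [120, 45, 260, 210],  # assumed [x1, y1, x2, y2]
}

def substitute_coords(question: str, bbox, token: str = "<obj>") -> str:
    """Replace a placeholder token with textual bounding-box coordinates."""
    coords = "[{}, {}, {}, {}]".format(*bbox)
    return question.replace(token, coords)

print(substitute_coords(entry["question"], entry["bbox"]))
# -> What is the object in [120, 45, 260, 210] doing?
```

In practice one would iterate over the entries of `vip-bench-meta-data.json`, applying the same substitution before sending each question to the model under evaluation.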