---
license: apache-2.0
---


# ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts

ViP-Bench is a region-level multimodal model evaluation benchmark curated by the University of Wisconsin-Madison. We provide two kinds of visual prompts: (1) bounding boxes, and (2) diverse human-drawn visual prompts.

Please refer to https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator to use our evaluation server.

## Source annotation

In `source_image`, we provide the source plain images along with the bounding box/mask annotations. Researchers can use this grounding information to fill in special tokens such as `<obj>` in the `"question"` entry of `vip-bench-meta-data.json`. For example, `<obj>` can be replaced by textual coordinates to evaluate region-level multimodal models.
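As a minimal sketch of the substitution step above, the snippet below replaces the `<obj>` placeholder in a question with textual box coordinates. The `[x1, y1, x2, y2]` coordinate format and the example question are illustrative assumptions; the exact textual format your model expects (e.g. normalized vs. pixel coordinates) may differ.

```python
def fill_visual_prompt(question: str, bbox: list) -> str:
    """Replace the <obj> placeholder with a textual coordinate string.

    Assumes an axis-aligned box given as [x1, y1, x2, y2] (illustrative;
    adapt the format to whatever your region-level model was trained on).
    """
    x1, y1, x2, y2 = bbox
    coord_text = f"[{x1}, {y1}, {x2}, {y2}]"
    return question.replace("<obj>", coord_text, 1)

# Hypothetical example pairing a question entry with one box annotation.
question = "What is the object in <obj> used for?"
bbox = [120, 45, 310, 220]
print(fill_visual_prompt(question, bbox))
# -> What is the object in [120, 45, 310, 220] used for?
```

If a question contains several placeholders, call `replace` once per annotated region (in order) rather than replacing all occurrences with the same box.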