---
license: mit
task_categories:
- zero-shot-classification
size_categories:
- n<1K
---
# MMVP-VLM Benchmark Datacard
## Basic Information
**Title:** MMVP-VLM Benchmark
**Description:** The MMVP-VLM (Multimodal Visual Patterns - Visual Language Models) Benchmark is designed to systematically evaluate the performance of recent CLIP-based models in understanding and processing visual patterns. It distills a subset of questions from the original MMVP benchmark into simpler language descriptions, categorizing them into distinct visual patterns. Each visual pattern is represented by 15 text-image pairs. The benchmark assesses whether CLIP models can accurately match these image-text combinations, providing insights into the capabilities and limitations of these models.
## Dataset Details
- **Content Types:** Text-Image Pairs
- **Volume:** 15 text-image pairs per visual pattern, balanced across patterns
- **Source of Data:** Subset from MMVP benchmark, supplemented with additional questions for balance
- **Data Collection Method:** Distillation and categorization of questions from MMVP benchmark into simpler language
## Usage
### Intended Use
- Evaluation of CLIP models' ability to understand and process various visual patterns.
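As a rough illustration of the matching task described above, the sketch below scores a set of paired texts and images from a precomputed similarity matrix: a pair counts as correct only when the text prefers its own image and the image prefers its own text. This is a minimal, hypothetical harness, not the benchmark's official evaluation code; the function name and the symmetric scoring rule are assumptions, and in practice `sim` would come from a CLIP model's text/image embeddings.

```python
def pair_accuracy(sim):
    """Score paired text-image lists from a similarity matrix.

    sim[i][j] is the similarity of text i to image j. Pair (i, i) is
    correct when text i is closer to image i than to any other image,
    and image i is closer to text i than to any other text. This
    symmetric criterion is an illustrative assumption, not necessarily
    the benchmark's exact metric.
    """
    n = len(sim)
    correct = 0
    for i in range(n):
        text_ok = all(sim[i][i] > sim[i][j] for j in range(n) if j != i)
        image_ok = all(sim[i][i] > sim[j][i] for j in range(n) if j != i)
        if text_ok and image_ok:
            correct += 1
    return correct / n

# Toy example: 3 pairs, one of which the model confuses.
sim = [
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.4],
    [0.5, 0.6, 0.4],  # text 2 scores image 0 higher than its own image
]
print(pair_accuracy(sim))  # 2 of 3 pairs matched correctly
```

A real run would fill `sim` with cosine similarities between CLIP text and image embeddings for each pattern's 15 pairs, then report accuracy per visual pattern.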