---
pretty_name: SPEC
task_categories:
- image-to-text
- text-to-image
- image-classification
tags:
- image
- text
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
---

# [CVPR 2024] SPEC Benchmark: Evaluating VLMs in Fine-grained and Compositional Understanding
Introduced in the CVPR 2024 paper [Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding](https://huggingface.co/papers/2312.00081).

[**Code**](https://github.com/wjpoom/SPEC) | [**🤗 Paper**](https://huggingface.co/papers/2312.00081) | [**📖 arXiv**](https://arxiv.org/abs/2312.00081)

To evaluate how well vision-language models understand fine-grained concepts, we propose a new benchmark, SPEC,
which consists of six distinct subsets distributed across the dimensions of **S**ize, **P**osition, **E**xistence, and **C**ount.
Each test case consists of an image candidate set, whose images differ only in a certain visual concept, and a text candidate set,
whose texts differ only in the corresponding language concept. A minimal sketch of the resulting matching protocol is given after the figure below.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/649bce4f200e2dff194d9883/sE65-zVjY_HXUT4-eaqZ9.png" width="90%"/>  
<br>
</p>
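
Evaluation follows a candidate-set matching protocol: a model scores every (image, text) pair within a test case and must rank the ground-truth pairing highest in both directions. The following is only an illustrative sketch of that idea, not the official evaluation code; the similarity matrix `sim` is a hypothetical stand-in for real model outputs.

```python
# Illustrative only: sim[i, j] is a hypothetical VLM score for (image i, text j);
# within a candidate set, the ground-truth pairing is the diagonal.
import numpy as np

def score_test_case(sim: np.ndarray) -> tuple[float, float]:
    """Return (image-to-text accuracy, text-to-image accuracy) for one test case."""
    targets = np.arange(sim.shape[0])
    i2t = float((sim.argmax(axis=1) == targets).mean())  # best text per image
    t2i = float((sim.argmax(axis=0) == targets).mean())  # best image per text
    return i2t, t2i

# Example with a random 3x3 matrix standing in for real model scores.
print(score_test_case(np.random.rand(3, 3)))
```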

## 🔧  Usage
### install
``` shell
git clone https://github.com/wjpoom/SPEC.git
cd SPEC/
pip install -e .
```
### prepare data
* Run the following code in a Python shell, replacing `/path/to/save/data` with the directory where you want to store the data.
```python
import zipfile
import os
from huggingface_hub import hf_hub_download

data_root = '/path/to/save/data'

# download data.zip from the Hugging Face Hub into data_root
hf_hub_download(repo_id='wjpoom/SPEC', repo_type='dataset', filename='data.zip', local_dir=data_root)

# unpack the archive into data_root
with zipfile.ZipFile(os.path.join(data_root, 'data.zip'), 'r') as zip_ref:
    zip_ref.extractall(data_root)

# remove the archive once extracted
os.remove(os.path.join(data_root, 'data.zip'))
```
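To sanity-check the extraction, a quick walk over `data_root` shows what was unpacked; this sketch makes no assumption about the exact directory layout inside `data.zip`, and the notebook below gives a much richer view of the samples.

```python
# Print the top two directory levels under data_root so you can confirm
# that the data was extracted where you expect it.
import os

data_root = '/path/to/save/data'
for root, dirs, files in os.walk(data_root):
    depth = root[len(data_root):].count(os.sep)
    if depth <= 1:
        print(root, '->', sorted(dirs) or sorted(files)[:5])
```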
### explore the dataset
* We provide a 📓 notebook that enables you to visually explore the test samples in the SPEC dataset.
* Run this notebook either [locally](https://github.com/wjpoom/SPEC/blob/main/notebooks/explore_spec_local.ipynb) or online using [Colab](https://colab.research.google.com/github/wjpoom/SPEC/blob/main/notebooks/explore_spec_colab.ipynb).

### reproduce the results
* In our paper, we evaluated four popular VLMs using our SPEC dataset, namely: CLIP, BLIP, FLAVA and CoCa.
* To reproduce the results with these VLMs, you can run [this script](https://github.com/wjpoom/SPEC/blob/main/spec/run_eval.sh).
* You can also reproduce with this [local notebook](https://github.com/wjpoom/SPEC/blob/main/notebooks/evaluate_example_local.ipynb) or the online [Colab notebook](https://colab.research.google.com/github/wjpoom/SPEC/blob/main/notebooks/evaluate_example_colab.ipynb).

### evaluate custom VLMs
* If you want to evaluate your custom model on SPEC, you can follow the instructions in [this document](https://github.com/wjpoom/SPEC/blob/main/docs/evaluate_custom_model.md).
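
The exact interface is specified in that document; as a rough, hypothetical illustration, a wrapper essentially only needs to map a batch of images and a batch of texts to a score matrix.

```python
# Hypothetical sketch -- the real interface is defined in docs/evaluate_custom_model.md.
# All names below (MyVLMWrapper, encode_images, encode_texts) are illustrative only.
import numpy as np
from PIL import Image

class MyVLMWrapper:
    def __init__(self, model):
        self.model = model  # your own vision-language model

    def score(self, images: list[Image.Image], texts: list[str]) -> np.ndarray:
        """Return an (n_images, n_texts) matrix of image-text matching scores."""
        image_feats = self.model.encode_images(images)  # hypothetical call
        text_feats = self.model.encode_texts(texts)     # hypothetical call
        return image_feats @ text_feats.T
```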

## ✒️ Citation
If you use the code or data in this repo, or find our work helpful, please consider citing:

```
@inproceedings{spec2024,
  title={Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding},
  author={Peng, Wujian and Xie, Sicheng and You, Zuyao and Lan, Shiyi and Wu, Zuxuan},
  booktitle={CVPR},
  year={2024}
}
```