tianyu-z committed fc28628 (1 parent: f614c56)
Files changed (1): README.md (+55 -15)

README.md CHANGED
@@ -73,12 +73,14 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
  | GPT-4 Turbo | - | *78.74* | *88.54* | *45.15* | *65.72* | 0.2 | 8.42 | 0.0 | *8.58* |
  | GPT-4V | - | 52.04 | 65.36 | 25.83 | 44.63 | - | - | - | - |
  | GPT-4o | - | **91.55** | **96.44** | **73.2** | **86.17** | **14.87** | **39.05** | **2.2** | **22.72** |
  | Gemini 1.5 Pro | - | 62.73 | 77.71 | 28.07 | 51.9 | 1.1 | 11.1 | 0.7 | 11.82 |
  | Qwen-VL-Max | - | 76.8 | 85.71 | 41.65 | 61.18 | *6.34* | *13.45* | *0.89* | 5.4 |
  | Reka Core | - | 66.46 | 84.23 | 6.71 | 25.84 | 0.0 | 3.43 | 0.0 | 3.35 |
  | Cambrian-1 | 34B | 79.69 | 89.27 | *27.20* | 50.04 | 0.03 | 1.27 | 0.00 | 1.37 |
  | Cambrian-1 | 13B | 49.35 | 65.11 | 8.37 | 29.12 | - | - | - | - |
  | Cambrian-1 | 8B | 71.13 | 83.68 | 13.78 | 35.78 | - | - | - | - |
  | CogVLM2 | 19B | *83.25* | *89.75* | **37.98** | **59.99** | 9.15 | 17.12 | 0.08 | 3.67 |
  | CogVLM2-Chinese | 19B | 79.90 | 87.42 | 25.13 | 48.76 | **33.24** | **57.57** | **1.34** | **17.35** |
  | DeepSeek-VL | 1.3B | 23.04 | 46.84 | 0.16 | 11.89 | 0.0 | 6.56 | 0.0 | 6.46 |
@@ -89,9 +91,11 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
  | InternLM-XComposer2-VL | 7B | 46.64 | 70.99 | 0.7 | 12.51 | 0.27 | 12.32 | 0.07 | 8.97 |
  | InternLM-XComposer2-VL-4KHD | 7B | 5.32 | 22.14 | 0.21 | 9.52 | 0.46 | 12.31 | 0.05 | 7.67 |
  | InternLM-XComposer2.5-VL | 7B | 41.35 | 63.04 | 0.93 | 13.82 | 0.46 | 12.97 | 0.11 | 10.95 |
- | InternVL-V1.5 | 25.5B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
  | InternVL-V2 | 26B | 74.51 | 86.74 | 6.18 | 24.52 | 9.02 | 32.50 | 0.05 | 9.49 |
  | InternVL-V2 | 40B | **84.67** | **92.64** | 13.10 | 33.64 | 22.09 | 47.62 | 0.48 | 12.57 |
  | MiniCPM-V2.5 | 8B | 31.81 | 53.24 | 1.41 | 11.94 | 4.1 | 18.03 | 0.09 | 7.39 |
  | Monkey | 7B | 50.66 | 67.6 | 1.96 | 14.02 | 0.62 | 8.34 | 0.12 | 6.36 |
  | Qwen-VL | 7B | 49.71 | 69.94 | 2.0 | 15.04 | 0.04 | 1.5 | 0.01 | 1.17 |
@@ -100,39 +104,43 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
 
  # Model Evaluation
 
- ## Method 1 (recommended): use the evaluation script
- ```bash
- git clone https://github.com/tianyu-z/VCR.git
- ```
  ### Open-source evaluation
  We support open-source model_id:
  ```python
  ["openbmb/MiniCPM-Llama3-V-2_5",
  "OpenGVLab/InternVL-Chat-V1-5",
  "internlm/internlm-xcomposer2-vl-7b",
  "HuggingFaceM4/idefics2-8b",
  "Qwen/Qwen-VL-Chat",
  "THUDM/cogvlm2-llama3-chinese-chat-19B",
  "THUDM/cogvlm2-llama3-chat-19B",
- "echo840/Monkey-Chat",]
  ```
  For models not on this list, which are not integrated with Hugging Face, please refer to their GitHub repositories to create the evaluation pipeline. Examples of the inference logic are in `src/evaluation/inference.py`.
 
  ```bash
  pip install -r requirements.txt
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
- # Inference from the VLMs and save the results to {model_id}_{difficulty}_{language}.json
  cd src/evaluation
- python3 inference.py --dataset_handler "vcr-org/VCR-wiki-en-easy-test" --model_id "HuggingFaceM4/idefics2-8b" --device "cuda" --dtype "bf16" --save_interval 50 --resume True
-
  # Evaluate the results and save the evaluation metrics to {model_id}_{difficulty}_{language}_evaluation_result.json
- python3 evaluation_metrics.py --model_id HuggingFaceM4/idefics2-8b --output_path . --json_filename "HuggingFaceM4_idefics2-8b_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test"
-
- # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
- python3 gather_results.py --jsons_path .
  ```
 
- ### Closed-source evaluation
  We provide the evaluation script for the closed-source models in `src/evaluation/closed_source_eval.py`.
 
  You need an API key and a pre-saved testing dataset, and you need to specify the path where the data is saved.
@@ -154,14 +162,46 @@ python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "
  # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
  python3 gather_results.py --jsons_path .
  ```
 
- ## Method 2: use lmms-eval framework
  You may need to incorporate the inference method of your model if the lmms-eval framework does not support it. For details, please refer to [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/docs/model_guide.md)
  ```bash
  pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
  python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --model idefics2 --model_args pretrained="HuggingFaceM4/idefics2-8b" --tasks vcr_wiki_en_easy --batch_size 1 --log_samples --log_samples_suffix HuggingFaceM4_idefics2-8b_vcr_wiki_en_easy --output_path ./logs/
  ```
 
  `lmms-eval` supports the following VCR `--tasks` settings:
 
  * English
 
  | GPT-4 Turbo | - | *78.74* | *88.54* | *45.15* | *65.72* | 0.2 | 8.42 | 0.0 | *8.58* |
  | GPT-4V | - | 52.04 | 65.36 | 25.83 | 44.63 | - | - | - | - |
  | GPT-4o | - | **91.55** | **96.44** | **73.2** | **86.17** | **14.87** | **39.05** | **2.2** | **22.72** |
+ | GPT-4o-mini | - | 83.60 | 87.77 | 54.04 | 73.09 | 1.10 | 5.03 | 0 | 2.02 |
  | Gemini 1.5 Pro | - | 62.73 | 77.71 | 28.07 | 51.9 | 1.1 | 11.1 | 0.7 | 11.82 |
  | Qwen-VL-Max | - | 76.8 | 85.71 | 41.65 | 61.18 | *6.34* | *13.45* | *0.89* | 5.4 |
  | Reka Core | - | 66.46 | 84.23 | 6.71 | 25.84 | 0.0 | 3.43 | 0.0 | 3.35 |
  | Cambrian-1 | 34B | 79.69 | 89.27 | *27.20* | 50.04 | 0.03 | 1.27 | 0.00 | 1.37 |
  | Cambrian-1 | 13B | 49.35 | 65.11 | 8.37 | 29.12 | - | - | - | - |
  | Cambrian-1 | 8B | 71.13 | 83.68 | 13.78 | 35.78 | - | - | - | - |
+ | CogVLM | 17B | 73.88 | 86.24 | 34.58 | 57.17 | - | - | - | - |
  | CogVLM2 | 19B | *83.25* | *89.75* | **37.98** | **59.99** | 9.15 | 17.12 | 0.08 | 3.67 |
  | CogVLM2-Chinese | 19B | 79.90 | 87.42 | 25.13 | 48.76 | **33.24** | **57.57** | **1.34** | **17.35** |
  | DeepSeek-VL | 1.3B | 23.04 | 46.84 | 0.16 | 11.89 | 0.0 | 6.56 | 0.0 | 6.46 |
 
  | InternLM-XComposer2-VL | 7B | 46.64 | 70.99 | 0.7 | 12.51 | 0.27 | 12.32 | 0.07 | 8.97 |
  | InternLM-XComposer2-VL-4KHD | 7B | 5.32 | 22.14 | 0.21 | 9.52 | 0.46 | 12.31 | 0.05 | 7.67 |
  | InternLM-XComposer2.5-VL | 7B | 41.35 | 63.04 | 0.93 | 13.82 | 0.46 | 12.97 | 0.11 | 10.95 |
+ | InternVL-V1.5 | 26B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
  | InternVL-V2 | 26B | 74.51 | 86.74 | 6.18 | 24.52 | 9.02 | 32.50 | 0.05 | 9.49 |
  | InternVL-V2 | 40B | **84.67** | **92.64** | 13.10 | 33.64 | 22.09 | 47.62 | 0.48 | 12.57 |
+ | InternVL-V2 | 76B | 83.20 | 91.26 | 18.45 | 41.16 | 20.58 | 44.59 | 0.56 | 15.31 |
+ | InternVL-V2-Pro | - | 77.41 | 86.59 | 12.94 | 35.01 | 19.58 | 43.98 | 0.84 | 13.97 |
  | MiniCPM-V2.5 | 8B | 31.81 | 53.24 | 1.41 | 11.94 | 4.1 | 18.03 | 0.09 | 7.39 |
  | Monkey | 7B | 50.66 | 67.6 | 1.96 | 14.02 | 0.62 | 8.34 | 0.12 | 6.36 |
  | Qwen-VL | 7B | 49.71 | 69.94 | 2.0 | 15.04 | 0.04 | 1.5 | 0.01 | 1.17 |
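For reference, `EM` and `Jaccard` in the table above are exact match and Jaccard similarity between a model's restored text and the ground truth. A minimal sketch of the two metrics (illustrative only; the repository's `src/evaluation/evaluation_metrics.py` is the authoritative implementation and may tokenize and normalize differently):

```python
# Illustrative EM and token-level Jaccard similarity; not the repository's implementation.
def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip() == reference.strip())

def jaccard_similarity(prediction: str, reference: str) -> float:
    pred_tokens, ref_tokens = set(prediction.split()), set(reference.split())
    if not pred_tokens and not ref_tokens:
        return 1.0
    return len(pred_tokens & ref_tokens) / len(pred_tokens | ref_tokens)

print(exact_match("the quick fox", "the quick fox"))         # 1.0
print(jaccard_similarity("the quick fox", "the quick dog"))  # 0.5
```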
 
  # Model Evaluation
 
+ ## Method 1: use the evaluation script
  ### Open-source evaluation
  We support open-source model_id:
  ```python
  ["openbmb/MiniCPM-Llama3-V-2_5",
  "OpenGVLab/InternVL-Chat-V1-5",
  "internlm/internlm-xcomposer2-vl-7b",
+ "internlm/internlm-xcomposer2-4khd-7b",
+ "internlm/internlm-xcomposer2d5-7b",
  "HuggingFaceM4/idefics2-8b",
  "Qwen/Qwen-VL-Chat",
  "THUDM/cogvlm2-llama3-chinese-chat-19B",
  "THUDM/cogvlm2-llama3-chat-19B",
+ "THUDM/cogvlm-chat-hf",
+ "echo840/Monkey-Chat",
+ "THUDM/glm-4v-9b",
+ "nyu-visionx/cambrian-phi3-3b",
+ "nyu-visionx/cambrian-8b",
+ "nyu-visionx/cambrian-13b",
+ "nyu-visionx/cambrian-34b",
+ "OpenGVLab/InternVL2-26B",
+ "OpenGVLab/InternVL2-40B",
+ "OpenGVLab/InternVL2-Llama3-76B",]
  ```
  For models not on this list, which are not integrated with Hugging Face, please refer to their GitHub repositories to create the evaluation pipeline. Examples of the inference logic are in `src/evaluation/inference.py`.
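For a model that is not integrated with the script, a custom loop can be assembled with the `datasets` and `transformers` libraries. The sketch below is only a rough illustration (it is not `src/evaluation/inference.py`); the split name, the `stacked_image` column name, and the prompt text are assumptions to verify against the dataset card:

```python
# Minimal, illustrative inference loop over the VCR English-easy test set.
# Assumptions: split name "test", image column "stacked_image", and the prompt wording.
import json
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/idefics2-8b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

dataset = load_dataset("vcr-org/VCR-wiki-en-easy-test", split="test")
prompt = "Restore the covered text in the image."

results = {}
for idx, example in enumerate(dataset):
    messages = [{"role": "user",
                 "content": [{"type": "image"}, {"type": "text", "text": prompt}]}]
    text = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=text, images=[example["stacked_image"]],
                       return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=128)
    results[idx] = processor.batch_decode(output_ids, skip_special_tokens=True)[0]

# File name follows the {model_id}_{difficulty}_{language}.json pattern used by the scripts.
with open("HuggingFaceM4_idefics2-8b_en_easy.json", "w") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```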
 
  ```bash
  pip install -r requirements.txt
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
  cd src/evaluation
  # Evaluate the results and save the evaluation metrics to {model_id}_{difficulty}_{language}_evaluation_result.json
+ python3 evaluation_pipeline.py --dataset_handler "vcr-org/VCR-wiki-en-easy-test" --model_id HuggingFaceM4/idefics2-8b --device "cuda" --output_path . --bootstrap --end_index 5000
  ```
+ For large models like "OpenGVLab/InternVL2-Llama3-76B", you may have to use multiple GPUs for the evaluation. You can set `--device` to `None` to use all available GPUs.
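When no single device is given, the usual way to shard a model of this size with `transformers` is automatic device mapping. A sketch, assuming only that the checkpoint loads through `AutoModel` with remote code (independent of how `evaluation_pipeline.py` actually handles `--device`):

```python
# Illustrative multi-GPU loading: shard the checkpoint across all visible GPUs.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2-Llama3-76B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",        # requires `accelerate`; spreads layers over available GPUs
    trust_remote_code=True,
)
```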
 
+ ### Closed-source evaluation (using API)
  We provide the evaluation script for the closed-source models in `src/evaluation/closed_source_eval.py`.
 
  You need an API key and a pre-saved testing dataset, and you need to specify the path where the data is saved.
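As a rough illustration of what the API-based route involves (this is not `closed_source_eval.py`; the prompt and the use of the OpenAI client with `gpt-4o` are assumptions made for the sketch):

```python
# Illustrative single-image query to a closed-source model via the OpenAI API.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def restore_text(image_path: str, prompt: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(restore_text("example.png", "Restore the covered text in the image."))
```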
 
  # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
  python3 gather_results.py --jsons_path .
  ```
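For intuition, the std and confidence interval reported under `--bootstrap` amount to bootstrap resampling over the collected scores; a self-contained sketch (not the code in `gather_results.py`):

```python
# Illustrative bootstrap of the mean score with a (1 - alpha) percentile interval.
import random

def bootstrap_mean(scores, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(scores) for _ in scores]
        means.append(sum(resample) / len(resample))
    means.sort()
    lower = means[int(alpha / 2 * n_resamples)]
    upper = means[int((1 - alpha / 2) * n_resamples) - 1]
    mean = sum(scores) / len(scores)
    std = (sum((m - mean) ** 2 for m in means) / len(means)) ** 0.5  # bootstrap std of the mean
    return mean, std, (lower, upper)

print(bootstrap_mean([81.2, 79.4, 83.1, 80.7]))
```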
+ ## Method 2: use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) framework
+ You may need to incorporate the inference method of your model if the VLMEvalKit framework does not support it. For details, please refer to [here](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Development.md)
+ ```bash
+ git clone https://github.com/open-compass/VLMEvalKit.git
+ cd VLMEvalKit
+ # We use HuggingFaceM4/idefics2-8b and VCR_EN_EASY_ALL as an example
+ python run.py --data VCR_EN_EASY_ALL --model idefics2_8b --verbose
+ ```
+ You may find the supported model list [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/config.py).
 
+ `VLMEvalKit` supports the following VCR `--data` settings:
+
+ * English
+   * Easy
+     * `VCR_EN_EASY_ALL` (full test set, 5000 instances)
+     * `VCR_EN_EASY_500` (first 500 instances in the VCR_EN_EASY_ALL setting)
+     * `VCR_EN_EASY_100` (first 100 instances in the VCR_EN_EASY_ALL setting)
+   * Hard
+     * `VCR_EN_HARD_ALL` (full test set, 5000 instances)
+     * `VCR_EN_HARD_500` (first 500 instances in the VCR_EN_HARD_ALL setting)
+     * `VCR_EN_HARD_100` (first 100 instances in the VCR_EN_HARD_ALL setting)
+ * Chinese
+   * Easy
+     * `VCR_ZH_EASY_ALL` (full test set, 5000 instances)
+     * `VCR_ZH_EASY_500` (first 500 instances in the VCR_ZH_EASY_ALL setting)
+     * `VCR_ZH_EASY_100` (first 100 instances in the VCR_ZH_EASY_ALL setting)
+   * Hard
+     * `VCR_ZH_HARD_ALL` (full test set, 5000 instances)
+     * `VCR_ZH_HARD_500` (first 500 instances in the VCR_ZH_HARD_ALL setting)
+     * `VCR_ZH_HARD_100` (first 100 instances in the VCR_ZH_HARD_ALL setting)
+
+ ## Method 3: use lmms-eval framework
197
  You may need to incorporate the inference method of your model if the lmms-eval framework does not support it. For details, please refer to [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/docs/model_guide.md)
198
  ```bash
199
  pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
200
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
201
  python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --model idefics2 --model_args pretrained="HuggingFaceM4/idefics2-8b" --tasks vcr_wiki_en_easy --batch_size 1 --log_samples --log_samples_suffix HuggingFaceM4_idefics2-8b_vcr_wiki_en_easy --output_path ./logs/
202
  ```
203
+ You may find the supported model list [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/models).
204
+
205
  `lmms-eval` supports the following VCR `--tasks` settings:
206
 
207
  * English