---
license: mit
extra_gated_prompt: >-
  You agree to not use the dataset to conduct experiments that cause harm to
  human subjects. Please note that the data in this dataset may be subject to
  other agreements. Before using the data, be sure to read the relevant
  agreements carefully to ensure compliant use. Video copyrights belong to the
  original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
- video-classification
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
modalities:
- Video
- Text
configs:
- config_name: action_sequence
  data_files: json/action_sequence.json
- config_name: moving_count
  data_files: json/moving_count.json
- config_name: action_prediction
  data_files: json/action_prediction.json
- config_name: episodic_reasoning
  data_files: json/episodic_reasoning.json
- config_name: action_antonym
  data_files: json/action_antonym.json
- config_name: action_count
  data_files: json/action_count.json
- config_name: scene_transition
  data_files: json/scene_transition.json
- config_name: object_shuffle
  data_files: json/object_shuffle.json
- config_name: object_existence
  data_files: json/object_existence.json
- config_name: fine_grained_pose
  data_files: json/fine_grained_pose.json
- config_name: unexpected_action
  data_files: json/unexpected_action.json
- config_name: moving_direction
  data_files: json/moving_direction.json
- config_name: state_change
  data_files: json/state_change.json
- config_name: object_interaction
  data_files: json/object_interaction.json
- config_name: character_order
  data_files: json/character_order.json
- config_name: action_localization
  data_files: json/action_localization.json
- config_name: counterfactual_inference
  data_files: json/counterfactual_inference.json
- config_name: fine_grained_action
  data_files: json/fine_grained_action.json
- config_name: moving_attribute
  data_files: json/moving_attribute.json
- config_name: egocentric_navigation
  data_files: json/egocentric_navigation.json
language:
- en
size_categories:
- 1K<n<10K
---
# MVBench

## Dataset Description

- **Repository:** [MVBench](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb)
- **Paper:** [2311.17005](https://arxiv.org/abs/2311.17005)
- **Point of Contact:** [Kunchang Li](mailto:likunchang@pjlab.org.cn)

![images](./assert/generation.png)

We introduce a novel static-to-dynamic method for defining temporal tasks. By converting static tasks into dynamic ones, we enable the systematic generation of video tasks that demand a wide range of temporal abilities, from perception to cognition. Guided by these task definitions, we then **automatically transform public video annotations into multiple-choice QA** for task evaluation. This unique paradigm allows MVBench to be built efficiently with minimal manual intervention, while ground-truth video annotations keep the evaluation fair and avoid biased LLM scoring. The **20** temporal task examples are shown below.

![images](./assert/task_example.png)
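
Each of the 20 tasks is exposed as its own config (see the YAML header above), so a single task can be loaded directly with the `datasets` library. A minimal sketch, assuming the dataset is hosted under the hub id `OpenGVLab/MVBench` (an assumption based on this card) and that you have accepted the gating agreement:

```python
from datasets import load_dataset

# The hub id below is assumed from this card's context; the dataset is gated,
# so token=True reuses the credentials cached by `huggingface-cli login`.
ds = load_dataset("OpenGVLab/MVBench", "action_sequence", token=True)

# Each config points at a single JSON file, which maps to the "train" split.
print(ds["train"][0])
```

The JSON configs carry the QA annotations; the raw videos are stored separately in the repository and referenced by path.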

## Evaluation

An evaluation example is provided in [mvbench.ipynb](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb). Please follow this pipeline when preparing evaluation code for other MLLMs.
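
Stripped to its essentials, the pipeline builds a multiple-choice prompt per sample, queries the MLLM on the corresponding video, and scores the extracted option against the ground truth. The sketch below assumes annotation fields named `question`, `candidates`, `answer`, and `video` (inferred from the JSON files, not guaranteed) and uses `model.chat` as a hypothetical stand-in for your MLLM's inference call:

```python
def build_prompt(sample: dict) -> str:
    """Format a question and its candidates as lettered multiple-choice options."""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {opt}" for i, opt in enumerate(sample["candidates"])
    )
    return f"Question: {sample['question']}\nOptions:\n{options}\nBest option:("

def gt_letter(sample: dict) -> str:
    """Ground-truth option letter, derived from the answer's position among the candidates."""
    return chr(ord("A") + sample["candidates"].index(sample["answer"]))

correct = total = 0
for sample in ds["train"]:
    reply = model.chat(video=sample["video"], text=build_prompt(sample))  # hypothetical API
    correct += int(extract_option(reply) == gt_letter(sample))  # extractor sketched below
    total += 1
print(f"accuracy: {correct / total:.3f}")
```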

- **Preprocess**: We preserve the raw videos (high resolution, long duration, etc.) along with their annotations (start/end times, subtitles, etc.) for future exploration; as a result, decoding some raw videos (e.g., those from Perception Test) may be slow.
- **Prompt**: We explore effective system prompts that encourage better temporal reasoning in MLLMs, as well as efficient answer prompts for option extraction (a minimal extractor is sketched after this list).
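
For the option extraction mentioned in the **Prompt** point, a small parser that recovers the leading option letter is usually sufficient. This sketch only handles replies that open with the chosen option, which is exactly the behavior the `Best option:(` answer prompt encourages:

```python
import re

def extract_option(reply: str) -> str | None:
    """Return the option letter (A-E) if the reply leads with one, else None."""
    match = re.match(r"[\s(]*([A-E])\b", reply.strip())
    return match.group(1) if match else None

assert extract_option("(C) The cup was moved.") == "C"
assert extract_option("C") == "C"
assert extract_option("It is unclear.") is None
```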

## Leaderboard

While an online leaderboard is under construction, the current standings are as follows:

![images](./assert/leaderboard.png)