ynhe committed
Commit 7ff6c96
1 Parent(s): 6a8e7fe

Create README.md

Files changed (1):
  1. README.md +53 -0

README.md ADDED:
---
license: mit
extra_gated_prompt: >-
  You agree to not use the dataset to conduct experiments that cause harm to
  human subjects. Please note that the data in this dataset may be subject to
  other agreements. Before using the data, be sure to read the relevant
  agreements carefully to ensure compliant use. Video copyrights belong to the
  original video creators or platforms and are for academic research use only.
task_categories:
  - visual-question-answering
  - video-classification
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
modalities:
  - Video
  - Text
configs:
  - config_name: test
    data_files: test.json
language:
  - en
size_categories:
  - 1K<n<10K
---
# MVBench

## Dataset Description

- **Repository:** [MVBench](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb)
- **Paper:** [2311.17005](https://arxiv.org/abs/2311.17005)
- **Point of Contact:** [Kunchang Li](mailto:likunchang@pjlab.org.cn)
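
Given the `test` configuration declared in the YAML header (backed by `test.json`), the annotations can presumably be loaded with the 🤗 `datasets` library. This is a minimal sketch: the Hub id `OpenGVLab/MVBench` is an assumption based on this repository, and since the dataset is gated you must authenticate first (e.g. `huggingface-cli login`).

```python
from datasets import load_dataset

# "test" is the config name from the YAML header; it resolves to test.json.
ds = load_dataset("OpenGVLab/MVBench", "test")
print(ds)               # shows which split(s) test.json was mapped to
split = next(iter(ds))  # first (and only) split name
print(ds[split][0])     # first QA record
```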

![images](./assert/generation.png)

We introduce a novel static-to-dynamic method for defining temporal tasks. By converting static tasks into dynamic ones, we enable the systematic generation of video tasks that demand a wide range of temporal abilities, from perception to cognition. Guided by these task definitions, we then **automatically transform public video annotations into multiple-choice QA** for task evaluation. This paradigm allows MVBench to be built efficiently with minimal manual intervention, while ensuring evaluation fairness through ground-truth video annotations and avoiding biased LLM scoring. Examples of the **20** temporal tasks are shown below.

![images](./assert/task_example.png)
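
To make the QA format concrete, a record in `test.json` can be pictured roughly as below. The field names (`video`, `question`, `candidates`, `answer`) are illustrative assumptions, not a guaranteed schema; consult `test.json` itself for the authoritative keys.

```json
{
  "video": "clip_1234.mp4",
  "question": "What happened after the person picked up the cup?",
  "candidates": [
    "Put it down.",
    "Drank from it.",
    "Threw it away.",
    "Washed it."
  ],
  "answer": "Drank from it."
}
```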

## Evaluation

An evaluation example is provided in [mvbench.ipynb](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb). Please follow that pipeline when preparing evaluation code for other MLLMs.
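
In outline, the evaluation loop looks roughly like the sketch below. It assumes the illustrative record schema shown earlier; `model.chat` is a hypothetical stand-in for your MLLM's video-QA call, the prompt template is a simplification of the notebook's, and `extract_option` is sketched after the list below.

```python
import json
import string

# Load the raw annotations (or use `datasets` as shown above).
with open("test.json") as f:
    samples = json.load(f)

correct = 0
for sample in samples:
    # Render candidates as lettered options: (A) ..., (B) ...
    options = [
        f"({letter}) {cand}"
        for letter, cand in zip(string.ascii_uppercase, sample["candidates"])
    ]
    prompt = (
        f"Question: {sample['question']}\n"
        "Options:\n" + "\n".join(options) + "\n"
        "Answer with the option's letter from the given choices directly."
    )
    # Hypothetical inference call; substitute your MLLM's API here.
    reply = model.chat(video=sample["video"], text=prompt)

    gt_letter = string.ascii_uppercase[sample["candidates"].index(sample["answer"])]
    correct += int(extract_option(reply) == gt_letter)

print(f"Accuracy: {correct / len(samples):.2%}")
```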

- **Preprocess**: We preserve the raw videos (high resolution, long duration, etc.) along with their annotations (start/end times, subtitles, etc.) for future exploration, so decoding some raw videos, such as those from Perception Test, may be slow.
- **Prompt**: We explore effective system prompts that encourage better temporal reasoning in MLLMs, as well as efficient answer prompts for option extraction (a minimal parser sketch follows this list).
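
For the option-extraction step, one simple approach (not necessarily the exact parser used in the notebook) is to scan the reply for the first standalone option letter:

```python
import re

def extract_option(reply: str) -> str | None:
    """Return the first option letter A-D in a model reply, or None.

    Handles forms like "(B)", "B.", "Answer: B", or a bare trailing "B".
    """
    match = re.search(r"\(?([A-D])\)?[.:\s]|\(?([A-D])\)?$", reply.strip())
    if match:
        return match.group(1) or match.group(2)
    return None
```

This is deliberately naive (e.g. a reply beginning "A person..." would be misread as option A); in practice it is worth falling back to matching the candidate text itself when the reply does not lead with a letter.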

## Leaderboard

While an online leaderboard is under construction, the current standings are as follows:

![images](./assert/leaderboard.png)