Update README.md
README.md
CHANGED
@@ -50,7 +50,7 @@ size_categories:
 </div>
 
 ### Updates
-- <h4 style="color:red">25 October 2024:
+- <h4 style="color:red">25 October 2024: Please redownload dataset due to removal of duplicated samples for Action Sequence and Unexpected Action.</h4>
 
 # TVBench
 TVBench is a new benchmark specifically created to evaluate temporal understanding in video QA. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative.
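
Because this commit removes duplicated samples for Action Sequence and Unexpected Action, a previously cached copy of the dataset will be stale. Below is a minimal sketch of forcing a fresh download with the `huggingface_hub` library; the repository id used here is an assumption, not something stated in this diff, so check the dataset page for the actual value.

```python
from huggingface_hub import snapshot_download

# Re-download the full dataset snapshot, bypassing the local cache so the
# de-duplicated Action Sequence / Unexpected Action files are fetched.
local_dir = snapshot_download(
    repo_id="FunAILab/TVBench",  # assumed repo id (hypothetical) -- verify on the dataset page
    repo_type="dataset",
    force_download=True,         # ignore any cached copy from before this commit
)
print(f"Fresh copy downloaded to {local_dir}")
```

Alternatively, if you load the benchmark through the `datasets` library, passing `download_mode="force_redownload"` to `load_dataset` achieves the same cache bypass.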