---
license: other
license_name: bsd-3-clause
license_link: https://github.com/TencentARC/TimeLens/blob/main/LICENSE
language:
- en
task_categories:
- video-text-to-text
pretty_name: TimeLens
size_categories:
- 1K<n<10K
---

# TimeLens-Bench

📖 [**Paper**](TODO) | 💻 [**Code**](https://github.com/TencentARC/TimeLens) | 🌐 [**Project Page**](https://timelens-arc-lab.github.io/) | 🤗 [**Model & Data**](https://huggingface.co/collections/TencentARC/TimeLens)

## ✨ Dataset Description

**TimeLens-Bench** is a comprehensive, high-quality evaluation benchmark for video temporal grounding, proposed in our paper [TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs](TODO).

During our annotation process, we identified critical quality issues in existing datasets and performed extensive manual corrections. We observed a **dramatic re-ranking of models** on TimeLens-Bench compared to legacy benchmarks, demonstrating that TimeLens-Bench provides *more reliable* evaluation (more details in our [paper](TODO) and [project page](https://timelens-arc-lab.github.io/)).

<img src="https://cdn-uploads.huggingface.co/production/uploads/65372e922c6ef949b22c26d9/31s82GO6S5LKlW0-kcIFU.png" alt="performance_comparison_charades-1" width="35%">

### Dataset Statistics

The benchmark consists of manually refined versions of **three** widely used evaluation datasets for video temporal grounding:

| Refined Dataset | # Videos | Avg. Duration (s) | # Annotations | Source Dataset | Source Dataset Link |
| :--- | :---: | :---: | :---: | :--- | :--- |
| **Charades-TimeLens** | 1313 | 29.6 | 3363 | Charades-STA | https://github.com/jiyanggao/TALL |
| **ActivityNet-TimeLens** | 1455* | 134.9 | 4500 | ActivityNet-Captions | https://cs.stanford.edu/people/ranjaykrishna/densevid/ |
| **QVHighlights-TimeLens** | 1511 | 149.6 | 1541 | QVHighlights | https://github.com/jayleicn/moment_detr |

<small>* To reduce the high evaluation cost of the excessively large ActivityNet-Captions, we sampled videos uniformly across duration bins to curate ActivityNet-TimeLens.</small>
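
The actual curation code is not part of this card; purely for intuition, the minimal sketch below shows what uniform sampling across duration bins could look like. The record field (`duration`), the bin count, and the per-bin quota are illustrative assumptions, not the authors' pipeline.

```python
import random
from collections import defaultdict

def sample_uniform_by_duration(videos, num_bins=10, per_bin=150, seed=0):
    """Bucket videos by duration, then draw evenly from each bucket.

    `videos` is assumed to be a list of dicts with a hypothetical
    "duration" key (seconds); the real pipeline may differ.
    """
    durations = [v["duration"] for v in videos]
    lo, hi = min(durations), max(durations)
    width = (hi - lo) / num_bins or 1.0  # guard against all-equal durations

    # Assign each video to an equal-width duration bin.
    bins = defaultdict(list)
    for v in videos:
        idx = min(int((v["duration"] - lo) / width), num_bins - 1)
        bins[idx].append(v)

    # Draw up to `per_bin` videos from every bin with a fixed seed.
    rng = random.Random(seed)
    sampled = []
    for idx in range(num_bins):
        bucket = bins[idx]
        sampled.extend(rng.sample(bucket, min(per_bin, len(bucket))))
    return sampled
```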

## 🚀 Usage

To download and use the benchmark for evaluation, please refer to the instructions in our [**GitHub Repository**](https://github.com/TencentARC/TimeLens#-evaluation-on-timelens-bench).
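
If you only need the raw files, a standard `huggingface_hub` download along the lines below should also work; the `repo_id` here is an assumption inferred from this card, so treat the GitHub instructions above as authoritative.

```python
from huggingface_hub import snapshot_download

# NOTE: repo_id is an illustrative assumption; substitute the actual
# dataset repository name from the collection linked above.
local_dir = snapshot_download(
    repo_id="TencentARC/TimeLens-Bench",
    repo_type="dataset",
)
print(f"Benchmark files are in: {local_dir}")
```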

## 📜 Citation

If you find our work helpful for your research and applications, please cite our paper:

```bibtex
TODO
```