---
license: apache-2.0
task_categories:
- video-text-to-text
- visual-question-answering
language:
- en
tags:
- video
- long-video
- reasoning
- tool-calling
- multimodal
size_categories:
- 100K<n<1M
---

# LongVT-Source

This repository contains the source video and image files for the [LongVT](https://github.com/EvolvingLMMs-Lab/LongVT) project.

## Overview

LongVT is an end-to-end agentic framework that enables "Thinking with Long Videos" via interleaved Multimodal Chain-of-Tool-Thought (iMCoTT). This dataset provides the raw media files referenced by the training annotations in [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet).

## Dataset Structure

The source files are organized by dataset type and stored as zip archives:

### Training Data

| Source | Description | Files |
|--------|-------------|-------|
| `longvideoreason` | Long video reasoning data | 66 zips |
| `videor1` | Video-R1 CoT data | 13 zips |
| `longvideoreflection` | Long video reflection data | 27 zips |
| `selftrace` | Self-distilled iMCoTT traces | 6 zips |
| `tvg` | Temporal video grounding data | 2 zips |
| `geminicot` | Gemini-distilled CoT data | 2 zips |
| `llavacot` | LLaVA CoT data | 1 zip |
| `openvlthinker` | OpenVLThinker data | 1 zip |
| `wemath` | WeMath data | 1 zip |
| `selfqa` | Self-curated QA for RL | 1 zip |
| `rl_val` | RL validation data | 1 zip |

### Evaluation Data

| Source | Description | Files |
|--------|-------------|-------|
| `videosiaheval` | VideoSIAH-Eval benchmark videos | 12 zips |
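
To check which archives are actually present for each source before downloading anything, the repository contents can be enumerated through the Hub API. This is a minimal sketch using `huggingface_hub.list_repo_files`; the grouping assumes the archives follow the `<source>_<index>.zip` naming used in the download example in the next section.

```python
from collections import Counter

from huggingface_hub import list_repo_files

# List every file in the dataset repository (the zip archives).
files = list_repo_files("longvideotool/LongVT-Source", repo_type="dataset")

# Group archives by source prefix, e.g. "longvideoreason_1.zip" -> "longvideoreason".
# Assumes the "<source>_<index>.zip" naming shown in the Download section below.
counts = Counter(name.rsplit("_", 1)[0] for name in files if name.endswith(".zip"))
for source, n in sorted(counts.items()):
    print(f"{source}: {n} archive(s)")
```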

## Download

```bash
# Install huggingface_hub
pip install huggingface_hub

# Download all source files
huggingface-cli download longvideotool/LongVT-Source --repo-type dataset --local-dir ./source

# Or download specific files
huggingface-cli download longvideotool/LongVT-Source longvideoreason_1.zip --repo-type dataset --local-dir ./source
```
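
The same downloads can be scripted from Python. Below is a minimal sketch using `huggingface_hub.snapshot_download`; the `allow_patterns` filter assumes the `longvideoreason_*.zip` naming shown in the CLI example above.

```python
from huggingface_hub import snapshot_download

# Download every archive in the dataset repo into ./source.
snapshot_download(
    repo_id="longvideotool/LongVT-Source",
    repo_type="dataset",
    local_dir="./source",
)

# Or fetch only one source's archives; the pattern assumes the
# "longvideoreason_*.zip" naming used in the CLI example above.
snapshot_download(
    repo_id="longvideotool/LongVT-Source",
    repo_type="dataset",
    local_dir="./source",
    allow_patterns=["longvideoreason_*.zip"],
)
```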

## Usage

After downloading, extract the zip files to obtain the source media:

```bash
cd source
unzip "*.zip"
```

The extracted paths will match those referenced in the [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet) annotations.
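
If `unzip` is not available, the archives can also be extracted from Python. A minimal sketch using the standard-library `zipfile` module, assuming the zips were downloaded directly under `./source`:

```python
import zipfile
from pathlib import Path

source_dir = Path("./source")

# Extract every downloaded archive in place so that the media paths
# line up with the annotations in LongVT-Parquet.
for archive in sorted(source_dir.glob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(source_dir)
    print(f"extracted {archive.name}")
```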

## Related Resources

- 📄 **Paper**: [arXiv:2511.20785](https://arxiv.org/abs/2511.20785)
- 🌐 **Project Page**: [LongVT Website](https://evolvinglmms-lab.github.io/LongVT/)
- 💻 **Code**: [GitHub Repository](https://github.com/EvolvingLMMs-Lab/LongVT)
- 📊 **Annotations**: [LongVT-Parquet](https://huggingface.co/datasets/longvideotool/LongVT-Parquet)
- 🤗 **Models**: [LongVT Collection](https://huggingface.co/collections/lmms-lab/longvt)

## Citation

```bibtex
@article{yang2025longvt,
  title={LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling},
  author={Yang, Zuhao and Wang, Sudong and Zhang, Kaichen and Wu, Keming and Leng, Sicong and Zhang, Yifan and Li, Bo and Qin, Chengwei and Lu, Shijian and Li, Xingxuan and Bing, Lidong},
  journal={arXiv preprint arXiv:2511.20785},
  year={2025}
}
```

## License

This dataset is released under the Apache 2.0 License.