ShuhuaiRen committed
Commit 078dbbd
1 Parent(s): caff49b
Upload 2 files

- .gitignore +6 -0
- README.md +67 -120
.gitignore ADDED
@@ -0,0 +1,6 @@
+__pycache__/
+.DS_Store
+.idea
+.ipynb_checkpoints
+*.ipynb
+*asr.json
README.md CHANGED

@@ -1,5 +1,7 @@
 ---
-license:
+license: cc-by-4.0
+language:
+- en
 ---
 
 
@@ -24,43 +26,43 @@ Our dataset compiles diverse tasks of time-sensitive long video understanding, i
 
 | Task                          | #Instructions |
 |-------------------------------|---------------|
-| Dense Video Captioning        |
-| Temporal Video Grounding      |
-| Video Summarization           |
-| Video Highlight Detection     |
-| Step Localization             |
-| Transcribed Speech Generation |
-| Total                         |
+| Dense Video Captioning        | 6             |
+| Temporal Video Grounding      | 6             |
+| Video Summarization           | 6             |
+| Video Highlight Detection     | 6             |
+| Step Localization             | 6             |
+| Transcribed Speech Generation | 6             |
+| Total                         | 36            |
 
 ### Task Statistics
 
-| Task | Description | #Train |
-|------|-------------|--------|
-| Dense Video Captioning | detects a series of events in the given video and outputs the corresponding timestamps and descriptions |
-| Temporal Video Grounding | predict a timestamp boundary including the start and end time in the video given a natural language query |
-| Video Summarization | create a compressed set of frames or clip shots to represent the most informative content of the given video |
-| Video Highlight Detection | identify the most exciting, impressive, or emotional moments that may not cover the full scope of the original video |
-| Step Localization | segment and describe significant steps in a long untrimmed video |
-| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video |
-| Total | - |
+| Task | Description | #Train |
+|-------------------------------|------------------------------------------------------------------------------------------------------------------------|---------|
+| Dense Video Captioning | detects a series of events in the given video and outputs the corresponding timestamps and descriptions | 16,342 |
+| Temporal Video Grounding | predict a timestamp boundary including the start and end time in the video given a natural language query | 60,471 |
+| Video Summarization | create a compressed set of frames or clip shots to represent the most informative content of the given video | 75 |
+| Video Highlight Detection | identify the most exciting, impressive, or emotional moments that may not cover the full scope of the original video | 6,858 |
+| Step Localization | segment and describe significant steps in a long untrimmed video | 9,488 |
+| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video | 31,627 |
+| Total | - | 124,861 |
 
 ### Detailed Dataset Statistics
 
-| Task | Dataset | #Train
-|------|---------|--------|
-| Dense Video Captioning | `ActivityNet Captions` |
-| | `ViTT` |
-| | `YouCook2` |
-| Temporal Video Grounding | `DiDeMo` |
-| | `QuerYD` |
-| | `HiREST_grounding` |
-| | `Charades-STA` |
-| Video Summarization | `TVSum` |
-| | `SumMe` |
-| Video Highlight Detection | `QVHighlights` |
-| Step Localization | `COIN` |
-| | `HiREST_step` |
-| Transcribed Speech Generation | `YT-Temporal` |
+| Task | Dataset | #Train |
+|-------------------------------|------------------------|--------|
+| Dense Video Captioning | `ActivityNet Captions` | 10,009 |
+| | `ViTT` | 5,141 |
+| | `YouCook2` | 1,192 |
+| Temporal Video Grounding | `DiDeMo` | 33,002 |
+| | `QuerYD` | 14,602 |
+| | `HiREST_grounding` | 459 |
+| | `Charades-STA` | 12,408 |
+| Video Summarization | `TVSum` | 50 |
+| | `SumMe` | 25 |
+| Video Highlight Detection | `QVHighlights` | 6,858 |
+| Step Localization | `COIN` | 9,029 |
+| | `HiREST_step` | 459 |
+| Transcribed Speech Generation | `YT-Temporal` | 31,627 |
 
 ## Dataset Structure
 
@@ -106,9 +108,9 @@ dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
 train_set = dataset["train"]
 
 for train_instance in train_set:
-    question = train_instance["
-    answer = train_instance["
-    video_path = train_instance["
+    question = train_instance["question"]  # str
+    answer = train_instance["answer"]  # str
+    video_path = train_instance["video_path"]  # str
 ```
 
 ### Data Fields
@@ -118,10 +120,9 @@ import datasets
 
 features = datasets.Features(
     {
-        "
-        "
-        "
-        "outputs": datasets.Value("string"),
+        "video_path": datasets.Value("string"),
+        "question": datasets.Value("string"),
+        "answer": datasets.Value("string"),
     }
 )
 ```
@@ -134,50 +135,21 @@ features = datasets.Features(
 
 ### Source Data
 
-| Task
-| | `ocr-vqa` [12] | [Source](https://ocr-vqa.github.io/) |
-| | `st-vqa` [13] | [Source](https://rrc.cvc.uab.es/?ch=11) |
-| | `text-vqa` [14] | [Source](https://textvqa.org/) |
-| | `gqa` [15] | [Source](https://cs.stanford.edu/people/dorarad/gqa/about.html) |
-| Knowledgeable Visual QA | `okvqa` [16] | [Source](https://okvqa.allenai.org/) |
-| | `a-okvqa` [17] | [Source](https://allenai.org/project/a-okvqa/home) |
-| | `science-qa` [18] | [Source](https://scienceqa.github.io/) |
-| | `viquae` [19] | [Source](https://github.com/PaulLerner/ViQuAE) |
-| Reasoning | `clevr` [20] | [Source](https://cs.stanford.edu/people/jcjohns/clevr/) |
-| | `nlvr` [21] | [Source](https://lil.nlp.cornell.edu/nlvr/) |
-| | `vcr` [22] | [Source](https://visualcommonsense.com/) |
-| | `visual-mrc` [23] | [Source](https://github.com/nttmdlab-nlp/VisualMRC) |
-| | `winoground` [24] | [Source](https://huggingface.co/datasets/facebook/winoground) |
-| Generation | `vist` [25] | [Source](https://visionandlanguage.net/VIST/) |
-| | `visual-dialog` [26] | [Source](https://visualdialog.org/) |
-| | `multi30k` [27] | [Source](https://github.com/multi30k/dataset) |
-| Chinese | `fm-iqa` [28] | [Source](https://paperswithcode.com/dataset/fm-iqa) |
-| | `coco-cn` [29] | [Source](https://github.com/li-xirong/coco-cn) |
-| | `flickr8k-cn` [30] | [Source](https://github.com/li-xirong/flickr8kcn) |
-| | `chinese-food` [31] | [Source](https://sites.google.com/view/chinesefoodnet) |
-| | `mmchat` [32] | [Source](https://github.com/silverriver/MMChat) |
-| Video | `ss` [33] | [Source](https://developer.qualcomm.com/software/ai-datasets/something-something) |
-| | `ivqa` [34] | [Source](https://antoyang.github.io/just-ask.html) |
-| | `msvd-qa` [35] | [Source](https://paperswithcode.com/dataset/msvd) |
-| | `activitynet-qa` [36] | [Source](https://github.com/MILVLG/activitynet-qa) |
-| | `msrvtt` [35] | [Source](https://paperswithcode.com/dataset/msr-vtt) |
-| | `msrvtt-qa` [37] | [Source](https://paperswithcode.com/sota/visual-question-answering-on-msrvtt-qa-1) |
+| Task | Dataset [Citation] | Source |
+|-------------------------------|----------------------------|----------------------------------------------------------------------------------|
+| Dense Video Captioning | `ActivityNet Captions` [1] | [Source](https://cs.stanford.edu/people/ranjaykrishna/densevid/) |
+| | `ViTT` [2] | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT) |
+| | `YouCook2` [3] | [Source](http://youcook2.eecs.umich.edu/) |
+| Temporal Video Grounding | `DiDeMo` [4] | [Source](https://github.com/LisaAnne/TemporalLanguageRelease) |
+| | `QuerYD` [5] | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/) |
+| | `HiREST_grounding` [6] | [Source](https://hirest-cvpr2023.github.io/) |
+| | `Charades-STA` [7] | [Source](https://github.com/jiyanggao/TALL) |
+| Video Summarization | `TVSum` [8] | [Source](https://github.com/yalesong/tvsum) |
+| | `SumMe` [9] | [Source](http://classif.ai/dataset/ethz-cvl-video-summe/) |
+| Video Highlight Detection | `QVHighlights` [10] | [Source](https://github.com/jayleicn/moment_detr/tree/main/data) |
+| Step Localization | `COIN` [11] | [Source](https://coin-dataset.github.io/) |
+| | `HiREST_step` [6] | [Source](https://hirest-cvpr2023.github.io/) |
+| Transcribed Speech Generation | `YT-Temporal` [12] | [Source](https://rowanzellers.com/merlot/#data) |
 
 ### Annotations
 
@@ -225,40 +197,15 @@ designed to enable the development of general-purpose video agents.
 
 ## References
 
-- [1]
-- [2]
-- [3]
-- [4]
-- [5]
-- [6]
-- [7]
-- [8]
-- [9]
-- [10]
-- [11]
-- [12]
-- [13] Scene Text Visual Question Answering
-- [14] Towards VQA Models That Can Read
-- [15] GQA: A new dataset for real-world visual reasoning and compositional question answering
-- [16] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
-- [17] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
-- [18] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
-- [19] ViQuAE: a dataset for knowledge-based visual question answering about named entities
-- [20] CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning
-- [21] A Corpus of Natural Language for Visual Reasoning
-- [22] From recognition to cognition: Visual Commonsense Reasoning
-- [23] VisualMRC: Machine reading comprehension on document images
-- [24] WinoGround: Probing vision and language models for visio-linguistic compositionality
-- [25] Visual Storytelling
-- [26] Visual Dialog
-- [27] Multi30k: Multilingual english-german image descriptions
-- [28] Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question
-- [29] COCO-CN for cross-lingual image tagging, captioning, and retrieval
-- [30] Adding Chinese Captions to Images
-- [31] ChineseFoodNet: A large-scale image dataset for chinese food recognition
-- [32] MMChat: Multi-Modal Chat Dataset on Social Media
-- [33] The "Something Something" Video Database for Learning and Evaluating Visual Common Sense
-- [34] Just Ask: Learning to answer questions from millions of narrated videos
-- [35] Video Question Answering via Gradually Refined Attention over Appearance and Motion
-- [36] ActivityNet-qa: A dataset for understanding complex web videos via question answering
-- [37] MSR-VTT: A large video description dataset for bridging video and language
+- [1] Dense-Captioning Events in Videos
+- [2] Multimodal Pretraining for Dense Video Captioning
+- [3] Towards Automatic Learning of Procedures from Web Instructional Videos
+- [4] Localizing Moments in Video with Natural Language
+- [5] QuerYD: A video dataset with high-quality text and audio narrations
+- [6] Hierarchical Video-Moment Retrieval and Step-Captioning
+- [7] TALL: Temporal Activity Localization via Language Query
+- [8] TVSum: Summarizing Web Videos Using Titles
+- [9] Creating Summaries from User Videos
+- [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
+- [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis
+- [12] MERLOT: Multimodal Neural Script Knowledge Models
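Likewise, the schema in the new Data Fields section can be exercised directly. The sketch below builds a tiny in-memory dataset with the three declared string fields; the example values are invented purely for illustration.

```python
import datasets

# The three string fields declared in the new README's Data Fields section.
features = datasets.Features(
    {
        "video_path": datasets.Value("string"),
        "question": datasets.Value("string"),
        "answer": datasets.Value("string"),
    }
)

# Wrap made-up values (illustration only) in a Dataset that uses this schema.
toy = datasets.Dataset.from_dict(
    {
        "video_path": ["videos/example.mp4"],
        "question": ["Localize the moment when the person starts cooking."],
        "answer": ["The event happens from 12.0 to 45.5 seconds."],
    },
    features=features,
)
print(toy.features)
print(toy[0])
```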