jylins committed on
Commit a37c490
1 Parent(s): 25f4f4a

Initial Commit

Files changed (5)
  1. .gitattributes +3 -0
  2. README.md +94 -0
  3. test_videoxum.json +3 -0
  4. train_videoxum.json +3 -0
  5. val_videoxum.json +3 -0
.gitattributes CHANGED
@@ -53,3 +53,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ train_videoxum.json filter=lfs diff=lfs merge=lfs -text
+ val_videoxum.json filter=lfs diff=lfs merge=lfs -text
+ test_videoxum.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,94 @@
---
license: apache-2.0
task_categories:
- summarization
language:
- en
tags:
- cross-modal-video-summarization
- video-summarization
- video-captioning
pretty_name: VideoXum
size_categories:
- 10K<n<100K
---

# Dataset Card for VideoXum

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Splits](#dataset-splits)
- [Dataset Resources](#dataset-resources)
- [Dataset Fields](#dataset-fields)
- [Annotation Sample](#annotation-sample)
- [Citation](#citation)

## Dataset Description
- **Homepage:** https://videoxum.github.io/
- **Paper:** https://arxiv.org/abs/2303.12060

### Dataset Summary
The VideoXum dataset introduces a new task that extends video summarization from single-modal to cross-modal, aiming to create video summaries that contain both visual and textual elements with semantic coherence. Built upon ActivityNet Captions, VideoXum is a large-scale dataset of over 14,000 long-duration, open-domain videos. Each video is paired with 10 corresponding video summaries, amounting to 140,000 video-text summary pairs in total.

### Languages
The textual summaries in the dataset are in English.

## Dataset Structure

### Dataset Splits
|             | train | validation |  test | Overall |
|-------------|------:|-----------:|------:|--------:|
| # of videos | 8,000 |      2,001 | 4,000 |  14,001 |

### Dataset Resources
- `train_videoxum.json`: annotations of the training set
- `val_videoxum.json`: annotations of the validation set
- `test_videoxum.json`: annotations of the test set

### Dataset Fields
- `video_id`: `str`, a unique identifier for the video.
- `duration`: `float`, total duration of the video in seconds.
- `sampled_frames`: `int`, the number of frames uniformly sampled from the source video at 1 fps.
- `timestamps`: `List[List[float]]`, a list of [start, end] pairs in seconds, each marking a segment within the video.
- `tsum`: `List[str]`, textual summaries, one per video segment defined by `timestamps`.
- `vsum`: `List[List[List[float]]]`, visual summaries given as key-frame spans within the video segments defined by `timestamps`; the (3 x 10) dimensions in the sample below indicate that each video segment was annotated by 10 different workers.
- `vsum_onehot`: `List[List[bool]]`, a one-hot matrix derived from `vsum`; the (10 x 83) dimensions denote one-hot labels spanning the entire length of the video (83 sampled frames), as annotated by 10 workers.

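The relation between `vsum` spans and `vsum_onehot` can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the exact boundary/rounding convention they used is an assumption here.

```python
# Sketch (not from the dataset card): converting one worker's `vsum`
# key-frame spans into a one-hot frame vector at 1 fps, mirroring the
# shape of `vsum_onehot`. Boundary handling is an assumed convention.

def spans_to_onehot(spans, n_frames):
    """Mark every 1-fps frame whose timestamp falls inside any [start, end] span."""
    onehot = [0] * n_frames
    for start, end in spans:
        for t in range(n_frames):  # frame t corresponds to second t
            if start <= t <= end:
                onehot[t] = 1
    return onehot

# One worker's key-frame spans for the sample video (duration 82.73 s, 83 frames):
spans = [[7.01, 12.37], [41.05, 45.04], [65.74, 69.28]]
onehot = spans_to_onehot(spans, n_frames=83)
print(sum(onehot))  # number of selected key frames
```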
### Annotation Sample
For each video, we hired workers to annotate ten shortened video summaries.
```json
{
  'video_id': 'v_QOlSCBRmfWY',
  'duration': 82.73,
  'sampled_frames': 83,
  'timestamps': [[0.83, 19.86], [17.37, 60.81], [56.26, 79.42]],
  'tsum': ['A young woman is seen standing in a room and leads into her dancing.',
           'The girl dances around the room while the camera captures her movements.',
           'She continues dancing around the room and ends by laying on the floor.'],
  'vsum': [[[ 7.01, 12.37], ...],
           [[41.05, 45.04], ...],
           [[65.74, 69.28], ...]],  (3 x 10 dim)
  'vsum_onehot': [[[0,0,0,...,1,1,...], ...],
                  [[0,0,0,...,1,1,...], ...],
                  [[0,0,0,...,1,1,...], ...]]  (10 x 83 dim)
}
```

## Citation

```bibtex
@article{lin2023videoxum,
  author  = {Lin, Jingyang and Hua, Hang and Chen, Ming and Li, Yikang and Hsiao, Jenhao and Ho, Chiuman and Luo, Jiebo},
  title   = {VideoXum: Cross-modal Visual and Textural Summarization of Videos},
  journal = {IEEE Transactions on Multimedia},
  year    = {2023},
}
```
test_videoxum.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60d44e81ee815c0e2dc0d3eae8c49cbfcc23af67d1caf44e9dea70cc052a54d
+ size 66271547
train_videoxum.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1df2b42a15c02bbe3228178c35fe77fda29f478f07b0a8248a6bdf59c02ba703
+ size 38357573
val_videoxum.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2832f423ff5fd768a979665e30345a56f1723aacc1821986b5ec042bed30f556
+ size 9575564
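The three JSON files above are committed as Git LFS pointer files, not the data itself; `git lfs pull` replaces them with the real JSON. As a small illustration (my own sketch, not part of the repo), a pointer file can be parsed into its fields like this:

```python
# Sketch: parsing a Git LFS pointer file (version / oid / size lines)
# into a dict. Each line is "<key> <value>".
def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2832f423ff5fd768a979665e30345a56f1723aacc1821986b5ec042bed30f556
size 9575564
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # file size in bytes, as a string
```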