Antreas committed on
Commit 8a5a0e8
1 Parent(s): ec4bf99

Update README.md

Files changed (1)
  1. README.md +87 -1
README.md CHANGED
@@ -72,7 +72,93 @@ dataset_info:
  num_examples: 61389
  download_size: 2058391040534
  dataset_size: 2118230876870.25
+ license: cc-by-4.0
+ task_categories:
+ - zero-shot-classification
+ tags:
+ - video
+ - audio
+ - text
+ - image
+ - tetramodal
+ - multimodal
+ - youtube
+ - wikipedia
+ pretty_name: TALI
+ size_categories:
+ - 1M<n<10M
  ---
- # Dataset Card for "TALI-big-2.0"
+ # Dataset Card for "TALI"
+
+ ## Table of Contents
+ 1. Dataset Description
+    1. Abstract
+    2. Brief Description
+ 2. Dataset Information
+    1. Modalities
+    2. Dataset Variants
+    3. Dataset Statistics
+    4. Data Fields
+    5. Data Splits
+ 3. Dataset Creation
+ 4. Dataset Use
+ 5. Additional Information
+
+ ## Dataset Description
+
+ ### Abstract
+ TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
+
+ ### Brief Description
+ TALI (Temporally and Semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search YouTube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset with multiple caption types related to both the WIT images and the YouTube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio, and video.
+
+ ## Dataset Information
+ ### Modalities
+ The TALI dataset consists of the following modalities (a sketch of one sample's structure follows the list):
+
+ 1. Image:
+    1. Wikipedia caption image
+    2. Randomly sampled image from the YouTube video
+ 2. Text:
+    1. Wikipedia Caption Text
+    2. Wikipedia Title Text
+    3. Wikipedia Main Body Text
+    4. YouTube Subtitle Text
+    5. YouTube Description Text
+    6. YouTube Title Text
+ 3. Audio:
+    1. YouTube Content Audio
+ 4. Video:
+    1. YouTube Content Video
+
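+ As a rough illustration of the modality list above, the sketch below shows one plausible way a single TALI sample could be represented in Python. The field names and types are illustrative assumptions based on the listed modalities, not the dataset's actual schema.
+
+ ```python
+ # Hypothetical structure of a single TALI sample.
+ # Field names are illustrative assumptions, not the official schema.
+ from typing import List, TypedDict
+
+
+ class TALISample(TypedDict):
+     # Image modality
+     wikipedia_caption_image: bytes         # the WIT image
+     youtube_random_video_frame: bytes      # a frame sampled from the video clip
+     # Text modality
+     wikipedia_caption_text: str
+     wikipedia_title_text: str
+     wikipedia_main_body_text: str
+     youtube_subtitle_text: str
+     youtube_description_text: str
+     youtube_title_text: str
+     # Audio modality: raw waveform for the sampled clip
+     youtube_content_audio: List[float]
+     # Video modality: frames for the sampled clip
+     youtube_content_video: List[bytes]
+ ```
+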
+ ### Dataset Variants
+ The TALI dataset comes in three variants that differ in training-set size (a loading sketch follows below):
+
+ - TALI-small: contains about 1.3 million 30-second video clips, aligned with 120K WIT entries.
+ - TALI-base: contains about 6.5 million 30-second video clips, aligned with 120K WIT entries.
+ - TALI-big: contains about 13 million 30-second video clips, aligned with 120K WIT entries.
+
+ The validation and test sets are identical across all three variants: each contains about 80K videos aligned to 8K Wikipedia entries (10 subclips per Wikipedia entry).
+
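+ The snippet below is a minimal sketch of how one of the variants might be loaded with the Hugging Face `datasets` library. The repository id used here is an assumption for illustration; substitute the actual id of the variant you want.
+
+ ```python
+ # Minimal loading sketch using the Hugging Face `datasets` library.
+ # The repository id below is a hypothetical placeholder for the TALI-big variant.
+ from datasets import load_dataset
+
+ dataset = load_dataset(
+     "Antreas/TALI-big",   # hypothetical repo id; replace with the actual variant id
+     split="train",
+     streaming=True,       # the download size is roughly 2 TB, so streaming avoids a full download
+ )
+
+ first_example = next(iter(dataset))
+ print(first_example.keys())
+ ```
+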
+ ### Dataset Statistics
+ TBA
+
+ ## Dataset Creation
+ The TALI dataset was created by starting from the WIT dataset and using either the context_page_description or page_title as a source query to search YouTube for videos that were Creative Commons licensed and not age-restricted. The titles of the top 100 results were compared with the source query using the text embeddings of the largest available CLIP model, and the top-1 ranked video was downloaded. Each video was broken into 30-second segments, and the top-10 segments per video were chosen based on the distance between the CLIP image embedding of the first frame of each segment and the CLIP text embedding of the video's title. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second window is chosen from the 30-second clip. The result is 200 video frames (spread throughout the 10-second window) and 160,000 audio frames (10 seconds of audio). A sketch of this sampling step follows below.
+
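+ The sketch below is an illustrative rendering of the sampling step described above, not the dataset's actual loading code. The 20 fps video rate and 16 kHz audio rate are inferred from the stated counts (200 frames and 160,000 audio frames over 10 seconds).
+
+ ```python
+ # Illustrative sketch of TALI's per-item sampling, assuming each 30-second segment
+ # is stored as arrays of video frames and audio frames.
+ import random
+
+ VIDEO_FPS = 20         # inferred: 200 frames / 10-second window
+ AUDIO_RATE = 16_000    # inferred: 160,000 audio frames / 10-second window
+ WINDOW_SECONDS = 10
+ SEGMENT_SECONDS = 30
+
+
+ def sample_clip(segments):
+     """Pick one of the ~10 pre-selected 30-second segments, then a random
+     10-second window inside it, returning aligned video and audio frames."""
+     segment = random.choice(segments)  # one of the top-10 CLIP-ranked segments
+
+     start_s = random.uniform(0, SEGMENT_SECONDS - WINDOW_SECONDS)
+     v_start = int(start_s * VIDEO_FPS)
+     a_start = int(start_s * AUDIO_RATE)
+
+     video = segment["video_frames"][v_start : v_start + WINDOW_SECONDS * VIDEO_FPS]
+     audio = segment["audio_frames"][a_start : a_start + WINDOW_SECONDS * AUDIO_RATE]
+     return video, audio  # 200 video frames, 160,000 audio frames
+ ```
+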
+ ## Dataset Use
+ TALI is designed for use in a wide range of multimodal research tasks, including but not limited to the following (a pairing sketch follows the list):
+
+ - Multimodal understanding and reasoning
+ - Self-supervised learning
+ - Multimodal alignment and translation
+ - Multimodal summarization
+ - Multimodal question answering
+
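+ As a final illustration, the sketch below shows one way temporally aligned pairs (video frames with the subtitles spoken over them) and semantically aligned pairs (the WIT image with its Wikipedia caption) could be drawn from a sample for contrastive self-supervised learning. The field names are the same illustrative assumptions used in the Modalities sketch above.
+
+ ```python
+ # Illustrative pairing of modalities from one TALI sample for contrastive learning.
+ # Field names are assumptions carried over from the earlier sketch, not the real schema.
+ def temporally_aligned_pair(sample):
+     # Video frames and the subtitle text covering the same 10-second window.
+     return sample["youtube_content_video"], sample["youtube_subtitle_text"]
+
+
+ def semantically_aligned_pair(sample):
+     # The Wikipedia image and its caption, related to the video only by topic.
+     return sample["wikipedia_caption_image"], sample["wikipedia_caption_text"]
+ ```
+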
+ ## Additional Information
+ Dataset Curators: Antreas Antoniou
+
+ Citation Information: TBA
+
+ Contributions: Thanks to all contributors, including data curators, annotators, and software developers.
 
  [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)