Antreas committed on
Commit 753b140
1 Parent(s): 8a5a0e8

Update README.md

Files changed (1)
  1. README.md +261 -6
README.md CHANGED
@@ -131,14 +131,269 @@ The TALI dataset consists of the following modalities:
  4. Video
      1. YouTube Content Video
 
- ### Dataset Variants
- The TALI dataset comes in three variants that differ in training-set size:
-
- - TALI-small: Contains about 1.3 million 30-second video clips, aligned with 120K WiT entries.
- - TALI-base: Contains about 6.5 million 30-second video clips, aligned with 120K WiT entries.
- - TALI-big: Contains about 13 million 30-second video clips, aligned with 120K WiT entries.
-
- The validation and test sets are identical across all three variants: each contains about 80K videos aligned to 8K Wikipedia entries (10 subclips per Wikipedia entry).
+ ## Usage
+ To get started with TALI, load the dataset through our helper functions, which download via `huggingface_hub` and then load with Hugging Face's `datasets` library. We don't let `datasets` handle the download directly because we found `huggingface_hub` downloads to be much faster and more reliable. For the full set of possible configurations, see [examples.py](examples.py). Here's a basic usage example.
+
+ First, install the `tali` package:
+
+ ### Installation
+
+ For the default install, use:
+
+ ```bash
+ pip install git+https://github.com/AntreasAntoniou/TALI
+ ```
+
+ For the dev install, use:
+
+ ```bash
+ pip install "git+https://github.com/AntreasAntoniou/TALI[dev]"
+ ```
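+
+ As a quick sanity check that the install worked (a minimal sketch, not part of the project's documented workflow; it only assumes the package exposes the `tali` module, as the imports below do):
+
+ ```python
+ # Verify that the tali package is importable after installation.
+ import tali
+
+ print(tali.__name__)
+ ```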
+
+ Then load the dataset as follows:
+
+ ### Examples
+ First, import the relevant helper functions:
+ ```python
+ import pathlib
+ from enum import Enum
+
+ import torch
+ from tqdm.auto import tqdm
+
+ from tali.data import (
+     SubModalityTypes,
+     TALIBaseTransform,
+     TALIBaseTransformConfig,
+     VideoFramesFormat,
+     default_transforms,
+     load_dataset_via_hub,
+ )
+ ```
+
+ #### TALI with default transforms (CLIP and Whisper) and no streaming
+
+ ```python
+ def tali_with_transforms_no_streaming(
+     dataset_storage_path: pathlib.Path | str,
+ ):
+     if isinstance(dataset_storage_path, str):
+         dataset_storage_path = pathlib.Path(dataset_storage_path)
+
+     dataset = load_dataset_via_hub(
+         dataset_storage_path, dataset_name="Antreas/TALI"
+     )["train"]
+
+     (
+         image_transforms,
+         text_transforms,
+         audio_transforms,
+         video_transforms,
+     ) = default_transforms()
+
+     preprocessing_transform = TALIBaseTransform(
+         cache_dir=dataset_storage_path / "cache",
+         text_tokenizer=text_transforms,
+         image_tokenizer=image_transforms,
+         audio_tokenizer=audio_transforms,
+         video_tokenizer=video_transforms,
+         config=TALIBaseTransformConfig(
+             root_filepath=dataset_storage_path,
+             modality_list=[
+                 SubModalityTypes.youtube_content_video,
+                 SubModalityTypes.youtube_content_audio,
+                 SubModalityTypes.youtube_random_video_frame,
+                 SubModalityTypes.youtube_subtitle_text,
+                 SubModalityTypes.youtube_description_text,
+                 SubModalityTypes.youtube_title_text,
+                 SubModalityTypes.wikipedia_caption_image,
+                 SubModalityTypes.wikipedia_caption_text,
+                 SubModalityTypes.wikipedia_main_body_text,
+                 SubModalityTypes.wikipedia_title_text,
+             ],
+             video_frames_format=VideoFramesFormat.PIL,
+         ),
+     )
+
+     for sample in tqdm(dataset):
+         sample = preprocessing_transform(sample)
+         print(list(sample.keys()))
+         for key, value in sample.items():
+             # Print a shape for tensors/arrays, a length for other
+             # sequences, and the type for everything else.
+             if hasattr(value, "shape"):
+                 print(key, value.shape)
+             elif hasattr(value, "__len__"):
+                 print(key, len(value))
+             else:
+                 print(key, type(value))
+         break
+ ```
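+
+ For training, you would typically wrap the dataset in a PyTorch `DataLoader`. Below is a minimal sketch, not part of the TALI API: the batch size, worker count, and `collate_fn` are illustrative assumptions, and using multiple workers requires the transform to be picklable.
+
+ ```python
+ from torch.utils.data import DataLoader
+
+ # dataset and preprocessing_transform are as constructed above.
+ def collate_fn(batch):
+     # Apply the TALI preprocessing per sample, then keep each modality
+     # as a list per batch; stack tensors downstream once shapes match.
+     batch = [preprocessing_transform(sample) for sample in batch]
+     return {key: [sample[key] for sample in batch] for key in batch[0]}
+
+ loader = DataLoader(dataset, batch_size=4, num_workers=2, collate_fn=collate_fn)
+
+ for batch in loader:
+     print(list(batch.keys()))
+     break
+ ```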
+
+ #### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as lists of PIL images, and audio as a sequence of floats
+
+ ```python
+ def tali_without_transforms_no_streaming(
+     dataset_storage_path: pathlib.Path | str,
+ ):
+     if isinstance(dataset_storage_path, str):
+         dataset_storage_path = pathlib.Path(dataset_storage_path)
+
+     dataset = load_dataset_via_hub(
+         dataset_storage_path, dataset_name="Antreas/TALI"
+     )["train"]
+
+     preprocessing_transform = TALIBaseTransform(
+         cache_dir=dataset_storage_path / "cache",
+         text_tokenizer=None,
+         image_tokenizer=None,
+         audio_tokenizer=None,
+         video_tokenizer=None,
+         config=TALIBaseTransformConfig(
+             root_filepath=dataset_storage_path,
+             modality_list=[
+                 SubModalityTypes.youtube_content_video,
+                 SubModalityTypes.youtube_content_audio,
+                 SubModalityTypes.youtube_random_video_frame,
+                 SubModalityTypes.youtube_subtitle_text,
+                 SubModalityTypes.youtube_description_text,
+                 SubModalityTypes.youtube_title_text,
+                 SubModalityTypes.wikipedia_caption_image,
+                 SubModalityTypes.wikipedia_caption_text,
+                 SubModalityTypes.wikipedia_main_body_text,
+                 SubModalityTypes.wikipedia_title_text,
+             ],
+             video_frames_format=VideoFramesFormat.PIL,
+         ),
+     )
+
+     for sample in tqdm(dataset):
+         sample = preprocessing_transform(sample)
+         print(list(sample.keys()))
+         for key, value in sample.items():
+             # Print a shape for tensors/arrays, a length for other
+             # sequences, and the type for everything else.
+             if hasattr(value, "shape"):
+                 print(key, value.shape)
+             elif hasattr(value, "__len__"):
+                 print(key, len(value))
+             else:
+                 print(key, type(value))
+         break
+ ```
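+
+ With no tokenizers set, converting the raw outputs to tensors is up to you. One possible approach is sketched below; it assumes `torchvision` is installed and is not part of the TALI API:
+
+ ```python
+ import torch
+ from torchvision.transforms.functional import pil_to_tensor
+
+ def frames_to_tensor(frames):
+     # Stack a list of PIL frames into a (T, C, H, W) uint8 tensor.
+     return torch.stack([pil_to_tensor(frame) for frame in frames])
+
+ def audio_to_tensor(audio):
+     # Convert a sequence of floats into a 1-D float32 tensor.
+     return torch.tensor(audio, dtype=torch.float32)
+ ```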
+
+ #### TALI with default transforms and streaming
+
+ ```python
+ def tali_with_transforms_streaming(
+     dataset_storage_path: pathlib.Path | str,
+ ):
+     if isinstance(dataset_storage_path, str):
+         dataset_storage_path = pathlib.Path(dataset_storage_path)
+
+     dataset = load_dataset_via_hub(
+         dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
+     )["train"]
+
+     (
+         image_transforms,
+         text_transforms,
+         audio_transforms,
+         video_transforms,
+     ) = default_transforms()
+
+     preprocessing_transform = TALIBaseTransform(
+         cache_dir=dataset_storage_path / "cache",
+         text_tokenizer=text_transforms,
+         image_tokenizer=image_transforms,
+         audio_tokenizer=audio_transforms,
+         video_tokenizer=video_transforms,
+         config=TALIBaseTransformConfig(
+             root_filepath=dataset_storage_path,
+             modality_list=[
+                 SubModalityTypes.youtube_content_video,
+                 SubModalityTypes.youtube_content_audio,
+                 SubModalityTypes.youtube_random_video_frame,
+                 SubModalityTypes.youtube_subtitle_text,
+                 SubModalityTypes.youtube_description_text,
+                 SubModalityTypes.youtube_title_text,
+                 SubModalityTypes.wikipedia_caption_image,
+                 SubModalityTypes.wikipedia_caption_text,
+                 SubModalityTypes.wikipedia_main_body_text,
+                 SubModalityTypes.wikipedia_title_text,
+             ],
+             video_frames_format=VideoFramesFormat.PIL,
+         ),
+     )
+
+     for sample in tqdm(dataset):
+         sample = preprocessing_transform(sample)
+         print(list(sample.keys()))
+         for key, value in sample.items():
+             # Print a shape for tensors/arrays, a length for other
+             # sequences, and the type for everything else.
+             if hasattr(value, "shape"):
+                 print(key, value.shape)
+             elif hasattr(value, "__len__"):
+                 print(key, len(value))
+             else:
+                 print(key, type(value))
+         break
+ ```
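+
+ In streaming mode the underlying `datasets` object is an `IterableDataset`, so you can add approximate shuffling with a buffer. A small sketch, assuming `load_dataset_via_hub` returns a standard `datasets` streaming split (the seed and buffer size are illustrative):
+
+ ```python
+ dataset = load_dataset_via_hub(
+     dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
+ )["train"].shuffle(seed=42, buffer_size=100)  # buffered shuffle over the stream
+ ```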
+
+ #### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as lists of PIL images, and audio as a sequence of floats
+
+ ```python
+ def tali_without_transforms_streaming(
+     dataset_storage_path: pathlib.Path | str,
+ ):
+     if isinstance(dataset_storage_path, str):
+         dataset_storage_path = pathlib.Path(dataset_storage_path)
+
+     dataset = load_dataset_via_hub(
+         dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
+     )["train"]
+
+     preprocessing_transform = TALIBaseTransform(
+         cache_dir=dataset_storage_path / "cache",
+         text_tokenizer=None,
+         image_tokenizer=None,
+         audio_tokenizer=None,
+         video_tokenizer=None,
+         config=TALIBaseTransformConfig(
+             root_filepath=dataset_storage_path,
+             modality_list=[
+                 SubModalityTypes.youtube_content_video,
+                 SubModalityTypes.youtube_content_audio,
+                 SubModalityTypes.youtube_random_video_frame,
+                 SubModalityTypes.youtube_subtitle_text,
+                 SubModalityTypes.youtube_description_text,
+                 SubModalityTypes.youtube_title_text,
+                 SubModalityTypes.wikipedia_caption_image,
+                 SubModalityTypes.wikipedia_caption_text,
+                 SubModalityTypes.wikipedia_main_body_text,
+                 SubModalityTypes.wikipedia_title_text,
+             ],
+             video_frames_format=VideoFramesFormat.PIL,
+         ),
+     )
+
+     for sample in tqdm(dataset):
+         sample = preprocessing_transform(sample)
+         print(list(sample.keys()))
+         for key, value in sample.items():
+             # Print a shape for tensors/arrays, a length for other
+             # sequences, and the type for everything else.
+             if hasattr(value, "shape"):
+                 print(key, value.shape)
+             elif hasattr(value, "__len__"):
+                 print(key, len(value))
+             else:
+                 print(key, type(value))
+         break
+ ```
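+
+ If you only need a subset of modalities, shrinking `modality_list` reduces download and decoding work. For example, a caption-only configuration (the particular submodalities chosen here are just an illustration):
+
+ ```python
+ config = TALIBaseTransformConfig(
+     root_filepath=dataset_storage_path,
+     modality_list=[
+         # Fetch only the Wikipedia caption image and its caption text.
+         SubModalityTypes.wikipedia_caption_image,
+         SubModalityTypes.wikipedia_caption_text,
+     ],
+     video_frames_format=VideoFramesFormat.PIL,
+ )
+ ```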
 
 
 ### Dataset Statistics