---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: chunk_idx
    dtype: int64
  - name: chunk_text
    dtype: string
  - name: video_metadata
    dtype: string
  - name: video_language
    dtype: string
  - name: chunk_media
    dtype: string
  splits:
  - name: shard_10339
    num_bytes: 1997009
    num_examples: 631
  - name: shard_10400
    num_bytes: 2638827
    num_examples: 722
  - name: shard_10424
    num_bytes: 1839226
    num_examples: 552
  - name: shard_10324
    num_bytes: 1700655
    num_examples: 515
  - name: shard_10428
    num_bytes: 2314345
    num_examples: 706
  - name: shard_10258
    num_bytes: 2899614
    num_examples: 881
  - name: shard_10396
    num_bytes: 3458836
    num_examples: 956
  - name: shard_10418
    num_bytes: 3034319
    num_examples: 947
  - name: shard_10206
    num_bytes: 3714862
    num_examples: 891
  - name: shard_10442
    num_bytes: 1543285
    num_examples: 419
  - name: shard_10411
    num_bytes: 2005599
    num_examples: 604
  - name: shard_1045
    num_bytes: 2042334
    num_examples: 648
  download_size: 9381853
  dataset_size: 29188911
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
  - split: shard_10339
    path: data/shard_10339-*
  - split: shard_10400
    path: data/shard_10400-*
  - split: shard_10424
    path: data/shard_10424-*
  - split: shard_10324
    path: data/shard_10324-*
  - split: shard_10428
    path: data/shard_10428-*
  - split: shard_10258
    path: data/shard_10258-*
  - split: shard_10396
    path: data/shard_10396-*
  - split: shard_10411
    path: data/shard_10411-*
  - split: shard_10418
    path: data/shard_10418-*
  - split: shard_10206
    path: data/shard_10206-*
  - split: shard_10442
    path: data/shard_10442-*
  - split: shard_1045
    path: data/shard_1045-*
---
![VALID Dataset](https://huggingface.co/datasets/ontocord/VALID/resolve/main/banner1-1.webp)
# VALID (Video-Audio Large Interleaved Dataset)
## Overview
The **VALID (Video-Audio Large Interleaved Dataset)** is a multimodal dataset comprising approximately 720,000 [Creative Commons licensed](https://creativecommons.org/share-your-work/cclicenses/) videos crawled from YouTube and processed into audio-video-text data records for machine learning research. **We are in the process of uploading, so please be patient.** The dataset provides an opportunity to train models that understand relationships between modalities such as video frames, audio clips, and multilingual text, making it suitable for applications like multimodal representation learning.
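As a minimal sketch of working with the per-shard splits declared in the YAML header, the helper below regroups a shard's rows into per-video text using the card's schema fields (`video_id`, `chunk_idx`, `chunk_text`); the commented `load_dataset` call is illustrative and requires network access:

```python
from collections import defaultdict


def regroup_chunks(rows):
    """Reassemble each video's text from its chunk rows.

    Expects dicts with the schema fields from this card:
    ``video_id``, ``chunk_idx`` (chunk order within the video),
    and ``chunk_text``.
    """
    by_video = defaultdict(list)
    for row in rows:
        by_video[row["video_id"]].append((row["chunk_idx"], row["chunk_text"]))
    # Sort each video's chunks by index and join their text.
    return {
        vid: " ".join(text for _, text in sorted(chunks))
        for vid, chunks in by_video.items()
    }


# Rows could come from a shard split, e.g. (requires network):
#   from datasets import load_dataset
#   rows = load_dataset("ontocord/VALID", split="shard_10339")
rows = [
    {"video_id": "abc", "chunk_idx": 1, "chunk_text": "world"},
    {"video_id": "abc", "chunk_idx": 0, "chunk_text": "hello"},
]
print(regroup_chunks(rows)["abc"])  # hello world
```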
## Features
- Audio-Video-Text Format:
A combination of:
```
English text
```
- The non-text multimodal portion begins each data item and can include multiple media elements. Some snippets have more than one audio clip or more than one video; others have only images/videos, or only audio paired with English text. Each video contributes multiple frames stored as images, each with a text caption, and standalone images can be interleaved as well.
Even though each audio/video snippet is no more than 10 seconds long, a data record may span more than 10 seconds (e.g., if a data item has two 10-second videos, the corresponding English text covers roughly 20 seconds of video).
This format is intended to teach a model to associate multiple modalities with each other and to understand multiple audio-video elements in an interleaved fashion.
- Data Components:
- **Images**: PNG format, phashed to ensure variability, with 0–10 images per audio snippet. Each image includes a caption created with Florence-2.
- **Audio**: OGG format, multilingual, ~10 seconds per snippet, with shorter sound or music snippets (1–3 seconds) to minimize copyright issues. Each audio snippet is transcribed with Whisper for non-English content, or with the original YouTube ASR for English.
- **Text**: Apart from the captions and transcripts, the “text” portion is a concatenation of YouTube’s original English transcript lines associated with the above media, around 1–40 words per data record.
- Dataset Size:
- **About 7,000,000 records.**
- **About 15,000,000 images, each captioned with Florence-2.**
- **About 30,000,000 audio snippets, about half of which are transcribed with Whisper-large and half with YouTube ASR.**
- **Divided into about 12K shards of about 600 records each; every shard consists of a parquet file and a corresponding .tar.gz file for the media.**
- **About 14TB in total.**
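Since each shard pairs a parquet file with a `.tar.gz` archive of media, here is a minimal sketch of listing a shard's PNG and OGG members; the member paths inside the archive are hypothetical, as the exact layout is described in the next section:

```python
import io
import tarfile


def list_media_members(tar_bytes):
    """Return the names of PNG image and OGG audio members in a
    shard's media archive (member naming here is an assumption)."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.name.endswith((".png", ".ogg"))]


# Build a tiny in-memory archive to demonstrate (hypothetical member names).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ["vid0/frame_0.png", "vid0/audio_0.ogg"]:
        data = b"placeholder"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

print(list_media_members(buf.getvalue()))  # ['vid0/frame_0.png', 'vid0/audio_0.ogg']
```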
## File Organization
- Each data entry follows the `