VALID (Video-Audio Large Interleaved Dataset)
Overview
The VALID (Video-Audio Large Interleaved Dataset) is a multimodal dataset comprising approximately 720,000 Creative Commons-licensed videos crawled from YouTube and processed into audio-video-text data records for machine learning research. The dataset provides a unique opportunity for training models to learn relationships among modalities such as video frames, audio clips, and multilingual text, making it suitable for applications like multimodal representation learning.
- Please note: the current version is a PREVIEW release. We are still in the process of uploading, so please be patient.
Features
- Audio-Video-Text Format: A combination of:
<video>
<caption><image> the caption </caption>
<caption><image> the caption </caption>
<caption><image> the caption </caption>
</video>
<transcript> <audio> multi-lingual transcript </transcript>
English text
Each data record begins with its non-text multimodal portion, which can include multiple media elements: some records contain more than one audio clip or more than one video, while others pair only images/videos, or only audio, with English text. Each video contains multiple frames stored as images, with a text caption for each frame, and standalone images may also be interleaved. Although each audio or video snippet is no more than 10 seconds long, a data record may span more than 10 seconds (e.g., a record with two 10-second videos corresponds to roughly 20 seconds of video and English text). This format is intended to teach a model to associate modalities with one another and to understand multiple audio-video elements in an interleaved fashion.
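To make the interleaved layout concrete, here is a minimal sketch of how one record could be represented and rendered into the text form shown above. The class and field names (`VideoSegment`, `frame_captions`, `transcript`) are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoSegment:
    # one Florence-2 caption per extracted frame image
    frame_captions: List[str] = field(default_factory=list)

@dataclass
class AudioSegment:
    # multilingual transcript (Whisper or YouTube ASR)
    transcript: str = ""

def render_record(videos, audios, english_text):
    """Render one data record in the interleaved text form described above."""
    parts = []
    for v in videos:
        frames = "\n".join(
            f"<caption><image> {c} </caption>" for c in v.frame_captions
        )
        parts.append(f"<video>\n{frames}\n</video>")
    for a in audios:
        parts.append(f"<transcript> <audio> {a.transcript} </transcript>")
    parts.append(english_text)
    return "\n".join(parts)
```

A record with two videos would simply emit two `<video>...</video>` blocks before the transcripts and English text, which is how a single record can cover more than 10 seconds of media.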
Data Components:
- Images: PNG format, deduplicated via perceptual hashing (phash) to ensure variability, with 0–10 images per audio snippet. Each image includes a caption created with Florence-2.
- Audio: OGG format, multilingual, ~10 seconds per snippet, with shorter sound or music snippets (1–3 seconds) to minimize copyright issues. Each audio snippet is transcribed either with Whisper for non-English or with the original YouTube ASR for English.
- Text: Aside from the captions and transcripts, the “text” portion is a concatenation of YouTube’s original English transcripts associated with the above media, roughly 1–40 words per data record.
Dataset Size:
- About 7,000,000 records.
- About 15,000,000 images, each captioned with Florence-2.
- About 30,000,000 audio snippets, about half transcribed with Whisper-large and half with YouTube ASR.
- Divided into about 12K shards of about 600 records each, with each shard stored as a parquet file and a corresponding .tar.gz file for the media.
- About 14TB in total.
File Organization
- Each data entry follows the <video><image(s)><audio><text> structure described above.
- Metadata includes alignment between modalities and the implicit ordering of audio/visual elements.
Multimodal Details
- Audio-Video Alignment: Snippets allow learning temporal relationships between audio and visual elements.
- Text Annotations: Text descriptions, including captions and YouTube ASR English translations, provide linguistic alignment.
Preprocessing
- Phashing for Images: Perceptual hashing is used to drop near-duplicate frames, ensuring that the images within each record are visually varied rather than static.
- Audio Snippet Lengths: Music and sound effects are clipped to 1–3 seconds to minimize copyright concerns under fair use principles.
Licenses
All videos in VALID are CC BY, as declared by their original uploaders on YouTube. We publish the audio snippets of these videos and selected image frames here under those rights and under the principles of fair use. However, we cannot guarantee that the original uploaders had the rights to share the content. This dataset has only been lightly filtered for safety, by removing data records with both a high proportion of children-related words AND a high proportion of sexual or violence-related words. Moreover, we disclaim all warranties, whether express or implied, and all liabilities with respect to infringement, fitness for a particular purpose, or otherwise.
Intended Uses
- Primary Use Case: Training models for multimodal understanding, such as contrastive multimodal learning (e.g., CLIP, CLAP).
- Not Recommended For: Generation tasks, as the dataset's quality may not meet generative model requirements.
Dataset Limitations
- Quality: Images and audio are sourced from YouTube and may vary in resolution and clarity.
- Rights Uncertainty: While videos are marked CC-BY by their third-party uploaders, the original rights may not be verifiable.
- Biases: The dataset's multilingual audio paired with English-only text may introduce linguistic biases, and the broad variety of video topics may introduce content biases.
Ethical Considerations
The dataset was built under the principles of fair use and CC-BY licensing. Its creation strives to align with the spirit of the EU AI Act, emphasizing transparency and safety in AI model development. Users must exercise caution and adhere to copyright and licensing rules when using VALID.
Policy for Managing Video Deletion Requests
Our goal is to establish a clear process for removing videos from our dataset when requested by users or required by external factors, while balancing the rights of content owners, compliance with CC-BY licenses, and the community's ability to utilize the dataset for training and research purposes.
1. Respecting Content Owners' Rights: All videos in the dataset are under the CC-BY license. As such, proper attribution will always be maintained as required by the license. If a content owner requests the removal of a video from the dataset, we will balance this request with the community's ability to train on the data, considering the original intent of the CC-BY license.
2. Deletion Request Process:
- Content owners or users can request the removal of a video by FIRST requesting that it be removed from YouTube: Here and Here.
- The owners or users should then verify that it has been removed from YouTube and report this fact to us in feedback Here.
- Requests must demonstrate that the video is no longer publicly available on YouTube.
- We will remove the videos confirmed to be deleted in the next release of this dataset.
3. Verification and Balancing Interests: All deletion requests will be verified by checking YouTube to ensure the video is no longer available. We may also remove a video at our sole discretion. Decisions on video removal will take into account:
- The rights and wishes of content owners, including their ability to remove their videos from public availability.
- The community's need for robust datasets for training and research.
- The spirit of the CC-BY license, which permits redistribution and use with proper attribution.
4. Responsibilities for Derivative Datasets: Users creating derivative datasets must ensure compliance by deleting the videos listed in delete_these_videos.json.
5. Proactive Deletion: Videos may be removed proactively under the following circumstances:
- Requests from the hosting provider (e.g., Hugging Face).
- Legal requirements or enforcement actions.
- Internal decisions.
6. Community Considerations:
- The community is encouraged to respect the balance between individual content owners’ wishes and the public benefit derived from open-access datasets.
- Efforts will be made to keep the dataset robust while honoring legitimate requests for content removal.
7. Updates: Users are encouraged to check delete_these_videos.json from time to time to ensure their copy of the dataset is up to date.
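For derivative-dataset maintainers, honoring the deletion list could look like the sketch below. It assumes delete_these_videos.json is a JSON list of YouTube video IDs and that each record carries a `video_id` field; both are illustrative assumptions, so adjust to the file's actual schema:

```python
import json

def filter_deleted(records, deletion_list_path):
    """Drop records whose video ID appears in the deletion list.

    `records` is an iterable of dicts; records without a `video_id`
    field are kept unchanged.
    """
    with open(deletion_list_path) as f:
        deleted = set(json.load(f))
    return [r for r in records if r.get("video_id") not in deleted]
```

Running this against each new release of delete_these_videos.json keeps a derivative dataset compliant with the policy above.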
Related Materials:
- If you are looking for CC-BY YouTube transcripts of videos, check out PleIAs’ YouTube-Commons.
- Also, Hugging Face has created an excellent CC-BY YouTube video dataset here: FineVideo.
- LAION is also building a dataset Here which includes YouTube audio snippets paired with Gemini-generated captions.
Acknowledgement and Thanks
This dataset was built by Ontocord.AI in cooperation with Grass and LAION.AI. It was created as part of our SafeLLM/Aurora-M2 project in order to build safe multimodal models that comply with the EU AI Act. This dataset was built on a subset of the Grass Video Repository, a massive video dataset of Creative Commons videos. We deeply thank Hugging Face and the open source community for their support.
About the Contributors:
- Grass is committed to making the public web accessible again. Through its network of millions of globally distributed nodes, it is capable of collecting petabyte-scale datasets for a variety of use cases, including training AI models. The network is run exclusively by users who have downloaded an application to their devices, allowing them to contribute their unused internet bandwidth to the network. On X: @getgrass_io
- LAION is a non-profit organization that provides datasets, tools, and models to liberate machine learning research. By doing so, we encourage open public education and a more environment-friendly use of resources by reusing existing datasets and models.
- Ontocord is dedicated to making legally compliant AI. Our mission is to make our AGI future lawful and accessible to everyone.
- Alignment Lab AI: Our mission is to build a future leveraging AI as a force for good and as a tool that enhances human lives. We believe everyone deserves to harness the power of personal intelligence.
- And many others ...
Citation
@misc{Huu2024VALID,
  title = {VALID (Video-Audio Large Interleaved Dataset)},
  author = {Huu Nguyen and Ken Tsui and Andrej Radonjic and Christoph Schuhmann},
  year = {2024},
  url = {https://huggingface.co/datasets/ontocord/VALID},
}