---
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: asr_raw
    list:
    - name: start
      dtype: float64
    - name: end
      dtype: float64
    - name: text
      dtype: string
    - name: words
      list:
      - name: confidence
        dtype: float64
      - name: start
        dtype: float64
      - name: end
        dtype: float64
      - name: text
        dtype: string
  - name: asr_grouped
    list:
      list: string
  - name: ocr
    list:
      list: string
  - name: blip2_annotations
    struct:
    - name: actions
      list: string
    - name: captions
      list: string
    - name: objects
      list: string
  - name: replay_graphs
    struct:
    - name: original_marker_duration
      dtype: float64
    - name: processed_marker_duration
      dtype: float64
    - name: multiplier
      dtype: float64
    - name: markers
      list:
      - name: start
        dtype: float64
      - name: end
        dtype: float64
      - name: replay_score
        dtype: float64
  - name: likes
    dtype: float64
  - name: views
    dtype: float64
  - name: metadata
    struct:
    - name: title
      dtype: string
    - name: description
      dtype: string
    - name: length
      dtype: float64
    - name: date
      dtype: string
  - name: channel_data
    struct:
    - name: channel_id
      dtype: string
    - name: company_name
      dtype: string
    - name: subscribers
      dtype: float64
  splits:
  - name: train
    num_bytes: 396758465
    num_examples: 22569
  - name: test
    num_bytes: 35343326
    num_examples: 2026
  download_size: 135245985
  dataset_size: 432101791
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: mit
pretty_name: Content Behavior Corpus
language:
- en
tags:
- youtube
- content
- behavior
- likes
- views
- transcript
- captions
- OCR
- replay
---
# Dataset Card for Content Behavior Corpus

The Content Behavior Corpus (CBC) is a dataset of content paired with the corresponding receiver behavior.

## Dataset Details
The progress of Large Language Models (LLMs) has largely been driven by the availability of large-scale unlabeled text data for unsupervised learning. This work focuses on modeling both content and the corresponding receiver behavior in the same space. Although existing datasets have trillions of content tokens (text, images, audio, and videos), they lack information on receiver effects. To address this, the paper utilizes YouTube, a large publicly available source of content-behavior data, which includes:
- Communicator Data: Channel name and number of subscribers.
- Message: YouTube video IDs, extracted speech, scene-wise captions, on-screen text, video description, video length, and upload date.
- Receiver Effect: Video likes, views, and replay graphs.
This covers all five factors of communication, with the channel being fixed (YouTube) and receivers being average channel subscribers and viewers.
## Dataset Structure

### Fields
- `video_id` (string): Unique identifier for each video.
- `asr_raw` (list of objects): Raw Automatic Speech Recognition (ASR) data.
  - `start` (float64): Start time of the ASR segment.
  - `end` (float64): End time of the ASR segment.
  - `text` (string): Transcription of the ASR segment.
  - `words` (list of objects): Word-level ASR details.
    - `confidence` (float64): Confidence score of the ASR word.
    - `start` (float64): Start time of the word.
    - `end` (float64): End time of the word.
    - `text` (string): Transcription of the word.
- `asr_grouped` (list of lists of strings): ASR transcriptions grouped by replay segment.
- `ocr` (list of lists of strings): Optical Character Recognition (OCR) data for each replay segment.
- `blip2_annotations` (object): BLIP-2 annotations for the video's replay segments.
  - `actions` (list of strings): Actions identified in each replay segment.
  - `captions` (list of strings): Image captions generated for each replay segment.
  - `objects` (list of strings): Objects identified in each replay segment.
- `replay_graphs` (object): Data related to video replay behavior.
  - `original_marker_duration` (float64): Original duration of each replay segment.
  - `multiplier` (float64): Number of original replay segments combined to create each processed replay segment.
  - `processed_marker_duration` (float64): Processed duration of each replay segment.
  - `markers` (list of objects): Replay segments.
    - `start` (float64): Start time of the replay segment.
    - `end` (float64): End time of the replay segment.
    - `replay_score` (float64): Score in [0, 1] indicating replay behavior.
- `likes` (float64): Number of likes the video received.
- `views` (float64): Number of views the video received.
- `metadata` (object): Metadata associated with the video.
  - `title` (string): Title of the video.
  - `description` (string): Description of the video.
  - `length` (float64): Length of the video in seconds.
  - `date` (string): Publication date of the video.
- `channel_data` (object): Information about the YouTube channel.
  - `channel_id` (string): Unique identifier for the channel.
  - `company_name` (string): Name of the company or individual owning the channel.
  - `subscribers` (float64): Number of subscribers to the channel.
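As a quick orientation to the nested schema, here is a minimal sketch in Python. The record below is a hand-built toy shaped like the fields above (all values are invented, not real data), and `top_replay_transcript` is an illustrative helper of our own that assumes `asr_grouped[i]` aligns with `replay_graphs["markers"][i]`, as the per-segment grouping described in this card implies:

```python
# Toy record shaped like one CBC example (illustrative values only).
record = {
    "video_id": "demo",
    "asr_grouped": [["welcome", "back"], ["hit", "subscribe"]],
    "replay_graphs": {
        "original_marker_duration": 0.5,
        "processed_marker_duration": 1.0,
        "multiplier": 2.0,
        "markers": [
            {"start": 0.0, "end": 1.0, "replay_score": 0.3},
            {"start": 1.0, "end": 2.0, "replay_score": 0.9},
        ],
    },
    "likes": 120.0,
    "views": 5000.0,
}

def top_replay_transcript(rec):
    """Return the grouped ASR text for the most-replayed segment.

    Assumes the i-th transcript group in asr_grouped belongs to the
    i-th replay segment in replay_graphs["markers"].
    """
    markers = rec["replay_graphs"]["markers"]
    best = max(range(len(markers)), key=lambda i: markers[i]["replay_score"])
    return " ".join(rec["asr_grouped"][best])

print(top_replay_transcript(record))  # hit subscribe
```

The same access pattern applies to records loaded from the Parquet files listed in the configs above.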
## Data Collection and Processing
- videos: Videos were downloaded using pytube.
- asr_raw: Extracted using openai/whisper-medium and the whisper-timestamped library.
- asr_grouped: Words extracted from asr_raw are grouped by the replay segment they fall into. A word may appear in multiple groups if its time span overlaps more than one replay segment.
- ocr: OCR extracted using PaddleOCR.
- blip2_annotations: Annotations extracted using blip2-flan-t5-xxl.
- replay_graphs: Extracted by directly parsing a video page's HTML content. Original replay segments are combined until their duration is at least 1 second, giving the processed replay segments, so that `processed_marker_duration = multiplier * original_marker_duration`.
- likes: Extracted by directly parsing a video page's HTML content.
- views: Extracted by directly parsing a video page's HTML content.
- metadata: Extracted by directly parsing a video page's HTML content.
- channel_data: Extracted by directly parsing a video page's HTML content.
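The grouping rule for asr_grouped above can be sketched as follows. `group_words_by_segment` is an illustrative reimplementation of the described behavior (not the dataset's actual processing code), and the half-open overlap test is our assumption about how "intersects" is decided:

```python
def group_words_by_segment(words, markers):
    """Assign each ASR word to every replay segment its time span overlaps.

    Mirrors the asr_grouped rule described above: a word whose duration
    intersects multiple replay segments appears in each of their groups.
    """
    groups = [[] for _ in markers]
    for w in words:
        for i, m in enumerate(markers):
            # Two intervals overlap iff each starts before the other ends.
            if w["start"] < m["end"] and w["end"] > m["start"]:
                groups[i].append(w["text"])
    return groups

words = [
    {"start": 0.0, "end": 0.4, "text": "welcome"},
    {"start": 0.8, "end": 1.2, "text": "back"},  # straddles both markers
]
markers = [{"start": 0.0, "end": 1.0}, {"start": 1.0, "end": 2.0}]
print(group_words_by_segment(words, markers))
# [['welcome', 'back'], ['back']]

# The replay-segment relation stated above, on toy numbers: three original
# 0.4 s segments combine into one processed segment of at least 1 second.
original, multiplier = 0.4, 3.0
processed = multiplier * original
assert processed >= 1.0  # processed_marker_duration = multiplier * original
```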
## Citation

BibTeX:

```bibtex
@inproceedings{khandelwal2024large,
  title={Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior},
  author={Ashmit Khandelwal and Aditya Agrawal and Aanisha Bhattacharyya and Yaman Kumar and Somesh Singh and Uttaran Bhattacharya and Ishita Dasgupta and Stefano Petrangeli and Rajiv Ratn Shah and Changyou Chen and Balaji Krishnamurthy},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://arxiv.org/abs/2309.00359}
}
```
APA:
Khandelwal, A., Agrawal, A., Bhattacharyya, A., Kumar, Y., Singh, S., Bhattacharya, U., Dasgupta, I., Petrangeli, S., Shah, R. R., Chen, C., & Krishnamurthy, B. (2024). Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior. The Twelfth International Conference on Learning Representations. https://arxiv.org/abs/2309.00359
## Contact
Contact behavior-in-the-wild@googlegroups.com for questions and suggestions.