---
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: asr_raw
    list:
    - name: start
      dtype: float64
    - name: end
      dtype: float64
    - name: text
      dtype: string
    - name: words
      list:
      - name: confidence
        dtype: float64
      - name: start
        dtype: float64
      - name: end
        dtype: float64
      - name: text
        dtype: string
  - name: asr_grouped
    list:
      list: string
  - name: ocr
    list:
      list: string
  - name: blip2_annotations
    struct:
    - name: actions
      list: string
    - name: captions
      list: string
    - name: objects
      list: string
  - name: replay_graphs
    struct:
    - name: original_marker_duration
      dtype: float64
    - name: processed_marker_duration
      dtype: float64
    - name: multiplier
      dtype: float64
    - name: markers
      list:
      - name: start
        dtype: float64
      - name: end
        dtype: float64
      - name: replay_score
        dtype: float64
  - name: likes
    dtype: float64
  - name: views
    dtype: float64
  - name: metadata
    struct:
    - name: title
      dtype: string
    - name: description
      dtype: string
    - name: length
      dtype: float64
    - name: date
      dtype: string
  - name: channel_data
    struct:
    - name: channel_id
      dtype: string
    - name: company_name
      dtype: string
    - name: subscribers
      dtype: float64
  splits:
  - name: train
    num_bytes: 396758465
    num_examples: 22569
  - name: test
    num_bytes: 35343326
    num_examples: 2026
  download_size: 135245985
  dataset_size: 432101791
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: mit
pretty_name: Content Behavior Corpus
language:
- en
tags:
- youtube
- content
- behavior
- likes
- views
- transcript
- captions
- OCR
- replay
---
# Dataset Card for Content Behavior Corpus

The Content Behavior Corpus (CBC) is a dataset of content paired with the corresponding receiver behavior.
## Dataset Details

<img src="./content-behavior-five-factors.png" alt="content-behavior-five-factors" width="1000"/>

The progress of Large Language Models (LLMs) has largely been driven by the availability of large-scale unlabeled text data for unsupervised learning. This work focuses on modeling both content and the corresponding receiver behavior in the same space. Although existing datasets have trillions of content tokens (text, images, audio, and videos), they lack information on receiver effects. To address this, the paper utilizes YouTube, a large publicly available source of content-behavior data, which includes:
- **Communicator Data:** Channel name and number of subscribers.
- **Message:** YouTube video IDs, extracted speech, scene-wise captions, on-screen text, video description, video length, and upload date.
- **Receiver Effect:** Video likes, views, and replay graphs.

This covers all five factors of communication, with the channel being fixed (YouTube) and the receivers being average channel subscribers and viewers.
- **Website:** https://behavior-in-the-wild.github.io/LCBM
- **Paper:** https://arxiv.org/abs/2309.00359

<!-- - **License:** [More Information Needed] -->

<!-- ## Uses -->

<!-- Address questions around how the dataset is intended to be used. -->

<!-- ### Direct Use -->

<!-- This section describes suitable use cases for the dataset. -->

<!-- ### Out-of-Scope Use -->

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
## Dataset Structure

### Fields

- **video_id** (`string`): Unique identifier for each video.

- **asr_raw** (`list of objects`): Raw Automatic Speech Recognition (ASR) data.
  - **start** (`float64`): Start time of the ASR segment.
  - **end** (`float64`): End time of the ASR segment.
  - **text** (`string`): Transcription of the ASR segment.
  - **words** (`list of objects`): Word-level ASR details.
    - **confidence** (`float64`): Confidence score of the ASR word.
    - **start** (`float64`): Start time of the word.
    - **end** (`float64`): End time of the word.
    - **text** (`string`): Transcription of the word.

- **asr_grouped** (`list of lists of strings`): ASR transcriptions grouped by replay segments.

- **ocr** (`list of lists of strings`): Optical Character Recognition (OCR) data for each replay segment.

- **blip2_annotations** (`object`): BLIP-2 annotations for the video's replay segments.
  - **actions** (`list of strings`): List of actions identified in each replay segment.
  - **captions** (`list of strings`): List of image captions generated for each replay segment.
  - **objects** (`list of strings`): List of objects identified in each replay segment.

- **replay_graphs** (`object`): Data related to video replay behavior.
  - **original_marker_duration** (`float64`): Original duration for replay segments.
  - **multiplier** (`float64`): Number of original replay segments combined to create processed replay segments.
  - **processed_marker_duration** (`float64`): Processed duration for replay segments.
  - **markers** (`list of objects`): Replay segments.
    - **start** (`float64`): Start time of the replay segment.
    - **end** (`float64`): End time of the replay segment.
    - **replay_score** (`float64`): Score in `[0, 1]` indicating replay behavior for the segment.

- **likes** (`float64`): Number of likes the video received.

- **views** (`float64`): Number of views the video received.

- **metadata** (`object`): Metadata associated with the video.
  - **title** (`string`): Title of the video.
  - **description** (`string`): Description of the video.
  - **length** (`float64`): Length of the video in seconds.
  - **date** (`string`): Publication date of the video.

- **channel_data** (`object`): Information about the YouTube channel.
  - **channel_id** (`string`): Unique identifier for the channel.
  - **company_name** (`string`): Name of the company or individual owning the channel.
  - **subscribers** (`float64`): Number of subscribers to the channel.
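
To make the schema above concrete, the sketch below loads the corpus with the Hugging Face `datasets` library and reads a few fields from one training example. The repository id is a placeholder, and the exact Python types returned for nested fields can vary with the `datasets` version, so treat this as a minimal sketch rather than a canonical loader.

```python
from datasets import load_dataset

# Placeholder: substitute the Hugging Face repo id under which this dataset is published.
REPO_ID = "<namespace>/<dataset-name>"

ds = load_dataset(REPO_ID)   # splits per this card: train (22,569 rows), test (2,026 rows)
example = ds["train"][0]

# Top-level scalars.
print(example["video_id"], example["likes"], example["views"])

# Word-level ASR for the first transcript segment.
segment = example["asr_raw"][0]
print(segment["start"], segment["end"], segment["text"])

# Replay behavior: one score in [0, 1] per processed replay segment.
scores = [marker["replay_score"] for marker in example["replay_graphs"]["markers"]]

# OCR and BLIP-2 annotations are aligned per replay segment.
first_segment_ocr = example["ocr"][0]
first_segment_caption = example["blip2_annotations"]["captions"][0]
print(len(scores), first_segment_ocr, first_segment_caption)
```
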
### Data Collection and Processing
<!-- ### Data Collection and Processing --> |
|
|
|
- **videos**: Videos were downloaded using [pytube](https://github.com/pytube/pytube).
- **asr_raw**: Extracted using [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) and the [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) library.
- **asr_grouped**: Words extracted from **asr_raw** are grouped by the replay segment they fall into; a word may be assigned to multiple replay segments if its duration intersects more than one of them (see the sketch after this list).
- **ocr**: OCR extracted using [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR).
- **blip2_annotations**: Annotations extracted using [blip2-flan-t5-xxl](https://huggingface.co/Salesforce/blip2-flan-t5-xxl).
- **replay_graphs**: Extracted by directly parsing a video page's HTML content. Original replay segments are combined until their duration is at least 1 second, giving the processed replay segments, with `processed_marker_duration = multiplier * original_marker_duration` (see the sketch after this list).
- **likes**: Extracted by directly parsing a video page's HTML content.
- **views**: Extracted by directly parsing a video page's HTML content.
- **metadata**: Extracted by directly parsing a video page's HTML content.
- **channel_data**: Extracted by directly parsing a video page's HTML content.
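
As an illustration of the two derived fields described above (not the authors' actual pipeline), the sketch below shows one way ASR words could be assigned to the replay segments they overlap, and how the stated duration relation between original and processed markers can be checked. Both helper names are hypothetical.

```python
def group_words_by_segment(words, markers):
    """Assign each ASR word to every replay segment whose time span it intersects.

    `words` are dicts with "start", "end", "text" (as in asr_raw[i]["words"]);
    `markers` are dicts with "start" and "end" (as in replay_graphs["markers"]).
    """
    grouped = [[] for _ in markers]
    for word in words:
        for i, marker in enumerate(markers):
            # Overlapping [start, end] intervals => the word falls into this segment.
            if word["start"] < marker["end"] and word["end"] > marker["start"]:
                grouped[i].append(word["text"])
    return grouped


def check_marker_durations(replay_graphs, tol=1e-6):
    """Check processed_marker_duration == multiplier * original_marker_duration."""
    expected = replay_graphs["multiplier"] * replay_graphs["original_marker_duration"]
    return abs(expected - replay_graphs["processed_marker_duration"]) <= tol
```

Run over all words from every `asr_raw` segment of a video, the first helper approximates the shape of that video's `asr_grouped` field (one list of words per replay segment).
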
<!-- #### Who are the source data producers? --> |
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
|
|
<!-- #### Personal and Sensitive Information --> |
|
|
|
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> |
|
|
|
<!-- ## Bias, Risks, and Limitations --> |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
<!-- ### Recommendations --> |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
## Citation

**BibTeX:**

```
@inproceedings{khandelwal2024large,
  title={Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior},
  author={Ashmit Khandelwal and Aditya Agrawal and Aanisha Bhattacharyya and Yaman Kumar and Somesh Singh and Uttaran Bhattacharya and Ishita Dasgupta and Stefano Petrangeli and Rajiv Ratn Shah and Changyou Chen and Balaji Krishnamurthy},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://arxiv.org/abs/2309.00359}
}
```
**APA:**

Khandelwal, A., Agrawal, A., Bhattacharyya, A., Kumar, Y., Singh, S., Bhattacharya, U., Dasgupta, I., Petrangeli, S., Shah, R. R., Chen, C., & Krishnamurthy, B. (2024). Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior. The Twelfth International Conference on Learning Representations. https://arxiv.org/abs/2309.00359

## Contact

Contact behavior-in-the-wild@googlegroups.com for questions and suggestions.