---
license: apache-2.0
---

# Dataset Card for the RealTalk Video Dataset

Thank you for your interest in the RealTalk dataset! RealTalk consists of 692 in-the-wild videos of dyadic (i.e., two-person) conversations, curated with the goal of advancing multimodal communication research in computer vision. If you find our dataset useful, please cite:

```bibtex
@inproceedings{geng2023affective,
  title={Affective Faces for Goal-Driven Dyadic Communication},
  author={Geng, Scott and Teotia, Revant and Tendulkar, Purva and Menon, Sachit and Vondrick, Carl},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```

## Dataset Details

The dataset contains 692 full-length videos scraped from The Skin Deep, a public YouTube channel that captures long-form, unscripted conversations between diverse individuals about different facets of the human experience. We also include associated annotations; we detail all files present in the dataset below.

### File Overview

General notes:

- All frame numbers are indexed from 0.
- We denote the person on the left side of the video as 'p0' and the person on the right as 'p1'.
- `<video_id>` denotes the unique 11-character video ID assigned by YouTube to each video.

#### [0] videos/videos_{xx}.tar

Contains the full-length raw videos from which the dataset is built, sharded into tar files of 50 videos each. Each video is stored at 25 fps in AVI format under the filename `<video_id>.avi` (e.g., `5hxY5Svr2aM.avi`).
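For illustration, a minimal sketch of extracting a shard and decoding frames; the shard name `videos_00.tar` and the extraction paths are assumptions, and `opencv-python` is just one common way to read AVI files:

```python
import tarfile
import cv2

# Extract one shard (hypothetical name following the videos_{xx} pattern).
with tarfile.open("videos/videos_00.tar") as tar:
    tar.extractall("videos_extracted")

# Decode frames from one video; which videos live in which shard may vary.
cap = cv2.VideoCapture("videos_extracted/5hxY5Svr2aM.avi")
print(cap.get(cv2.CAP_PROP_FPS))  # 25.0: all videos are stored at 25 fps
ok, frame0 = cap.read()           # frame 0; all annotations are 0-indexed
cap.release()
```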

#### [1] audio.tar.gz

Contains audio files extracted from the videos, stored in mp3 format.

#### [2] asr.tar.gz

Contains Whisper ASR outputs for each video. Subtitles for video `<video_id>.avi` are stored in the file `<video_id>.json` as the dictionary

```
{
    'text': <full ASR transcript of the video>,
    'segments': <time-stamped ASR segments>,
    'language': <detected language of the video>
}
```
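As a sketch, one way to load an ASR file after extracting `asr.tar.gz` into a local `asr/` directory (the path is an assumption; the per-segment keys follow Whisper's standard output format):

```python
import json

with open("asr/5hxY5Svr2aM.json") as f:
    asr = json.load(f)

print(asr["language"])           # detected language, e.g. 'en'
print(asr["text"][:200])         # start of the full transcript
for seg in asr["segments"][:3]:  # standard Whisper segment keys
    print(seg["start"], seg["end"], seg["text"])
```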

#### [3.0] benchmark/train_test_split.json

This JSON file describes the clips used as the benchmark train/test split in our paper. The file stores the dictionary

```
{
    'train': [list of train samples],
    'test': [list of test samples]
}
```

where each entry in each list is a dictionary of the form

```
{
    'id': [video_id, start_frame (inclusive), end_frame (exclusive)],
    'speaker': 'p0' | 'p1',
    'listener': 'p0' | 'p1',
    'asr': str
}
```

The ASR of the clip is computed with Whisper.
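A sketch of loading the split and converting a clip's frame range to timestamps, using the fixed 25 fps rate noted above:

```python
import json

with open("benchmark/train_test_split.json") as f:
    split = json.load(f)

clip = split["train"][0]
video_id, start_frame, end_frame = clip["id"]          # end_frame is exclusive
start_s, end_s = start_frame / 25.0, end_frame / 25.0  # videos are 25 fps
print(video_id, clip["speaker"], clip["listener"], clip["asr"][:80])
```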

#### [3.1] benchmark/embeddings.pkl

Pickle file containing visual embeddings of the listener frames in the train/test clips, as computed by several pretrained face models implemented in the `deepface` library. The file stores a dictionary with format

```
{
    f'{video_id}.{start_frame}.{end_frame}': {
        <model_name_1>: <array of listener embeddings>,
        <model_name_2>: <array of listener embeddings>,
        ...
    },
    ...
}
```
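A sketch of reading the embeddings for one clip; the example key values below are hypothetical, and in practice come from `benchmark/train_test_split.json`:

```python
import pickle

with open("benchmark/embeddings.pkl", "rb") as f:
    embeddings = pickle.load(f)

# Hypothetical example values; take (video_id, start_frame, end_frame)
# from the split file in practice.
video_id, start_frame, end_frame = "5hxY5Svr2aM", 0, 250
clip_models = embeddings[f"{video_id}.{start_frame}.{end_frame}"]
print(list(clip_models.keys()))                    # which face models were used
listener_feats = next(iter(clip_models.values()))  # per-frame embedding array
```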

#### [4] annotations.tar.gz

Contains face bounding box and active speaker annotations for every frame of each video. Annotations for video `<video_id>.avi` are contained in the file `<video_id>.json`, which stores a nested dictionary structure:

```
{
    str(frame_number): {
        'people': {
            'p0': {'score': float, 'bbox': array},
            'p1': {'score': float, 'bbox': array}
        },
        'current_speaker': 'p0' | 'p1' | None
    },
    ...
}
```

The 'score' field stores the active speaker score predicted by TalkNet-ASD; larger positive values indicate a higher probability that the person is speaking. Note also that the 'people' subdictionary may contain 'p0', 'p1', both, or neither, depending on who is visible in the frame.
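For example, a sketch of looking up one frame's annotation while guarding against people who are not visible; the `annotations/` path assumes the tarball was extracted locally:

```python
import json

with open("annotations/5hxY5Svr2aM.json") as f:
    ann = json.load(f)

frame = ann["100"]                  # keys are stringified 0-indexed frame numbers
speaker = frame["current_speaker"]  # 'p0', 'p1', or None (JSON null)
p0 = frame["people"].get("p0")      # may be absent if p0 is not in frame
if p0 is not None:
    score, bbox = p0["score"], p0["bbox"]
```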

#### [5] emoca.tar.gz

Contains EMOCA embeddings for almost all frames in all videos. The embeddings for `<video_id>.avi` are contained in the pickle file `<video_id>.pkl`, which has the dictionary structure

```
{
    int(frame_number): {
        'p0': <embedding dict from EMOCA>,
        'p1': <embedding dict from EMOCA>
    },
    ...
}
```

Note that some frames may be missing embeddings due to occlusions or failures in face detection.
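A sketch of iterating over the embeddings while skipping those missing frames; the `emoca/` path assumes the tarball was extracted locally:

```python
import pickle

with open("emoca/5hxY5Svr2aM.pkl", "rb") as f:
    emoca = pickle.load(f)

for frame_number in sorted(emoca):            # keys are int frame numbers
    p0_codes = emoca[frame_number].get("p0")  # missing if detection failed
    if p0_codes is not None:
        pass  # use the EMOCA embedding dict for p0 here
```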

## Dataset Card Authors

Scott Geng

## Dataset Card Contact

sgeng@cs.washington.edu