Dataset Card for the RealTalk Video Dataset

Thank you for your interest in the RealTalk dataset! RealTalk consists of 692 in-the-wild videos of dyadic (i.e., two-person) conversations, curated to advance multimodal communication research in computer vision. If you find our dataset useful, please cite:

@inproceedings{geng2023affective,
  title={Affective Faces for Goal-Driven Dyadic Communication},
  author={Geng, Scott and Teotia, Revant and Tendulkar, Purva and Menon, Sachit and Vondrick, Carl},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Dataset Details

The dataset contains 692 full-length videos scraped from The Skin Deep, a public YouTube channel that captures long-form, unscripted conversations between diverse individuals about different facets of the human experience. We also include associated annotations; all files in the dataset are detailed below.

File Overview

General notes:

  • All frame numbers are indexed from 0.
  • We denote 'p0' as the person on the left side of the video, and 'p1' as the person on the right side.
  • <video_id> denotes the unique 11-character video ID assigned by YouTube to a specific video.

[0] videos/videos_{xx}.tar

Contains the full-length raw videos from which the dataset is created, packaged in shards of 50 videos per tar file. Each video is stored at 25 fps in AVI format under the filename <video_id>.avi (e.g., 5hxY5Svr2aM.avi).
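
As a quick sanity check after downloading, a shard can be extracted with tarfile and a video opened with OpenCV; the shard name videos_00.tar and the extraction layout below are assumptions, so adjust paths to your local copy.

import tarfile
import cv2

# Extract one shard of raw videos.
with tarfile.open("videos_00.tar") as tar:
    tar.extractall("videos/")

# Open a video and report its frame rate and length.
cap = cv2.VideoCapture("videos/5hxY5Svr2aM.avi")
fps = cap.get(cv2.CAP_PROP_FPS)                   # expected: 25
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"{fps:.0f} fps, {n_frames} frames")
cap.release()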

[1] audio.tar.gz

Contains audio files extracted from the videos, stored in mp3 format.
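
For quick inspection, a track can be loaded with librosa (the audio/ path and an mp3-capable backend are assumptions):

import librosa

# Load one extracted audio file at its native sample rate.
waveform, sample_rate = librosa.load("audio/5hxY5Svr2aM.mp3", sr=None)
print(waveform.shape, sample_rate)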

[2] asr.tar.gz

Contains Whisper ASR outputs for each video. Subtitles for video <video_id>.avi are stored in the file <video_id>.json as the dictionary

{
    'text': <full ASR transcript of the video>,
    'segments': <time-stamped ASR segments>,
    'language': <detected language of the video>
}
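
A minimal sketch of reading one ASR file, assuming asr.tar.gz is extracted to a local asr/ directory:

import json

# Load the Whisper output for one video.
with open("asr/5hxY5Svr2aM.json") as f:
    asr = json.load(f)

print(asr["language"])
print(asr["text"][:200])             # start of the full transcript
for segment in asr["segments"][:3]:  # time-stamped ASR segments
    print(segment)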

[3.0] benchmark/train_test_split.json

This JSON file describes the clips used as the benchmark train/test split in our paper. The file stores the dictionary

{
    'train': [list of train samples],
    'test': [list of test samples]
}

where each entry in the list is another dictionary with format

{
    'id': [video_id, start_frame (inclusive), end_frame (exclusive)],
    'speaker': 'p0'|'p1',
    'listener': 'p0'|'p1',
    'asr': str
}

The ASR of the clip is computed with Whisper.
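
A minimal sketch of loading the split and unpacking one sample (the benchmark/ path mirrors the file layout above):

import json

# Load the benchmark train/test split.
with open("benchmark/train_test_split.json") as f:
    split = json.load(f)

print(len(split["train"]), "train clips,", len(split["test"]), "test clips")

sample = split["train"][0]
video_id, start_frame, end_frame = sample["id"]   # end_frame is exclusive
print(video_id, start_frame, end_frame)
print(sample["speaker"], sample["listener"])      # each is 'p0' or 'p1'
print(sample["asr"][:100])                        # Whisper transcript of the clip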

[3.1] benchmark/embeddings.pkl

Pickle file containing visual embeddings of the listener frames in the training/testing clips, as computed by several pretrained face models implemented in deepface. The file stores a dictionary with format

{
    f'{video_id}.{start_frame}.{end_frame}': {
        <model_name_1>: <array of listener embeddings>,
        <model_name_2>: <array of listener embeddings>,
        ...
    },
    ...
}
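
The exact model names depend on which pretrained deepface models were used, so the sketch below (assuming the file sits under benchmark/ locally) simply inspects whatever keys are present:

import pickle

# Load the precomputed listener embeddings.
with open("benchmark/embeddings.pkl", "rb") as f:
    embeddings = pickle.load(f)

clip_key = next(iter(embeddings))         # '<video_id>.<start_frame>.<end_frame>'
print(clip_key)
for model_name, embs in embeddings[clip_key].items():
    print(model_name, getattr(embs, "shape", len(embs)))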

[4] annotations.tar.gz

Contains face bounding box and active speaker annotations for every frame of each video. Annotations for video <video_id>.avi are contained in file <video_id>.json, which stores a nested dictionary structure:

{
    str(frame_number): {
        'people': {
            'p0': {'score': float, 'bbox': array},
            'p1': {'score': float, 'bbox': array}
        },
        'current_speaker': 'p0'|'p1'|None
    },
    ...
}

The 'score' field stores the active speaker score predicted by TalkNet-ASD; larger positive values indicate a higher probability that the person is speaking. Note also that the 'people' subdictionary may contain either, both, or neither of the keys 'p0' and 'p1', depending on who is visible in the frame.
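
A minimal sketch of loading the annotations for one video and tallying the active speaker per frame (the annotations/ directory layout is an assumption):

import json
from collections import Counter

# Load per-frame annotations for one video.
with open("annotations/5hxY5Svr2aM.json") as f:
    annotations = json.load(f)

# 'current_speaker' is 'p0', 'p1', or null (None) when nobody is speaking.
print(Counter(frame["current_speaker"] for frame in annotations.values()))

frame0 = annotations["0"]                 # frame numbers are 0-indexed strings
for person, info in frame0["people"].items():
    print(person, info["score"], info["bbox"])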

[5] emoca.tar.gz

Contains EMOCA embeddings for nearly all frames in all the videos. The embeddings for <video_id>.avi are contained in the pickle file <video_id>.pkl, which has the dictionary structure

{
    int(frame_number): {
        'p0': <embedding dict from EMOCA>,
        'p1': <embedding dict from EMOCA>
    },
    ...
}

Note that some frames may be missing embeddings due to occlusions or failures in face detection.
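
A minimal sketch of reading the EMOCA embeddings for one video (the emoca/ path is an assumption); since some frames or people may be missing, index defensively:

import pickle

# Load the per-frame EMOCA embeddings for one video.
with open("emoca/5hxY5Svr2aM.pkl", "rb") as f:
    emoca = pickle.load(f)

frame = emoca.get(0, {})                  # keys are int frame numbers
for person, emb_dict in frame.items():    # 'p0' and/or 'p1', if detected
    print(person, list(emb_dict.keys()))  # EMOCA parameter names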

Dataset Card Authors

Scott Geng

Dataset Card Contact

sgeng@cs.washington.edu
