
Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation

Overview

Allo-AVA (Allocentric Audio-Visual Avatar) is a large-scale multimodal dataset designed for research and development in avatar animation. It focuses on generating natural, contextually appropriate gestures from text and audio inputs from an allocentric (third-person) perspective. The dataset addresses the scarcity of high-quality multimodal data capturing the intricate synchronization between speech, facial expressions, and body movements that is essential for creating lifelike avatar animations in virtual environments.


Dataset Statistics

  • Total Videos: 7,500
  • Total Duration: 1,250 hours
  • Average Video Length: 10 minutes
  • Unique Speakers: ~3,500
  • Total Word Count: 15 million
  • Average Words per Minute: 208
  • Total Keypoints: ~135 billion
  • Dataset Size: 2.46 TB

Content Distribution

  • TED Talks: 40%
  • Interviews: 30%
  • Panel Discussions: 20%
  • Formal Presentations: 10%

Directory Structure

Allo-AVA/
├── video/
├── audio/
├── transcript/
├── keypoints/
└── keypoints_video/
  • video/: Original MP4 video files.
  • audio/: Extracted WAV audio files.
  • transcript/: JSON files with word-level transcriptions and timestamps.
  • keypoints/: JSON files with frame-level keypoint data.
  • keypoints_video/: MP4 files visualizing the extracted keypoints overlaid on the original video.

File Formats

  • Video: MP4 (1080p, 30 fps)
  • Audio: WAV (16-bit PCM, 48 kHz)
  • Transcripts: JSON format with word-level timestamps.
  • Keypoints: JSON format containing normalized keypoint coordinates.
  • Keypoints Video: MP4 format with keypoints overlaid on the original video frames.

Keypoint Data

The dataset includes detailed keypoint information extracted using a fusion of OpenPose and MediaPipe models, capturing comprehensive body pose and movement data.

Keypoint Extraction Models

  • OpenPose:
    • Extracts 18 keypoints corresponding to major body joints.
    • Robust for full-body pose estimation.
  • MediaPipe:
    • Provides 32 additional keypoints with enhanced detail on hands and facial landmarks.
    • Precise capture of subtle gestures and expressions.
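
Together, the two models contribute 50 keypoints per frame. The sketch below shows one plausible way to separate a frame's fused keypoint list back into its two sources; the ordering assumed here (OpenPose body joints first, then MediaPipe landmarks) is not specified above and should be verified against the actual keypoint files.

OPENPOSE_COUNT = 18   # major body joints (assumed to come first)
MEDIAPIPE_COUNT = 32  # hand and facial landmarks (assumed to follow)

def split_keypoints(frame_keypoints):
    """Split a frame's fused keypoint list into per-model groups.

    Assumes OpenPose keypoints precede MediaPipe keypoints.
    """
    assert len(frame_keypoints) == OPENPOSE_COUNT + MEDIAPIPE_COUNT
    body = frame_keypoints[:OPENPOSE_COUNT]
    hands_and_face = frame_keypoints[OPENPOSE_COUNT:]
    return body, hands_and_face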

Keypoint Structure

Each keypoint is represented by:

  • x: Horizontal position, normalized to [0, 1] from left to right of the frame.
  • y: Vertical position, normalized to [0, 1] from top to bottom of the frame.
  • z: Depth, normalized to [-1, 1], with 0 at the camera plane.
  • visibility: Confidence score in [0.0, 1.0], indicating the keypoint's presence and accuracy.

Example Keypoint Entry:

{
    "timestamp": 0.167,
    "keypoints": [
        {
            "x": 0.32285,
            "y": 0.25760,
            "z": -0.27907,
            "visibility": 0.99733
        },
        ...
    ],
    "transcript": "Today you're going to..."
}
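
Because coordinates are normalized, projecting a keypoint back onto a video frame only requires scaling by the frame dimensions. A minimal sketch using the dataset's 1080p (1920x1080) frame size:

def to_pixels(kp, frame_width=1920, frame_height=1080):
    """Convert a normalized keypoint dict to pixel coordinates."""
    return kp["x"] * frame_width, kp["y"] * frame_height

# Using the example entry above: x ≈ 619.9 px, y ≈ 278.2 px
x_px, y_px = to_pixels({"x": 0.32285, "y": 0.25760})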

Usage

Downloading the Dataset

To obtain access to the Allo-AVA dataset, please contact us for download instructions.

Extracting the Dataset

Once downloaded, extract the dataset to your desired directory:

unzip allo-ava-dataset.zip -d /path/to/destination
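
After extraction, a quick sanity check that the directory layout shown above is in place can save debugging later. A minimal sketch in Python:

import os

root = "/path/to/destination/Allo-AVA"
for name in ["video", "audio", "transcript", "keypoints", "keypoints_video"]:
    path = os.path.join(root, name)
    if os.path.isdir(path):
        print(f"{name}/: {len(os.listdir(path))} files")
    else:
        print(f"{name}/: MISSING")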

Accessing the Data

You can use various programming languages or tools to process the dataset. Below is an example using Python.

Example Usage in Python

import json
import cv2
import librosa

# Paths to data
video_id = "example_video_id"
video_path = f"Allo-AVA/video/{video_id}.mp4"
audio_path = f"Allo-AVA/audio/{video_id}.wav"
transcript_path = f"Allo-AVA/transcript/{video_id}.json"
keypoints_path = f"Allo-AVA/keypoints/{video_id}.json"

# Load video
cap = cv2.VideoCapture(video_path)

# Load audio
audio, sr = librosa.load(audio_path, sr=48000)

# Load transcript
with open(transcript_path, 'r') as f:
    transcript = json.load(f)

# Load keypoints
with open(keypoints_path, 'r') as f:
    keypoints = json.load(f)

# Process the data as needed; one common task, synchronizing keypoint
# entries with video frames, is sketched below.
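
Building on the loading code above, the following sketch pairs each keypoint entry with its nearest video frame via the entry's timestamp. It assumes the keypoints JSON is a list of entries shaped like the example shown earlier; verify the top-level structure against the actual files.

fps = cap.get(cv2.CAP_PROP_FPS)  # 30 fps per the dataset specification

for entry in keypoints:
    # Map the entry's timestamp (in seconds) to the nearest frame index
    frame_idx = round(entry["timestamp"] * fps)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
    ok, frame = cap.read()
    if not ok:
        break
    # frame (a BGR image) and entry["keypoints"] are now time-aligned

cap.release()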

Ethical Considerations

  • Data Source: All videos were collected from publicly available sources such as YouTube, adhering to their terms of service.
  • Privacy:
    • Face Blurring: Faces in keypoint visualization videos have been blurred to protect individual identities.
    • Voice Anonymization: Voice pitch modification has been applied to audio files to anonymize speakers.
    • Transcript Sanitization: Personal identifiers (e.g., names, locations) in transcripts have been replaced with placeholders.
  • Usage Guidelines:
    • The dataset is intended for research and educational purposes only.
    • Users must comply with all applicable laws and regulations regarding data privacy and intellectual property.
    • Any use of the dataset must respect the rights and privacy of individuals represented in the data.

License

The Allo-AVA dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

Please refer to the LICENSE file for more details.


Future Work

Planned enhancements for the Allo-AVA dataset include:

  • Expanding Linguistic and Cultural Diversity: Incorporating more languages and cultural contexts to enable cross-cultural studies.
  • Enhanced Annotations: Adding fine-grained labels for gestures, emotions, and semantic meanings.
  • Multiview Recordings: Including multiview videos to support 3D reconstruction and the study of interactive behaviors.
  • Improved Synchronization: Refining multimodal synchronization to capture subtle expressions and micro-movements.
  • Domain-Specific Subsets: Creating subsets tailored to specific research domains or applications.

Citing Allo-AVA

If you use the Allo-AVA dataset in your research, please cite our paper:

@inproceedings{punjwani2024alloava,
  title={Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation},
  author={Punjwani, Saif and Heck, Larry},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
  year={2024}
}

Contact

For any questions or issues regarding the Allo-AVA dataset, please contact the authors.


Acknowledgments

We thank all the content creators whose public videos contributed to this dataset. This work was supported by [list any funding sources or supporting organizations].


Disclaimer

The authors are not responsible for any misuse of the dataset. Users are expected to comply with all relevant ethical guidelines and legal regulations when using the dataset.


Thank you for your interest in the Allo-AVA dataset! We hope it serves as a valuable resource for advancing research in avatar animation and human-computer interaction.