Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation

Overview

Allo-AVA (Allocentric Audio-Visual Avatar) is a large-scale multimodal dataset for research and development in avatar animation. It targets the generation of natural, contextually appropriate gestures from text and audio inputs, captured from an allocentric (third-person) perspective. The dataset addresses the scarcity of high-quality multimodal data that captures the intricate synchronization between speech, facial expressions, and body movements, which is essential for creating lifelike avatar animations in virtual environments.


Dataset Statistics

  • Total Videos: 7,500
  • Total Duration: 1,250 hours
  • Average Video Length: 10 minutes
  • Unique Speakers: ~3,500
  • Total Word Count: 15 million
  • Average Words per Minute: 208
  • Total Keypoints: ~135 billion
  • Dataset Size: 2.46 TB

Content Distribution

  • TED Talks: 40%
  • Interviews: 30%
  • Panel Discussions: 20%
  • Formal Presentations: 10%

Directory Structure

Allo-AVA/
├── video/
├── audio/
├── transcript/
├── keypoints/
└── keypoints_video/
  • video/: Original MP4 video files.
  • audio/: Extracted WAV audio files.
  • transcript/: JSON files with word-level transcriptions and timestamps.
  • keypoints/: JSON files with frame-level keypoint data.
  • keypoints_video/: MP4 files visualizing the extracted keypoints overlaid on the original video.

File Formats

  • Video: MP4 (1080p, 30 fps)
  • Audio: WAV (16-bit PCM, 48 kHz)
  • Transcripts: JSON format with word-level timestamps.
  • Keypoints: JSON format containing normalized keypoint coordinates.
  • Keypoints Video: MP4 format with keypoints overlaid on the original video frames.
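
The audio format above is easy to verify programmatically. The sketch below (the file name is illustrative, not an actual dataset entry) uses Python's standard wave module to confirm that an extracted file is 16-bit PCM at 48 kHz:

import wave

# Sanity-check an audio file against the documented format.
# The file name is illustrative, not an actual dataset entry.
with wave.open("Allo-AVA/audio/example_video_id.wav", "rb") as w:
    assert w.getframerate() == 48000  # 48 kHz sample rate
    assert w.getsampwidth() == 2      # 16-bit PCM = 2 bytes per sample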

Keypoint Data

The dataset includes detailed keypoint information extracted using a fusion of OpenPose and MediaPipe models, capturing comprehensive body pose and movement data; a minimal per-frame extraction sketch follows the model descriptions below.

Keypoint Extraction Models

  • OpenPose:
    • Extracts 18 keypoints corresponding to major body joints.
    • Robust for full-body pose estimation.
  • MediaPipe:
    • Provides 32 additional keypoints with enhanced detail on hands and facial landmarks.
    • Precise capture of subtle gestures and expressions.
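
The exact OpenPose/MediaPipe fusion step is not detailed in this card. For intuition, the following minimal sketch shows how per-frame normalized landmarks with a visibility score (analogous to the fields described under Keypoint Structure) can be extracted with MediaPipe alone; it is an illustration, not the dataset's actual pipeline:

import cv2
import mediapipe as mp

def extract_normalized_landmarks(video_path):
    # Yield (timestamp_seconds, keypoints) per frame using MediaPipe Pose.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # dataset videos are 30 fps
    frame_idx = 0
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                keypoints = [
                    {"x": lm.x, "y": lm.y, "z": lm.z, "visibility": lm.visibility}
                    for lm in result.pose_landmarks.landmark
                ]
                yield frame_idx / fps, keypoints
            frame_idx += 1
    cap.release()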

Keypoint Structure

Each keypoint is represented by:

  • x: Horizontal position, normalized to [0, 1] from left to right of the frame.
  • y: Vertical position, normalized to [0, 1] from top to bottom of the frame.
  • z: Depth, normalized to [-1, 1], with 0 at the camera plane.
  • visibility: Confidence score in [0.0, 1.0], indicating the keypoint's presence and accuracy.

Example Keypoint Entry:

{
    "timestamp": 0.167,
    "keypoints": [
        {
            "x": 0.32285,
            "y": 0.25760,
            "z": -0.27907,
            "visibility": 0.99733
        },
        ...
    ],
    "transcript": "Today you're going to..."
}
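
Because coordinates are normalized, mapping a keypoint back onto a frame means scaling by the frame dimensions. A minimal, hypothetical helper (the 1920x1080 defaults match the dataset's 1080p videos):

# Entry mirroring the example above (hypothetical values).
entry = {
    "timestamp": 0.167,
    "keypoints": [
        {"x": 0.32285, "y": 0.25760, "z": -0.27907, "visibility": 0.99733}
    ]
}

def to_pixels(kp, width=1920, height=1080):
    # Scale normalized [0, 1] coordinates to pixel positions.
    return (kp["x"] * width, kp["y"] * height)

# Keep only confidently detected keypoints before rendering.
points = [to_pixels(kp) for kp in entry["keypoints"] if kp["visibility"] > 0.5]
print(points)  # [(619.872, 278.208)]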

Usage

Downloading the Dataset

To obtain access to the Allo-AVA dataset, please contact us for download instructions.

Extracting the Dataset

Once downloaded, extract the dataset to your desired directory:

unzip allo-ava-dataset.zip -d /path/to/destination
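
After extraction, a quick sanity check (a sketch; adjust the root to your destination path) confirms the expected directory layout:

import os

# Verify the five expected subdirectories exist under the extracted root.
root = "/path/to/destination/Allo-AVA"
expected = ["video", "audio", "transcript", "keypoints", "keypoints_video"]
missing = [d for d in expected if not os.path.isdir(os.path.join(root, d))]
print("Missing subdirectories:", missing or "none")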

Accessing the Data

You can use various programming languages or tools to process the dataset. Below is an example using Python.

Example Usage in Python

import json
import cv2
import librosa

# Paths to data
video_id = "example_video_id"
video_path = f"Allo-AVA/video/{video_id}.mp4"
audio_path = f"Allo-AVA/audio/{video_id}.wav"
transcript_path = f"Allo-AVA/transcript/{video_id}.json"
keypoints_path = f"Allo-AVA/keypoints/{video_id}.json"

# Load video
cap = cv2.VideoCapture(video_path)

# Load audio at the dataset's native 48 kHz sample rate
audio, sr = librosa.load(audio_path, sr=48000)

# Load transcript
with open(transcript_path, 'r') as f:
    transcript = json.load(f)

# Load keypoints
with open(keypoints_path, 'r') as f:
    keypoints = json.load(f)

# Your processing code here
# For example, iterate over keypoints and synchronize with video frames
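
Building on the example above, the sketch below aligns keypoint entries with video frames by timestamp. It assumes the keypoints JSON is a list of entries shaped like the example under Keypoint Structure (with a top-level timestamp in seconds), which this card does not spell out explicitly:

import json
import cv2

def iter_synced_frames(video_path, keypoints_path):
    # Yield (frame, entry) pairs by seeking to the frame nearest each timestamp.
    with open(keypoints_path, 'r') as f:
        entries = json.load(f)  # assumed: a list of entries with "timestamp"
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # dataset videos are 30 fps
    for entry in entries:
        cap.set(cv2.CAP_PROP_POS_FRAMES, round(entry["timestamp"] * fps))
        ok, frame = cap.read()
        if not ok:
            break
        yield frame, entry
    cap.release()

for frame, entry in iter_synced_frames(video_path, keypoints_path):
    # e.g., overlay entry["keypoints"] on the frame, or pair gestures with
    # the words in entry["transcript"].
    pass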

Ethical Considerations

  • Data Source: All videos were collected from publicly available sources such as YouTube, in adherence to their terms of service.
  • Privacy:
    • Face Blurring: Faces in keypoint visualization videos have been blurred to protect individual identities.
    • Voice Anonymization: Voice pitch modification has been applied to audio files to anonymize speakers.
    • Transcript Sanitization: Personal identifiers (e.g., names, locations) in transcripts have been replaced with placeholders.
  • Usage Guidelines:
    • The dataset is intended for research and educational purposes only.
    • Users must comply with all applicable laws and regulations regarding data privacy and intellectual property.
    • Any use of the dataset must respect the rights and privacy of individuals represented in the data.

License

The Allo-AVA dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

Please refer to the LICENSE file for more details.


Future Work

Planned enhancements for the Allo-AVA dataset include:

  • Expanding Linguistic and Cultural Diversity: Incorporating more languages and cultural contexts to enable cross-cultural studies.
  • Enhanced Annotations: Adding fine-grained labels for gestures, emotions, and semantic meanings.
  • Multiview Recordings: Including multiview videos to support 3D reconstruction and the study of interactive behaviors.
  • Improved Synchronization: Refining multimodal synchronization to capture subtle expressions and micro-movements.
  • Domain-Specific Subsets: Creating subsets tailored to specific research domains or applications.

Citing Allo-AVA

If you use the Allo-AVA dataset in your research, please cite our paper:

@inproceedings{punjwani2024alloava,
  title={Allo-AVA: A Large-Scale Multimodal Dataset for Allocentric Avatar Animation},
  author={Punjwani, Saif and Heck, Larry},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
  year={2024}
}

Contact

For any questions or issues regarding the Allo-AVA dataset, please contact:


Acknowledgments

We thank all the content creators whose public videos contributed to this dataset. This work was supported by [list any funding sources or supporting organizations].


Disclaimer

The authors are not responsible for any misuse of the dataset. Users are expected to comply with all relevant ethical guidelines and legal regulations when using the dataset.


Thank you for your interest in the Allo-AVA dataset! We hope it serves as a valuable resource for advancing research in avatar animation and human-computer interaction.
