
Nymeria Dataset

[Project Page] [Data Explorer] [Code] [Paper]

Nymeria dataset teaser with 100 random samples

Nymeria is the world's largest dataset of human motion in the wild, capturing diverse people performing diverse activities across diverse locations. It is the first of its kind to record body motion using multiple egocentric multimodal devices, all accurately synchronized and localized in one metric 3D world. Nymeria is also the world's largest motion dataset with natural language descriptions. The dataset is designed to accelerate research in egocentric human motion understanding, and presents exciting challenges to advance contextualized computing and future AR/VR technology.

Dataset Summary

The Nymeria dataset records over 300 hours of human motion across 1200 sequences. It captures the rich diversity of everyday activities from 264 participants performing 20 unscripted scenarios across 50 indoor and outdoor locations. During data capture, participants wear an inertial motion-capture suit that provides ground-truth kinematic body motion at 240 Hz, which is retargeted onto a linear blend skinning (LBS) human model using Meta Momentum. In addition, participants wear a pair of Project Aria glasses and two Project Aria-like wristbands to collect multimodal egocentric data, including videos, IMU, magnetometer, barometer, audio and eye-tracking signals. Each sequence also includes an observer, who wears a pair of Project Aria glasses to record the scene from a third-person perspective. All devices are precisely synchronized via a hardware solution and localized in one metric 3D world. Collectively, the traveling distances of the participant headsets and wristbands are approximately 400 km and 1053 km, respectively.

To further connect human motion with natural language, human annotators review playback renderings of the scene and motion, and narrate the motion coarse-to-fine: detail-oriented motion narration, simplified atomic actions and high-level activity summarization. In total, the dataset provides 310.5K sentences comprising 8.64M words, with a vocabulary size of 6545.
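Because all devices are synchronized in one time base, streams recorded at different rates (e.g. the 240 Hz mocap suit versus lower-rate video frames) can be associated by timestamp. The sketch below is a hypothetical illustration of such nearest-timestamp alignment using synthetic data; it does not reflect the actual Nymeria file format or the official tooling in the GitHub repository.

```python
import numpy as np

def nearest_mocap_indices(mocap_ts, frame_ts):
    """For each frame timestamp, return the index of the closest mocap
    sample. Both timestamp arrays are assumed sorted ascending (seconds)."""
    idx = np.searchsorted(mocap_ts, frame_ts)
    idx = np.clip(idx, 1, len(mocap_ts) - 1)
    left = mocap_ts[idx - 1]
    right = mocap_ts[idx]
    # Step back one index where the left neighbor is strictly closer.
    idx -= frame_ts - left < right - frame_ts
    return idx

# One second of synthetic timestamps: mocap at 240 Hz, video at 30 fps.
mocap_ts = np.arange(0, 1, 1 / 240)
frame_ts = np.arange(0, 1, 1 / 30)
idx = nearest_mocap_indices(mocap_ts, frame_ts)
print(idx[:4])  # every 8th mocap sample coincides with a video frame
```

With exact hardware synchronization the lookup degenerates to a fixed stride (240/30 = 8 here); with real, slightly jittered timestamps the nearest-neighbor search remains valid.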

For more details, please visit the project page, access the dataset online, explore the GitHub repository and read the ECCV 2024 paper.

Citation

When using the Nymeria dataset and code, please attribute it as follows:

@inproceedings{ma24eccv,
      title={Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild},
      author={Lingni Ma and Yuting Ye and Fangzhou Hong and Vladimir Guzov and Yifeng Jiang and Rowan Postyeni and Luis Pesqueira and Alexander Gamino and Vijay Baiyya and Hyo Jin Kim and Kevin Bailey and David Soriano Fosas and C. Karen Liu and Ziwei Liu and Jakob Engel and Renzo De Nardi and Richard Newcombe},
      booktitle={The 18th European Conference on Computer Vision (ECCV)},
      year={2024},
      url={https://arxiv.org/abs/2406.09905},
}

License

The Nymeria dataset and code are released by Meta under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). The data and code may not be used for commercial purposes.

Contributors

Lingni Ma (@summericequeen), Yuting Ye, Fangzhou Hong, Vladimir Guzov, Yifeng Jiang, Rowan Postyeni, Luis Pesqueira, Alexander Gamino, Vijay Baiyya, Hyo Jin Kim, Kevin Bailey, David Soriano Fosas, C. Karen Liu, Ziwei Liu, Jakob Engel, Renzo De Nardi, Richard Newcombe
