EgoMe: A New Dataset and Challenge for Following Me via Egocentric View in Real World
Paper Link: EgoMe: A New Dataset and Challenge for Following Me via Egocentric View in Real World (arXiv:2501.19061).
Authors: Heqian Qiu, Zhaofeng Shi, Lanxiao Wang, Huiyu Xiong, Xiang Li, Hongliang Li
Institution: University of Electronic Science and Technology of China, Chengdu, China
Download
To download the EgoMe dataset, please sign the License Agreement to access our dataset and raw annotations.
Overview
EgoMe is a large-scale egocentric dataset for following the process of human imitation learning via the imitator's egocentric view in the real world. The dataset includes 7,902 paired exo-ego videos (15,804 videos in total) spanning diverse daily behaviors in various real-world scenarios. For each video pair, one video captures an exocentric view of the imitator observing the demonstrator's actions, while the other captures an egocentric view of the imitator subsequently following those actions. Notably, EgoMe uniquely incorporates exo-ego eye gaze, additional multi-modal IMU sensor data, and annotations at multiple levels of granularity to help establish correlations between the observing and imitating processes. We further provide a suite of challenging benchmarks (including exo-ego cross-modal retrieval, exo-ego gaze prediction, imitative action assessment and recognition, and coarse- and fine-level exo-ego video understanding) to fully leverage this data resource and promote research on robot imitation learning.
Dataset Structure
The dataset contains three main folders: Video, Gaze_IMU, and Annotation, with the detailed organization outlined below. Minimal Python sketches for reading each folder follow the directory tree.
In the Video folder, a file name containing "FALSE" indicates that the video records an incorrect imitation; otherwise, the imitation was performed correctly.
In the Gaze_IMU folder, each CSV file contains gaze-point data and IMU data, including gyroscope (Gyro), accelerometer (Accel), and magnetometer (Mag) readings.
In the Annotation folder, total.json holds all annotations, while train.json, val.json, and test.json hold the training, validation, and test splits, respectively.
EgoMe
├── Video
│   ├── CF3-A4-20240802_FALSE_240802093603_ego.mp4
│   ├── CF3-A4-20240802_FALSE_240802093603_exo.mp4
│   ├── CF3-A4-20240802_User1_240802092926_ego.mp4
│   ├── CF3-A4-20240802_User1_240802092926_exo.mp4
│   └── ...
├── Gaze_IMU
│   ├── CF3-A4-20240802_FALSE_240802093603_ego.csv
│   ├── CF3-A4-20240802_FALSE_240802093603_exo.csv
│   ├── CF3-A4-20240802_User1_240802092926_ego.csv
│   ├── CF3-A4-20240802_User1_240802092926_exo.csv
│   └── ...
└── Annotation
    ├── train.json
    ├── val.json
    ├── test.json
    └── total.json
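As a minimal sketch of working with the naming convention above (the helper and variable names here are ours, not part of an official loader), the exo/ego files of each recording can be paired and the correctness flag read directly from the file name:

```python
from pathlib import Path

def parse_video_name(path: Path) -> dict:
    """Split a name like CF3-A4-20240802_User1_240802092926_ego.mp4
    into its parts."""
    session, subject, timestamp, view = path.stem.split("_")
    return {
        "session": session,      # e.g. CF3-A4-20240802
        "subject": subject,      # user id, or "FALSE" for an incorrect imitation
        "timestamp": timestamp,  # recording timestamp
        "view": view,            # "ego" or "exo"
        "correct": subject != "FALSE",
    }

# Group the exocentric and egocentric videos of each recording into pairs.
pairs = {}
for video in sorted(Path("EgoMe/Video").glob("*.mp4")):
    info = parse_video_name(video)
    key = (info["session"], info["subject"], info["timestamp"])
    pairs.setdefault(key, {})[info["view"]] = video
```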
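Reading a Gaze_IMU file with pandas might look like the following. Only the presence of gaze points and Gyro/Accel/Mag channels is documented here, so the exact column names are an assumption to verify against the real CSV header:

```python
import pandas as pd

# Load one recording's gaze + IMU stream.
df = pd.read_csv("EgoMe/Gaze_IMU/CF3-A4-20240802_User1_240802092926_ego.csv")

# The exact column names are not documented in this card -- inspect them first.
print(df.columns.tolist())

# Select channels by the sensor names documented above (Gyro, Accel, Mag),
# assuming those names appear in the column headers.
imu = df.filter(regex="Gyro|Accel|Mag")
gaze = df.filter(regex="(?i)gaze")  # hypothetical: gaze-point columns
```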
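Similarly, the annotation splits can be loaded with the standard json module. The per-entry schema is not documented in this card, so it is worth inspecting one record before relying on specific fields:

```python
import json

# Load the training split (total.json aggregates all splits).
with open("EgoMe/Annotation/train.json") as f:
    train = json.load(f)

# The record schema is undocumented here -- peek at one entry first.
sample = train[0] if isinstance(train, list) else next(iter(train.values()))
print(sample)
```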
Citation
If you find our EgoMe dataset and benchmarks useful for your research, please consider citing:
@article{qiu2025egome,
  title={EgoMe: A New Dataset and Challenge for Following Me via Egocentric View in Real World},
  author={Qiu, Heqian and Shi, Zhaofeng and Wang, Lanxiao and Xiong, Huiyu and Li, Xiang and Li, Hongliang},
  journal={arXiv preprint arXiv:2501.19061},
  year={2025}
}
Acknowledgement
This dataset was jointly created by researchers from the Intelligent Visual Information Processing Laboratory at the University of Electronic Science and Technology of China.
Dataset Card Contact
Primary contact: Heqian Qiu (Email: hqqiu@uestc.edu.cn)