- Dataset Layout
- High-Level Contents
- File Meanings
  - videos/left_camera.mp4 and videos/right_camera.mp4
  - tracking/hmd_poses.csv
  - tracking/camera_pose_tracking.jsonl
  - tracking/tracking_vuer_compat.jsonl
  - camera/left_camera_image_format.json and camera/right_camera_image_format.json
  - camera/left_camera_characteristics.json and camera/right_camera_characteristics.json
- Time and Synchronization
- Coordinate and Transform Notes
Meta Quest Ego Dataset
This dataset contains 14 short egocentric capture episodes recorded from a Meta Quest-style headset. Each episode includes synchronized stereo camera video, headset pose, camera calibration, per-camera pose tracking, and a Vuer-compatible tracking stream with hand landmarks.
This README describes the dataset structure and sensor streams included in each episode.
Dataset Layout
.
+-- episode_001/
| +-- videos/
| | +-- left_camera.mp4
| | +-- right_camera.mp4
| +-- camera/
| | +-- left_camera_image_format.json
| | +-- right_camera_image_format.json
| | +-- left_camera_characteristics.json
| | +-- right_camera_characteristics.json
| +-- tracking/
| +-- hmd_poses.csv
| +-- camera_pose_tracking.jsonl
| +-- tracking_vuer_compat.jsonl
+-- episode_002/
+-- ...
Each episode_XXX directory follows the same file naming scheme.
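Given the layout above, episodes and their expected files can be enumerated with a short helper. This is a sketch; the path names come from the tree above, and the dictionary keys are arbitrary labels chosen here:

```python
from pathlib import Path

def list_episodes(root):
    """Return sorted episode directories matching the episode_XXX scheme."""
    return sorted(p for p in Path(root).glob("episode_*") if p.is_dir())

def episode_files(episode_dir):
    """Map the per-episode files, assuming the layout shown above."""
    e = Path(episode_dir)
    return {
        "left_video": e / "videos" / "left_camera.mp4",
        "right_video": e / "videos" / "right_camera.mp4",
        "hmd_poses": e / "tracking" / "hmd_poses.csv",
        "camera_poses": e / "tracking" / "camera_pose_tracking.jsonl",
        "vuer": e / "tracking" / "tracking_vuer_compat.jsonl",
    }
```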
High-Level Contents
- Episodes: 14
- Total size: about 20 GB
- Video files: 28 MP4 files, one left and one right camera stream per episode
- Tabular pose files: 14 CSV files
- Tracking streams: 28 JSONL files
- Camera metadata files: 56 JSON files
File Meanings
videos/left_camera.mp4 and videos/right_camera.mp4
Stereo egocentric camera streams for the left and right headset cameras. The files are MP4 containers with H.264 video. Camera resolution is 1280 x 1280 across the dataset.
tracking/hmd_poses.csv
Head-mounted display pose samples. Columns:
- unix_time: Unix epoch timestamp in milliseconds.
- ovr_timestamp: Oculus/OVR runtime timestamp in seconds.
- pos_x, pos_y, pos_z: headset position in the tracking coordinate frame.
- rot_x, rot_y, rot_z, rot_w: headset orientation quaternion in x, y, z, w order.
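The CSV can be parsed with the standard library; a minimal sketch using the column names above (the sample row here is synthetic, not from the dataset):

```python
import csv
import io

def read_hmd_poses(fileobj):
    """Parse hmd_poses.csv rows into position and quaternion samples."""
    rows = []
    for r in csv.DictReader(fileobj):
        rows.append({
            "unix_time": int(r["unix_time"]),            # Unix epoch, ms
            "ovr_timestamp": float(r["ovr_timestamp"]),  # OVR runtime, s
            "pos": (float(r["pos_x"]), float(r["pos_y"]), float(r["pos_z"])),
            # quaternion stored in x, y, z, w order
            "quat_xyzw": (float(r["rot_x"]), float(r["rot_y"]),
                          float(r["rot_z"]), float(r["rot_w"])),
        })
    return rows

# Synthetic sample row; values are illustrative only.
sample = io.StringIO(
    "unix_time,ovr_timestamp,pos_x,pos_y,pos_z,rot_x,rot_y,rot_z,rot_w\n"
    "1700000000000,12.5,0.1,1.6,-0.2,0.0,0.0,0.0,1.0\n"
)
poses = read_hmd_poses(sample)
```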
tracking/camera_pose_tracking.jsonl
Per-camera pose tracking stream. Each line is a JSON object. Key fields:
- t_unix_ms: Unix epoch timestamp in milliseconds.
- side: left or right, identifying the camera stream.
- ovr_time_sec: Oculus/OVR runtime timestamp in seconds.
- ovr_time_source: timestamp source, e.g. camera_monotonic.
- head.matrix_col_major_4x4: headset pose as a 4 x 4 homogeneous transform flattened in column-major order.
- characteristics.translation: camera translation relative to the headset/gyro reference frame.
- characteristics.rotation_xyzw: camera orientation quaternion in x, y, z, w order.
- camera.matrix_col_major_4x4: camera pose as a 4 x 4 homogeneous transform flattened in column-major order.
Use the side field to associate rows with the left or right video stream.
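Filtering by side and unpacking the column-major transform can be sketched as below; the field names come from the list above, and the sample line is synthetic:

```python
import json

def camera_poses_for_side(lines, side):
    """Yield camera_pose_tracking.jsonl rows for one camera, with the
    flattened 4x4 camera transform unpacked from column-major order."""
    for line in lines:
        row = json.loads(line)
        if row["side"] != side:
            continue
        flat = row["camera"]["matrix_col_major_4x4"]
        # column-major: element (r, c) is stored at index c * 4 + r
        mat = [[flat[c * 4 + r] for c in range(4)] for r in range(4)]
        yield row["t_unix_ms"], mat

# One synthetic left-camera line with an identity rotation block.
flat = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0.5, 1.5, -0.3, 1]
line = json.dumps({"t_unix_ms": 1700000000000, "side": "left",
                   "camera": {"matrix_col_major_4x4": flat}})
t, mat = next(camera_poses_for_side([line], "left"))
```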
tracking/tracking_vuer_compat.jsonl
Merged tracking stream formatted for Vuer-style visualization or replay. Each line is a JSON object. Key fields:
- t_unix_ms: Unix epoch timestamp in milliseconds.
- ovr_time_sec: Oculus/OVR runtime timestamp in seconds.
- left_hand_sample_time_sec, right_hand_sample_time_sec: hand sample timestamps.
- camera.matrix_col_major_4x4: camera/head pose transform flattened in column-major order.
- leftHand, rightHand: 4 x 4 hand transforms flattened in column-major order.
- leftLandmarks, rightLandmarks: 25 3D hand landmark points per hand.
All-zero landmark rows represent unavailable hand tracking for that sample.
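The all-zero convention suggests a simple availability check before using a hand sample; a sketch (the single-point sample here is for illustration, real rows carry 25 points per hand):

```python
import json

def hand_available(landmarks):
    """Per the convention above, an all-zero landmark set means the hand
    was not tracked for that sample; 25 points are expected per hand."""
    return any(any(coord != 0.0 for coord in point) for point in landmarks)

row = json.loads(
    '{"leftLandmarks": [[0.0, 0.0, 0.0]], "rightLandmarks": [[0.1, 0.2, 0.3]]}'
)
```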
camera/left_camera_image_format.json and camera/right_camera_image_format.json
Per-camera image acquisition format. Key fields:
- width, height: camera image dimensions.
- format: image format at capture time.
- baseTime: monotonic-to-Unix timestamp anchor for aligning camera frames with tracking streams.
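This README states only that baseTime anchors monotonic time to Unix time. The sketch below assumes baseTime is the Unix epoch offset in milliseconds corresponding to monotonic time zero; verify the units and semantics against the actual files before relying on it:

```python
def monotonic_to_unix_ms(base_time_ms, monotonic_sec):
    """Hypothetical conversion: treats baseTime as the Unix time (ms)
    at monotonic zero. The README does not specify the units, so this
    is an assumption to check against the data."""
    return base_time_ms + int(monotonic_sec * 1000)
```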
camera/left_camera_characteristics.json and camera/right_camera_characteristics.json
Per-camera calibration and device metadata. Key fields:
- cameraId: device camera identifier.
- cameraPositionId: side/position identifier, 0 for left and 1 for right.
- pose.translation: camera translation relative to the headset/gyro reference frame.
- pose.rotation: camera orientation quaternion in x, y, z, w order.
- pose.reference: pose reference frame, here GYROSCOPE.
- intrinsics: pinhole camera parameters fx, fy, cx, cy, and skew.
- distortion: distortion coefficient field.
- sensor: physical and pixel-array metadata.
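The intrinsics fields map onto the standard 3x3 pinhole matrix; a sketch using the field names above (the numeric values here are illustrative, not taken from the dataset):

```python
def intrinsic_matrix(intr):
    """Assemble a 3x3 pinhole matrix in the standard convention from
    the fx, fy, cx, cy, skew fields of the characteristics file."""
    return [
        [intr["fx"], intr["skew"], intr["cx"]],
        [0.0, intr["fy"], intr["cy"]],
        [0.0, 0.0, 1.0],
    ]

# Illustrative values only.
K = intrinsic_matrix({"fx": 500.0, "fy": 500.0, "cx": 640.0, "cy": 640.0,
                      "skew": 0.0})
```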
Time and Synchronization
The dataset provides both wall-clock and runtime timestamps:
- CSV pose files use unix_time and ovr_timestamp.
- JSONL streams use t_unix_ms and ovr_time_sec.
- Camera image format files contain a monotonic-to-Unix anchor in baseTime.
For synchronization, align samples by nearest timestamp (ovr_timestamp/ovr_time_sec or Unix milliseconds) rather than by row number. Video frame counts and tracking row counts differ by stream and episode.
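Nearest-timestamp alignment can be done with a binary search over one stream's sorted timestamps; a minimal sketch:

```python
import bisect

def nearest_index(sorted_times, t):
    """Index of the sample whose timestamp is closest to t.
    sorted_times must be sorted ascending (timestamps in these
    streams are monotone within an episode)."""
    i = bisect.bisect_left(sorted_times, t)
    if i == 0:
        return 0
    if i == len(sorted_times):
        return len(sorted_times) - 1
    # pick whichever neighbor is closer
    return i if sorted_times[i] - t < t - sorted_times[i - 1] else i - 1

# Illustrative Unix-millisecond timestamps at roughly 30 Hz.
times = [1000, 1033, 1066, 1100]
```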
Coordinate and Transform Notes
- Quaternions are stored as x, y, z, w.
- Matrix fields named matrix_col_major_4x4 contain 16 values in column-major order.
- Positions and transforms are represented in the headset/OVR tracking frame.
- Translation entries in the 4 x 4 matrices appear in the final column when interpreted as column-major homogeneous transforms.
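Concretely, with column-major flattening the final column of the homogeneous transform occupies indices 12, 13, and 14 of the 16-value array; a sketch:

```python
def translation_from_col_major(flat16):
    """Extract the translation from a 4x4 homogeneous transform that was
    flattened column-major: column 3, rows 0-2 land at indices 12..14."""
    return flat16[12:15]

# Identity rotation with a translation of (4.0, 5.0, 6.0).
flat = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 4.0, 5.0, 6.0, 1]
```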