---
license: mit
task_categories:
- video-classification
- robotics
language:
- en
tags:
- egocentric
- exocentric
- first-person
- third-person
- robotics
- lerobot
- smartphone
size_categories:
- 1K<n<10K
---

# Video Subset Dataset
A HuggingFace LeRobot-compatible dataset containing 763 episodes of egocentric and exocentric videos recorded in indoor environments, with corresponding AR pose data.
## Dataset Summary
- Total Episodes: 763 (Egocentric: 243, Exocentric: 520)
- Total Frames: 321,178
- Frame Rate: 30.00 FPS
- Codebase Version: v2.0
## Tasks
The dataset contains two distinct viewpoint tasks:
- Task 0 (Egocentric): First-person view videos - 243 episodes (31.8%)
- Task 1 (Exocentric): Third-person view videos - 520 episodes (68.2%)
Each episode's data includes a `task_index` field to distinguish between egocentric and exocentric videos.
## Environments
Videos were collected from 4 different indoor rooms:
| Room | Episodes | Egocentric | Exocentric |
|---|---|---|---|
| Bedroom | 39 | 14 | 25 |
| Sandwich | 54 | 41 | 13 |
| Laundry | 436 | 156 | 280 |
| Bathroom | 234 | 32 | 202 |
## Dataset Structure

Each frame record in the parquet files follows this schema:
```python
{
    "episode_index": int,
    "frame_index": int,
    "timestamp": float,
    "observation.state": List[float],  # 7D state (placeholder)
    "action": List[float],             # 7D pose: [quat_x, quat_y, quat_z, quat_w, pos_x, pos_y, pos_z]
    "next.reward": float,
    "next.done": bool,
    "next.success": bool,
    "task_index": int,                 # 0 = egocentric, 1 = exocentric
    "index": int
}
```
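The same fields appear as columns in the frame-level parquet files. A minimal sketch for inspecting one chunk with pandas; the exact filename under `data/chunk-000/` is an assumption, so list the directory for the actual episode files:

```python
import pandas as pd

# Inspect the columns of one frame-level parquet file.
# The filename below is hypothetical; check data/chunk-000/ for the real names.
frames = pd.read_parquet("data/chunk-000/episode_000000.parquet")

print(frames.columns.tolist())
print(frames[["episode_index", "frame_index", "timestamp", "task_index"]].head())
```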
## Action Space

The `action` field contains 7-dimensional AR pose vectors:
- Quaternion (4 values): Camera rotation as [x, y, z, w]
- Position (3 values): Camera translation as [x, y, z]
This data comes from smartphone AR tracking during video recording.
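As a minimal sketch of working with this layout, the vector can be split into its quaternion and position parts and the rotation converted with SciPy (SciPy's `Rotation.from_quat` also uses `[x, y, z, w]` order; the helper name below is illustrative, not part of the dataset's utilities):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def decompose_action(action):
    """Split a 7D AR pose vector into a rotation matrix and a translation."""
    action = np.asarray(action, dtype=float)
    quat_xyzw = action[:4]   # camera rotation as [x, y, z, w]
    position = action[4:]    # camera translation as [x, y, z]
    # SciPy expects scalar-last quaternions, matching this dataset's layout.
    return Rotation.from_quat(quat_xyzw).as_matrix(), position


# Example: identity rotation at the origin.
R, t = decompose_action([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
print(R)  # 3x3 identity matrix
print(t)  # [0. 0. 0.]
```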
## Usage

### Load with HuggingFace Datasets
```python
from datasets import load_dataset

# Load full dataset
dataset = load_dataset("YOUR_USERNAME/seesawvideos-lerobot")

# Access frames
print(dataset['train'][0])
```
### Filter by Task (Egocentric vs Exocentric)
```python
# Filter egocentric only (task_index = 0)
egocentric = dataset['train'].filter(lambda x: x['task_index'] == 0)

# Filter exocentric only (task_index = 1)
exocentric = dataset['train'].filter(lambda x: x['task_index'] == 1)
```
### Using Episode Metadata

The `meta/episodes.csv` file contains detailed metadata for filtering:
```python
import pandas as pd

# Load episode metadata
episodes = pd.read_csv("meta/episodes.csv")

# Filter by task and room
ego_laundry = episodes[
    (episodes['task_label'] == 'egocentric') &
    (episodes['room'] == 'Laundry')
]
print(f"Found {len(ego_laundry)} egocentric Laundry episodes")
```
### Using Provided Utilities
```python
from utils import SeesawDatasetFilter

dataset = SeesawDatasetFilter(".")

# Get egocentric episodes
ego_episodes = dataset.get_episodes(task="egocentric")

# Filter by room and duration
bathroom_short = dataset.get_episodes(
    room="Bathroom",
    max_duration=10.0
)

# Get video paths
ego_videos = dataset.get_video_paths(task="egocentric")
```
## Files Included

- `data/chunk-000/*.parquet` - Frame-level data for all episodes
- `videos/chunk-000/observation.image/*.mp4` - Video files
- `meta/info.json` - Dataset metadata and configuration
- `meta/episodes.csv` - Per-episode metadata with labels and room info
- `meta/splits.json` - Pre-computed train/val/test splits (70/15/15); see the sketch after this list
- `utils.py` - Helper functions for filtering and loading
- `example_usage.py` - Complete usage examples
- `FILTERING_GUIDE.md` - Quick reference for filtering by task
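A minimal sketch for reading the metadata files above; the key names in `info.json` and the internal structure of `splits.json` (assumed here to map split names to lists of episode indices) are assumptions, so inspect the files and adapt:

```python
import json

# Dataset-level metadata (key names are assumptions; print `info` to see them all).
with open("meta/info.json") as f:
    info = json.load(f)
print(info.get("fps"), info.get("total_episodes"))

# splits.json is assumed to map split names to lists of episode indices (70/15/15).
with open("meta/splits.json") as f:
    splits = json.load(f)
print({name: len(episodes) for name, episodes in splits.items()})
```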
## Statistics

### Duration by Task
- Egocentric: 12.5 ± 11.2 seconds (range: 1.7 - 82.2s)
- Exocentric: 14.7 ± 22.7 seconds (range: 1.5 - 387.6s)
### Frames by Task
- Egocentric: 375 ± 335 frames (range: 51 - 2,467)
- Exocentric: 443 ± 680 frames (range: 46 - 11,629)
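These figures can be re-derived from the episode metadata. A minimal sketch, assuming `meta/episodes.csv` exposes `task_label` and a per-episode `duration` column (the column names are an assumption based on the filtering utilities above):

```python
import pandas as pd

# Recompute per-task duration statistics from the episode metadata.
# Column names `task_label` and `duration` are assumptions; check the CSV headers.
episodes = pd.read_csv("meta/episodes.csv")

stats = episodes.groupby("task_label")["duration"].agg(["mean", "std", "min", "max"])
print(stats.round(1))
```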
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{seesawvideos_lerobot_2026,
  title={SeesawVideos LeRobot Dataset},
  author={Your Name},
  year={2026},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/YOUR_USERNAME/seesawvideos-lerobot}}
}
```
## License
MIT License
## Additional Resources
- LeRobot Documentation
- Dataset Repository
- See `FILTERING_GUIDE.md` for detailed filtering instructions
- See `example_usage.py` for complete code examples