---
license: cc-by-4.0
task_categories:
  - robotics
tags:
  - video
  - prediction
  - robotics
  - manipulation
  - language
  - t5
  - bridge
  - open-x
---

# Bridge Dataset (Preprocessed for LPWM)

This repository contains preprocessed videos from the BRIDGE dataset (part of Open X-Embodiment), as used in the paper *Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling*.

Each video's text instruction is paired with a precomputed T5-large embedding to support language-conditioned world modeling.
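As a rough orientation, a language-conditioned training sample pairs a video clip with its instruction and the instruction's precomputed embedding. The sketch below is illustrative only: the field names, clip length, and frame size are assumptions, not this dataset's actual schema (consult the files in the repository for that); the one grounded number is T5-large's encoder hidden size of 1024.

```python
import numpy as np

# Illustrative layout of one language-conditioned sample.
# Shapes and field names are assumptions for illustration,
# NOT the dataset's actual on-disk schema.
T, H, W, C = 16, 256, 256, 3   # clip length and frame size (assumed)
D = 1024                        # T5-large encoder hidden size

sample = {
    "frames": np.zeros((T, H, W, C), dtype=np.uint8),    # preprocessed video clip
    "instruction": "put the spoon in the pot",           # raw language goal (example)
    "instruction_emb": np.zeros((D,), dtype=np.float32), # precomputed T5-large embedding
}

print(sample["frames"].shape, sample["instruction_emb"].shape)
```

Storing the embedding alongside the clip avoids running the T5 encoder at training time, which is the point of shipping the dataset preprocessed.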

## Description

The Latent Particle World Model (LPWM) is a self-supervised, object-centric world model that discovers keypoints, bounding boxes, and object masks directly from video data. This dataset contains BRIDGE data formatted for training LPWM, including language embeddings for goal conditioning.

## Citation

```bibtex
@inproceedings{daniel2026latent,
  title={Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling},
  author={Tal Daniel and Carl Qi and Dan Haramati and Amir Zadeh and Chuan Li and Aviv Tamar and Deepak Pathak and David Held},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=lTaPtGiUUc}
}
```