Dataset Preview

The preview shows the per-file meta.json entries, two rows per data file. The root-level entry of each file describes a TensorDict on cpu with the fields observation, distance (where applicable), done, terminated, action, next, and index; the nested next entry describes observation, distance, done, terminated, and reward. Observations, actions, rewards, and distances are torch.float32, done and terminated are torch.bool, and index is torch.int64. Previewed batch sizes and feature dimensions:

  • 1987 transitions: observation [1987, 5], action [1987, 1], distance [1987, 1]
  • 1612 transitions: observation [1612, 5], action [1612, 1], distance [1612, 1]
  • 5000 transitions: observation [5000, 6], action [5000, 2] (two files)
  • 1297 transitions: observation [1297, 4], action [1297, 4]
  • 10000 transitions: observation [10000, 4], action [10000, 4]
  • 9244 transitions: observation [9244, 7], action [9244, 4]
  • 10000 transitions: observation [10000, 7], action [10000, 4]

BricksRL Dataset Card

Dataset Summary

The BricksRL dataset contains curated data for three robotic configurations: 2Wheeler, Walker, and RoboArm. The dataset includes expert and random data for four key tasks: Walker-v0, RoboArm-v0, RunAway-v0, and Spinning-v0. The expert data was collected using a trained Soft Actor-Critic (SAC) agent, while the random data was generated by executing a random policy. This dataset is presented in the paper BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO (NeurIPS 2024). For more information, feel free to check out the project website.
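As an illustration, the snippet below sketches one way to fetch a single task's data from the Hub with huggingface_hub. The repository id comes from this dataset page; the sub-path (2Wheeler/RunAway/expert_data) and the local directory are only examples and should be adapted to the configuration and task you need.

```python
from huggingface_hub import snapshot_download

# Download only one task folder (example sub-path; adjust to the robot/task you want).
local_path = snapshot_download(
    repo_id="compsciencelab/BricksRL-Datasets",
    repo_type="dataset",
    allow_patterns=["2Wheeler/RunAway/expert_data/*"],
    local_dir="BricksRL-Datasets",  # hypothetical target folder
)
print(local_path)
```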

Supported Tasks

The dataset supports the following tasks across various robot configurations:

  • Walker-v0
  • RoboArm-v0
  • RunAway-v0
  • Spinning-v0

Dataset Structure

The dataset contains two types of data:

  • Expert Data: Collected by a trained SAC agent solving the tasks on the real robot. The agent was evaluated over 100 episodes for each task, recording all transitions.
  • Random Data: Generated by executing a random policy on the real robot for 100 episodes per task.

The datasets are TensorDicts that can be loaded directly into the replay buffer. When starting (pre-)training, provide the path to the desired TensorDict at the prompt for loading the replay buffer. Table 1 summarizes the dataset statistics: mean reward (expert data), number of transitions collected, and number of collection episodes.

[Table 1: dataset statistics]
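A minimal loading sketch, assuming a downloaded task folder is a memory-mapped TensorDict (a directory with meta.json and the associated tensor files, as the preview above suggests) and using a TorchRL replay buffer. The path, buffer size, and batch size are illustrative; this is not the official BricksRL training entry point.

```python
from tensordict import TensorDict
from torchrl.data import LazyTensorStorage, TensorDictReplayBuffer

# Illustrative path to a downloaded task folder (see the download sketch above).
data = TensorDict.load_memmap("BricksRL-Datasets/2Wheeler/RunAway/expert_data")

# Fill a TorchRL replay buffer with the recorded transitions and sample minibatches.
buffer = TensorDictReplayBuffer(storage=LazyTensorStorage(max_size=data.shape[0]))
buffer.extend(data)

batch = buffer.sample(256)
print(batch["observation"].shape, batch["next", "reward"].shape)
```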

Results and Evaluation

The dataset was used to train both online and offline RL algorithms (Table 2). Performance comparisons between these methods demonstrated the effectiveness of the offline RL algorithms, particularly when using expert data; online RL algorithms struggled to generalize and often overfit when provided with the expert demonstrations. For more details on the hyperparameters, please refer to the appendix of the paper.

[Table 2: online vs. offline RL results]

Citation

If you use the BricksRL dataset in your research, please cite the following paper:

@article{dittert2024bricksrl,
  title={BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO},
  author={Sebastian Dittert and Vincent Moens and Gianni De Fabritiis},
  journal={arXiv preprint arXiv:2406.17490},
  year={2024}
}