Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'display_name'}) and 18 missing columns ({'author_email', 'data_format', 'num_episodes_average_score', 'total_steps', 'total_episodes', 'code_permalink', 'env_spec', 'dataset_id', 'eval_env_spec', 'observation_space', 'action_space', 'algorithm_name', 'dataset_size', 'requirements', 'ref_max_score', 'author', 'minari_version', 'ref_min_score'}).

This happened while the json dataset builder was generating data using

hf://datasets/farama-minari/D4RL/antmaze/namespace_metadata.json (at revision 064ce527f209cf2ecd5f7d6f63ddc6ec83dafcad)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              display_name: string
              description: string
              to
              {'data_format': Value(dtype='string', id=None), 'observation_space': Value(dtype='string', id=None), 'action_space': Value(dtype='string', id=None), 'env_spec': Value(dtype='string', id=None), 'dataset_id': Value(dtype='string', id=None), 'algorithm_name': Value(dtype='string', id=None), 'author': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'author_email': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'code_permalink': Value(dtype='string', id=None), 'minari_version': Value(dtype='string', id=None), 'eval_env_spec': Value(dtype='string', id=None), 'ref_max_score': Value(dtype='float64', id=None), 'ref_min_score': Value(dtype='float64', id=None), 'num_episodes_average_score': Value(dtype='int64', id=None), 'total_episodes': Value(dtype='int64', id=None), 'total_steps': Value(dtype='int64', id=None), 'dataset_size': Value(dtype='float64', id=None), 'description': Value(dtype='string', id=None), 'requirements': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1412, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 988, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 new columns ({'display_name'}) and 18 missing columns ({'author_email', 'data_format', 'num_episodes_average_score', 'total_steps', 'total_episodes', 'code_permalink', 'env_spec', 'dataset_id', 'eval_env_spec', 'observation_space', 'action_space', 'algorithm_name', 'dataset_size', 'requirements', 'ref_max_score', 'author', 'minari_version', 'ref_min_score'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/farama-minari/D4RL/antmaze/namespace_metadata.json (at revision 064ce527f209cf2ecd5f7d6f63ddc6ec83dafcad)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
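The cast error comes from mixing two kinds of JSON files in one configuration: the per-dataset `metadata.json` files carry the full per-dataset schema listed further down, while each `namespace_metadata.json` only carries `display_name` and `description`. As a hedged workaround sketch (assuming the `huggingface_hub` package is available; the file and revision are the ones named in the error), the offending file can be fetched and inspected directly instead of going through the `datasets` JSON builder:

```python
import json

from huggingface_hub import hf_hub_download

# Download the exact file named in the error above and read it directly,
# bypassing the schema unification that the `datasets` JSON builder attempts.
path = hf_hub_download(
    repo_id="farama-minari/D4RL",
    filename="antmaze/namespace_metadata.json",
    repo_type="dataset",
    revision="064ce527f209cf2ecd5f7d6f63ddc6ec83dafcad",
)

with open(path) as f:
    namespace_metadata = json.load(f)

# Namespace files only carry the namespace-level keys, which is why they
# cannot be cast to the per-dataset metadata schema.
print(sorted(namespace_metadata))  # expected: ['description', 'display_name']
```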


| Column | Type |
| --- | --- |
| data_format | string |
| observation_space | string |
| action_space | string |
| env_spec | string |
| dataset_id | string |
| algorithm_name | string |
| author | sequence |
| author_email | sequence |
| code_permalink | string |
| minari_version | string |
| eval_env_spec | string |
| ref_max_score | float64 |
| ref_min_score | float64 |
| num_episodes_average_score | int64 |
| total_episodes | int64 |
| total_steps | int64 |
| dataset_size | float64 |
| description | string |
| requirements | sequence |
| display_name | string |
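In the rows below, `ref_min_score` and `ref_max_score` are the reference returns used for D4RL-style normalized scores, and `num_episodes_average_score` is the number of evaluation episodes the average return is taken over. A minimal sketch of that normalization (this mirrors the usual D4RL convention and is not a function shipped with this repository), using the `D4RL/antmaze/large-play-v1` reference values that appear below:

```python
def normalized_score(average_return: float, ref_min: float, ref_max: float) -> float:
    """Map an average undiscounted return onto the 0-100 D4RL scale."""
    return 100.0 * (average_return - ref_min) / (ref_max - ref_min)

# Reference values from the D4RL/antmaze/large-play-v1 row below; the average
# return would be computed over num_episodes_average_score = 100 episodes.
ref_min_score = 0.0
ref_max_score = 353.540009

print(normalized_score(176.77, ref_min_score, ref_max_score))  # ~50.0
```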
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_Large_Diverse_GR-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, "c", 0, 0, 0, 1, "c", 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, "c", 0, 1, 0, 0, "c", 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, "c", 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, "c", 0, "c", 1, 0, "c", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/large-diverse-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_Large_Diverse_GR-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, "r", 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, "g", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
381.630005
0
100
1,000
1,000,000
605.2
The data is collected from the [`AntMaze_Large_Diverse_GR-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment. At the beginning of each episode, the goal and the agent's reset location are selected from hand-picked cells in the provided map. The success rate across all trajectories is above 80%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "gymnasium-robotics>=1.2.3", "mujoco>=3.1.1,<=3.1.6" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_Large-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/large-play-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_Large-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, "r", 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, "g", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
353.540009
0
100
1,000
1,000,000
605.2
The data is collected from the [`AntMaze_Large-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment. At the beginning of each episode, random goal and reset locations are selected for the agent. The success rate across all trajectories is above 80%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "mujoco>=3.1.1,<=3.1.6", "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_Medium_Diverse_GR-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, "c", 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, "c", 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, "c", 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, "c", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/medium-diverse-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_Medium_Diverse_GR-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, "r", 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, "g", 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
305.470001
0
100
1,000
1,000,000
605.2
The data is collected from the [`AntMaze_Medium_Diverse_GR-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment. At the beginning of each episode, the goal and the agent's reset location are selected from hand-picked cells in the provided map. The success rate across all trajectories is above 80%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "mujoco>=3.1.1,<=3.1.6", "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_Medium-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/medium-play-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_Medium-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, "r", 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, "g", 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
311.959991
0
100
1,000
1,000,000
605.2
The data is collected from the [`AntMaze_Medium-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment. At the beginning of each episode, random goal and reset locations are selected for the agent. The success rate across all trajectories is above 80%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "mujoco>=3.1.1,<=3.1.6", "gymnasium-robotics>=1.2.3" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The [Ant Maze](https://robotics.farama.org/envs/maze/ant_maze/) datasets present a navigation domain that replaces the 2D ball from <a href="../pointmaze" title="pointmaze">pointmaze</a> with the more complex 8-DoF <a href="https://gymnasium.farama.org/environments/mujoco/ant/" title="ant">Ant</a> quadruped robot. This dataset was introduced in [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#antmaze)[1] to test the stitching challenge using a morphologically complex robot that could mimic real-world robotic navigation tasks. Additionally, for this task the reward is a sparse 0-1 signal that is activated upon reaching the goal. To collect the data, a goal-reaching expert policy is first trained with the [SAC](https://stable-baselines3.readthedocs.io/en/master/modules/sac.html#stable_baselines3.sac.SAC) algorithm provided in Stable Baselines 3[2]. This goal-reaching policy is then used by the Ant agent to follow a set of waypoints generated by a planner ([QIteration](https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a))[3] to the final goal location. Because the controllers memorize the reached waypoints, the data collection policy is non-Markovian (a minimal loading sketch follows this row). ## References [1] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219. [2] Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, & Noah Dormann (2021). Stable-Baselines3: Reliable Reinforcement Learning Implementations. Journal of Machine Learning Research, 22(268), 1-8. [3] Lambert, Nathan. ‘Fundamental Iterative Methods of Reinforcement Learning’. Apr 8, 2020, https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a.
null
Ant Maze
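These Ant Maze entries are standard Minari datasets, so they can be downloaded and replayed with the `minari` package referenced by the `minari_version` column. The sketch below is hedged: the namespaced IDs (`D4RL/antmaze/...`) and the `download=True` flag assume a recent Minari release, so check the Minari documentation for the exact API of the installed version.

```python
import minari

# Assumes a Minari release that resolves namespaced IDs such as
# "D4RL/antmaze/umaze-v1" and can fetch them from the remote server.
dataset = minari.load_dataset("D4RL/antmaze/umaze-v1", download=True)
print(dataset.total_episodes, dataset.total_steps)

# Episodes expose the Dict observations, Box(8) actions and sparse rewards
# described by observation_space / action_space above.
for episode in dataset.iterate_episodes():
    print(episode.id, len(episode.actions), episode.rewards.sum())
    break

# The stored env_spec lets Minari rebuild the environment used for collection.
env = dataset.recover_environment()
```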
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_UMaze-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 700, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, 0, 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/umaze-diverse-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_UMaze-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 700, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, 0, 0, "r", 1], [1, 0, 1, 1, 1], [1, 0, 0, "g", 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
465.049988
0
100
1,430
1,000,000
605
The data is collected from the [`AntMaze_UMaze-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment, which contains a U-shaped maze. At the beginning of each episode, random goal and reset locations are selected for the agent. The success rate across all trajectories is above 90%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "mujoco>=3.1.1,<=3.1.6", "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [27], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [8], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AntMaze_UMaze-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 700, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, "g", 0, 0, 1], [1, 1, 1, 0, 1], [1, "r", 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/antmaze/umaze-v1
QIteration+SAC
[ "Alex Davey" ]
[ "alexdavey0@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "AntMaze_UMaze-v4", "entry_point": "gymnasium_robotics.envs.maze.ant_maze_v4:AntMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 700, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, 0, 0, "r", 1], [1, 0, 1, 1, 1], [1, 0, 0, "g", 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
452.190002
0
100
1,430
1,000,000
605
The data is collected from the [`AntMaze_UMaze-v4`](https://robotics.farama.org/envs/maze/ant_maze/) environment, which contains a U-shaped maze. Every episode has the same fixed goal and reset locations. The success rate across all trajectories is above 90%; failed trajectories occur because the Ant flips over and can't stand up again. Also note that when the Ant reaches the goal the episode doesn't terminate or generate a new target, leading to reward accumulation. The Ant reaches the goal by following a set of waypoints with a goal-reaching policy trained with SAC.
[ "mujoco>=3.1.1,<=3.1.6", "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [28], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandDoor-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_door:AdroitHandDoorEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/door/cloned-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
2,940.578369
-45.80706
100
4,358
1,000,000
532.3
Data obtained by training an imitation policy on the demonstrations from `expert` and `human`, then running the policy, and mixing data at a 50-50 ratio with the demonstrations. This dataset is provided by [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#adroit). The environment used to collect the dataset is [`AdroitHandDoor-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_door/).
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [28], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandDoor-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_door:AdroitHandDoorEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/door/expert-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
2,940.578369
-45.80706
100
5,000
1,000,000
543.3
Trajectories have expert data from a fine-tuned RL policy provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandDoor-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_door/).
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [28], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandDoor-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_door:AdroitHandDoorEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/door/human-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
2,940.578369
-45.80706
100
25
6,729
3.5
25 human demonstrations provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandDoor-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_door/).
[ "gymnasium-robotics>=1.2.4" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
These datasets were generated with the [`AdroitHandDoor-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_door/) environment, originally hosted in the [`hand_dapg`](https://github.com/aravindr93/hand_dapg) repository. The objective of the task is to open a door with a 24-DoF robotic hand. This domain was selected to measure the effect of narrow expert data distributions and human demonstrations on a sparse-reward, high-dimensional robotic manipulation task. There are three types of datasets: two from the original paper[1] (`human` and `expert`) and one introduced in D4RL[2] (`cloned`). ## References [1] Rajeswaran, Aravind, et al. ‘Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations’. CoRR, vol. abs/1709.10087, 2017, http://arxiv.org/abs/1709.10087. [2] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
null
Door
hdf5
{"type": "Box", "dtype": "float64", "shape": [46], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [26], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandHammer-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_hammer:AdroitHandHammerEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/hammer/cloned-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
12,635.712891
-267.074463
100
3,606
1,000,000
564.5
Data obtained by training an imitation policy on the demonstrations from `expert` and `human`, then running the policy, and mixing data at a 50-50 ratio with the demonstrations. This dataset is provided by [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#adroit). The environment used to collect the dataset is [`AdroitHandHammer-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_hammer/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [46], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [26], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandHammer-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_hammer:AdroitHandHammerEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/hammer/expert-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
12,635.712891
-267.074463
100
5,000
1,000,000
584.4
Trajectories have expert data from a fine-tuned RL policy provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandHammer-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_hammer/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [46], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [26], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandHammer-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_hammer:AdroitHandHammerEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/hammer/human-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
12,635.712891
-267.074463
100
25
11,310
6.2
25 human demonstrations provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandHammer-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_hammer/).
[ "gymnasium-robotics>=1.2.3" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
These datasets were generated with the [`AdroitHandHammer-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_hammer/) environment, originally hosted in the [`hand_dapg`](https://github.com/aravindr93/hand_dapg) repository. The objective of the task is to drive a nail into a board with a hammer tool using a 24-DoF robotic hand. This domain was selected to measure the effect of narrow expert data distributions and human demonstrations on a sparse-reward, high-dimensional robotic manipulation task. There are three types of datasets: two from the original paper[1] (`human` and `expert`) and one introduced in D4RL[2] (`cloned`). ## References [1] Rajeswaran, Aravind, et al. ‘Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations’. CoRR, vol. abs/1709.10087, 2017, http://arxiv.org/abs/1709.10087. [2] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
null
Hammer
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Dict", "subspaces": {"kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}, "slide cabinet": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "desired_goal": {"type": "Dict", "subspaces": {"kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}, "slide cabinet": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "observation": {"type": "Box", "dtype": "float64", "shape": [59], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float64", "shape": [9], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "FrankaKitchen-v1", "entry_point": "gymnasium_robotics.envs.franka_kitchen:KitchenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 280, "order_enforce": true, "disable_env_checker": false, "kwargs": {"remove_task_when_completed": false, "terminate_on_tasks_completed": false, "tasks_to_complete": ["microwave", "kettle", "light switch", "slide cabinet"]}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/kitchen/complete-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4
0
100
19
4,209
4.3
The complete dataset includes demonstrations of all 4 target subtasks being completed, in order.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Dict", "subspaces": {"bottom burner": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "desired_goal": {"type": "Dict", "subspaces": {"bottom burner": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "observation": {"type": "Box", "dtype": "float64", "shape": [59], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float64", "shape": [9], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "FrankaKitchen-v1", "entry_point": "gymnasium_robotics.envs.franka_kitchen:KitchenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 450, "order_enforce": true, "disable_env_checker": false, "kwargs": {"remove_task_when_completed": false, "terminate_on_tasks_completed": false, "tasks_to_complete": ["microwave", "kettle", "bottom burner", "light switch"]}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/kitchen/mixed-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4
0
100
621
156,560
157.5
The mixed dataset contains various subtasks being performed, but the 4 target subtasks are never completed in sequence together.
[ "gymnasium-robotics>=1.2.4" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
These datasets were generated with the [`FrankaKitchen-v1`](https://robotics.farama.org/envs/franka_kitchen/franka_kitchen/) environment, originally hosted in the [`D4RL`](https://github.com/Farama-Foundation/D4RL)[1] and [`relay-policy-learning`](https://github.com/google-research/relay-policy-learning)[2] repositories. The goal of the `FrankaKitchen` environment is to interact with the various objects in order to reach a desired state configuration. The possible interactions include moving the kettle, flipping the light switch, opening and closing the microwave and cabinet doors, and sliding the other cabinet door. The desired goal configuration for all datasets is to complete 4 subtasks: open the microwave, move the kettle, flip the light switch, and slide open the cabinet door. ## References [1] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219. [2] Gupta, A., Kumar, V., Lynch, C., Levine, S., & Hausman, K. (2019). Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. arXiv preprint arXiv:1910.11956.
null
Kitchen
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Dict", "subspaces": {"kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}, "slide cabinet": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "desired_goal": {"type": "Dict", "subspaces": {"kettle": {"type": "Box", "dtype": "float64", "shape": [7], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}, "light switch": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "microwave": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}, "slide cabinet": {"type": "Box", "dtype": "float64", "shape": [1], "low": [-Infinity], "high": [Infinity]}}}, "observation": {"type": "Box", "dtype": "float64", "shape": [59], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float64", "shape": [9], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "FrankaKitchen-v1", "entry_point": "gymnasium_robotics.envs.franka_kitchen:KitchenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 450, "order_enforce": true, "disable_env_checker": false, "kwargs": {"remove_task_when_completed": false, "terminate_on_tasks_completed": false, "tasks_to_complete": ["microwave", "kettle", "light switch", "slide cabinet"]}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/kitchen/partial-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4
0
100
621
156,560
155.1
The partial dataset includes other tasks being performed, but there are subtrajectories where the 4 target subtasks are completed in sequence.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"direction": {"type": "Discrete", "dtype": "int64", "start": 0, "n": 4}, "image": {"type": "Box", "dtype": "uint8", "shape": [7, 7, 3], "low": [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]], "high": [[[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]]]}, "mission": {"type": "Text", "max_length": 14, "min_length": 1, "charset": " ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''(),,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdeeeffghijklmnnoopqrrssttuvwxyzz{}"}}}
{"type": "Discrete", "dtype": "int64", "start": 0, "n": 7}
{"id": "MiniGrid-FourRooms-v0", "entry_point": "minigrid.envs:FourRoomsEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": null, "order_enforce": true, "disable_env_checker": false, "kwargs": {}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/minigrid/fourrooms-random-v0
RandomPolicy
[ "Omar G. Younis" ]
[ "omar.g.younis@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
null
null
null
10,174
1,000,070
453.8
This dataset was generated by sampling random actions from the action space.
[ "minigrid" ]
null
hdf5
{"type": "Dict", "subspaces": {"direction": {"type": "Discrete", "dtype": "int64", "start": 0, "n": 4}, "image": {"type": "Box", "dtype": "uint8", "shape": [7, 7, 3], "low": [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]], "high": [[[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]], [[255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255], [255, 255, 255]]]}, "mission": {"type": "Text", "max_length": 14, "min_length": 1, "charset": " ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''(),,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdeeeffghijklmnnoopqrrssttuvwxyzz{}"}}}
{"type": "Discrete", "dtype": "int64", "start": 0, "n": 7}
{"id": "MiniGrid-FourRooms-v0", "entry_point": "minigrid.envs:FourRoomsEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": null, "order_enforce": true, "disable_env_checker": false, "kwargs": {}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/minigrid/fourrooms-v0
ExpertPolicy
[ "Omar G. Younis" ]
[ "omar.younis98@gmail.com" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
null
null
null
590
10,010
14.6
This dataset was generated using an expert policy with full observability that goes straight to the goal.
[ "minigrid" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Dataset generated from the [MiniGrid-FourRooms environment](https://minigrid.farama.org/environments/minigrid/FourRoomsEnv/). The objective of the agent is to reach a goal position in a gridworld. We regenerate the datasets from [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#minigrid-fourrooms) for full reproducibility, using a random policy and an expert policy that navigates straight to the goal. ## References [1] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
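The two FourRooms entries above can be loaded through the Minari API. A minimal sketch, assuming `minari` and `minigrid` are installed and that the installed Minari release resolves the namespaced `D4RL/minigrid/...` IDs:

```python
# Minimal sketch, assuming `pip install minari minigrid` and network access
# to the Farama dataset server; the dataset ID comes from the rows above.
import minari

# Download (if not cached) and load the expert FourRooms dataset.
dataset = minari.load_dataset("D4RL/minigrid/fourrooms-v0", download=True)
print(dataset.total_episodes, dataset.total_steps)

# Each episode carries the Dict observation ("direction", "image", "mission"),
# Discrete(7) actions, rewards, and termination/truncation flags.
for episode in dataset.iterate_episodes():
    print(list(episode.observations.keys()), episode.rewards.sum())
    break
```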
null
MiniGrid
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The D4RL dataset group contains a reproduction of the datasets from the [D4RL benchmark](https://github.com/Farama-Foundation/D4RL)[1]. Because the datasets were regenerated for reproducibility, not all of them are identical to the originals, but they are generated following the same principles. We provide the code that reproduces each dataset on GitHub in the repository [Farama-Foundation/minari-dataset-generation-scripts](https://github.com/Farama-Foundation/minari-dataset-generation-scripts). ## References [1] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
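As a hedged example of browsing this group programmatically (assuming `minari` is installed and the remote Minari server is reachable), the namespaced IDs listed on this page can be filtered from the remote index:

```python
# Minimal sketch, assuming `pip install minari` and access to the remote
# Minari server that hosts the regenerated D4RL datasets.
import minari

remote = minari.list_remote_datasets()

# Keep only the dataset IDs under the D4RL namespace shown in this table.
d4rl_ids = sorted(ds_id for ds_id in remote if ds_id.startswith("D4RL/"))
print(d4rl_ids[:5])

# Download one dataset locally; the ID matches a row above.
minari.download_dataset("D4RL/pointmaze/umaze-v2")
```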
null
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [45], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [24], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandPen-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_pen:AdroitHandPenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 100, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pen/cloned-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
3,209.684326
-209.256439
100
3,736
500,000
313.6
Data obtained by training an imitation policy on the demonstrations from `expert` and `human`, then running the policy, and mixing data at a 50-50 ratio with the demonstrations. This dataset is provided by [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#adroit). The environment used to collect the dataset is [`AdroitHandPen-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_pen/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [45], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [24], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandPen-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_pen:AdroitHandPenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 100, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pen/expert-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
3,209.684326
-209.256439
100
4,958
499,206
338.3
Trajectories have expert data from a fine-tuned RL policy provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandPen-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_pen/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [45], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [24], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandPen-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_pen:AdroitHandPenEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 100, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pen/human-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
3,209.684326
-209.256439
100
25
5,000
2.9
25 human demonstrations provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandPen-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_pen/).
[ "gymnasium-robotics>=1.2.3" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
These datasets were generated with the [`AdroitHandPen-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_pen/) environment, originally hosted in the [`hand_dapg`](https://github.com/aravindr93/hand_dapg) repository. The objective of the task is to manipulate a pen to reach a certain orientation using a 24-DoF robotic hand. This domain was selected to measure the effect of narrow expert data distributions and human demonstrations on a sparse-reward, high-dimensional robotic manipulation task. There are three types of datasets: two from the original paper[1] (`human` and `expert`), and a third introduced in D4RL[2] (`cloned`). ## References [1] Rajeswaran, Aravind, et al. ‘Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations’. CoRR, vol. abs/1709.10087, 2017, http://arxiv.org/abs/1709.10087. [2] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
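Since the rows above also list `ref_min_score` and `ref_max_score`, episode returns can be normalized in the D4RL style. A sketch, assuming `minari` and `gymnasium-robotics>=1.2.3` are installed and that `minari.get_normalized_score` is available in the installed release:

```python
# Minimal sketch, assuming `pip install minari "gymnasium-robotics>=1.2.3"`.
import numpy as np
import minari

dataset = minari.load_dataset("D4RL/pen/human-v2", download=True)

# Recover the AdroitHandPen-v1 environment the data was collected with.
env = dataset.recover_environment()

# Normalize raw episode returns using the ref_min_score / ref_max_score
# metadata listed in the rows above.
returns = np.array([ep.rewards.sum() for ep in dataset.iterate_episodes()])
print(minari.get_normalized_score(dataset, returns).mean())
```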
null
Pen
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_LargeDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/large-dense-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_LargeDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 800, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, "g", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
481.534454
27.165932
100
3,360
1,000,000
239.2
The data is collected from the [`PointMaze_LargeDense-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is dense: the exponential of the negative Euclidean distance between the goal and the agent. To add variance to the collected paths, random noise is added to the actions taken by the agent.
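The dense reward mentioned above is the exponential of the negative Euclidean distance between agent and goal. A small illustrative sketch of that formula (the actual computation lives inside the gymnasium-robotics PointMaze environment):

```python
# Illustrative sketch of the dense reward described above, not the
# environment's own code.
import numpy as np

def dense_reward(achieved_goal: np.ndarray, desired_goal: np.ndarray) -> float:
    """exp(-||achieved - desired||): the reward approaches 1 near the goal."""
    return float(np.exp(-np.linalg.norm(achieved_goal - desired_goal)))

print(dense_reward(np.array([1.0, 2.0]), np.array([1.5, 2.0])))  # ~0.607
```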
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_Large-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/large-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_Large-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 800, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1, 0, "g", 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
462.26001
3.55
100
3,360
1,000,000
239.2
The data is collected from the [`PointMaze_Large-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is sparse, returning 1 only when the goal is reached and 0 otherwise. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_MediumDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/medium-dense-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_MediumDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 600, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, "g", 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
368.80896
49.240845
100
4,752
1,000,000
284
The data is collected from the [`PointMaze_MediumDense-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is dense: the exponential of the negative Euclidean distance between the goal and the agent. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_Medium-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/medium-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_Medium-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 600, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 1, 1, 0, 0, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0, "g", 1], [1, 1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
361.049988
17.66
100
4,752
1,000,000
284
The data is collected from the [`PointMaze_Medium-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is sparse, returning 1 only when the goal is reached and 0 otherwise. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The [Point Maze](https://robotics.farama.org/envs/maze/point_maze/) domain involves moving a force-actuated ball (along the X and Y axes) to a fixed target location. The observation consists of the (x, y) location and velocities. The dataset consists of one continuous trajectory of the agent navigating to random goal locations, and thus has no terminal states. However, so that the trajectory can be split into smaller episodes, it is truncated whenever the randomly selected navigation goal is reached. The datasets for each maze version include two different reward functions, sparse and dense. The data is generated by selecting goal locations at random and then using a planner ([QIteration](https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a))[2] that produces sequences of waypoints, which are followed with a PD controller. Because the controllers memorize the reached waypoints, the data collection policy is non-Markovian. These datasets were originally generated by [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#maze2d)[1] under the Maze2D domain. ## References [1] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219. [2] Lambert, Nathan. ‘Fundamental Iterative Methods of Reinforcement Learning’. Apr 8, 2020, https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a
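The waypoint-following controller described above can be pictured as a simple PD law on the ball's position and velocity. The gains and clipping below are illustrative assumptions, not the values used by the generation scripts:

```python
# Illustrative PD waypoint-following sketch; KP, KD, and the clipping range
# are assumptions for demonstration, not the generation scripts' settings.
import numpy as np

KP, KD = 10.0, 1.0

def pd_action(position, velocity, waypoint):
    """Force command pushing the ball towards the current waypoint."""
    error = np.asarray(waypoint) - np.asarray(position)
    action = KP * error - KD * np.asarray(velocity)
    return np.clip(action, -1.0, 1.0)  # PointMaze actions live in [-1, 1]

print(pd_action([0.0, 0.0], [0.2, 0.0], [1.0, 1.0]))
```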
null
Point Maze
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_OpenDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/open-dense-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_OpenDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 300, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, "g", 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
229.426712
70.732933
100
9,525
1,000,000
437.6
The data is collected from the [`PointMaze_OpenDense-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment, which contains an open arena with only perimeter walls. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is dense: the exponential of the negative Euclidean distance between the goal and the agent. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_Open-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/open-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_Open-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 300, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0, 1], [1, 0, 0, "g", 0, 0, 1], [1, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
229.860001
7.2
100
9,525
1,000,000
437.6
The data is collected from the [`PointMaze_Open-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment, which contains an open arena with only perimeter walls. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is sparse, returning 1 only when the goal is reached and 0 otherwise. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_UMazeDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, 0, 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/umaze-dense-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_UMazeDense-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 300, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, "g", 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "dense", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
223.968872
59.25227
100
13,210
1,000,000
556.2
The data is collected from the [`PointMaze_UMazeDense-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment, which contains a U-shaped maze. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is dense: the exponential of the negative Euclidean distance between the goal and the agent. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Dict", "subspaces": {"achieved_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "desired_goal": {"type": "Box", "dtype": "float64", "shape": [2], "low": [-Infinity, -Infinity], "high": [Infinity, Infinity]}, "observation": {"type": "Box", "dtype": "float64", "shape": [4], "low": [-Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity]}}}
{"type": "Box", "dtype": "float32", "shape": [2], "low": [-1.0, -1.0], "high": [1.0, 1.0]}
{"id": "PointMaze_UMaze-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 1000000, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, 0, 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": true}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/pointmaze/umaze-v2
QIteration
[ "Rodrigo Perez-Vicente" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
{"id": "PointMaze_UMaze-v3", "entry_point": "gymnasium_robotics.envs.maze.point_maze:PointMazeEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 300, "order_enforce": true, "disable_env_checker": false, "kwargs": {"maze_map": [[1, 1, 1, 1, 1], [1, "g", 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]], "reward_type": "sparse", "continuing_task": true, "reset_target": false}, "additional_wrappers": [], "vector_entry_point": null}
218.699997
13.49
100
13,210
1,000,000
556.2
The data is collected from the [`PointMaze_UMaze-v3`](https://robotics.farama.org/envs/maze/point_maze/) environment, which contains a U-shaped maze. The agent uses a PD controller to follow a path of waypoints generated with QIteration until it reaches the goal. The task is continuing, which means that when the agent reaches the goal, the environment generates a new random goal without resetting the agent's location. The reward function is sparse, returning 1 only when the goal is reached and 0 otherwise. To add variance to the collected paths, random noise is added to the actions taken by the agent.
[ "gymnasium-robotics>=1.2.4" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [30], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandRelocate-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_relocate:AdroitHandRelocateEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/relocate/cloned-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4,287.70459
9.189093
100
3,758
1,000,000
527.7
Data obtained by training an imitation policy on the demonstrations from `expert` and `human`, then running the policy, and mixing data at a 50-50 ratio with the demonstrations. This dataset is provided by [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#adroit). The environment used to collect the dataset is [`AdroitHandRelocate-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_relocate/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [30], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandRelocate-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_relocate:AdroitHandRelocateEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/relocate/expert-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4,287.70459
9.189093
100
5,000
1,000,000
552.2
Trajectories have expert data from a fine-tuned RL policy provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandRelocate-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_relocate/).
[ "gymnasium-robotics>=1.2.3" ]
null
hdf5
{"type": "Box", "dtype": "float64", "shape": [39], "low": [-Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity, -Infinity], "high": [Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity, Infinity]}
{"type": "Box", "dtype": "float32", "shape": [30], "low": [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], "high": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]}
{"id": "AdroitHandRelocate-v1", "entry_point": "gymnasium_robotics.envs.adroit_hand.adroit_relocate:AdroitHandRelocateEnv", "reward_threshold": null, "nondeterministic": false, "max_episode_steps": 200, "order_enforce": true, "disable_env_checker": false, "kwargs": {"reward_type": "dense"}, "additional_wrappers": [], "vector_entry_point": null}
D4RL/relocate/human-v2
null
[ "Rodrigo de Lazcano" ]
[ "rperezvicente@farama.org" ]
https://github.com/rodrigodelazcano/d4rl-minari-dataset-generation
0.4.3
null
4,287.70459
9.189093
100
25
9,942
5
25 human demonstrations provided in the [DAPG](https://github.com/aravindr93/hand_dapg) repository. The environment used to collect the dataset is [`AdroitHandRelocate-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_relocate/).
[ "gymnasium-robotics>=1.2.3" ]
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
These datasets were generated with the [`AdroitHandRelocate-v1`](https://robotics.farama.org/envs/adroit_hand/adroit_relocate/) environment, originally hosted in the [`hand_dapg`](https://github.com/aravindr93/hand_dapg) repository. The objective of the task is to move a ball to a specific target position with a 24-DoF robotic hand. This domain was selected to measure the effect of narrow expert data distributions and human demonstrations on a sparse-reward, high-dimensional robotic manipulation task. There are three types of datasets: two from the original paper[1] (`human` and `expert`), and a third introduced in D4RL[2] (`cloned`). ## References [1] Rajeswaran, Aravind, et al. ‘Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations’. CoRR, vol. abs/1709.10087, 2017, http://arxiv.org/abs/1709.10087. [2] Fu, Justin, et al. ‘D4RL: Datasets for Deep Data-Driven Reinforcement Learning’. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.
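A short sketch of inspecting one of the relocate datasets above and checking that its spaces match the Box(39,) observations and Box(30,) actions listed in the metadata (assuming `minari` and `gymnasium-robotics>=1.2.3` are installed):

```python
# Minimal sketch, assuming `pip install minari "gymnasium-robotics>=1.2.3"`.
import minari

dataset = minari.load_dataset("D4RL/relocate/human-v2", download=True)

# These should match the Box(39,) observation / Box(30,) action metadata above.
print(dataset.observation_space)
print(dataset.action_space)

# Peek at the first of the 25 human demonstration episodes.
episode = next(dataset.iterate_episodes())
print(episode.observations.shape, episode.actions.shape)
```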
null
Relocate