---
language:
  - en
license: apache-2.0
size_categories:
  - 10M<n<100M
task_categories:
  - reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
  - config_name: bigfish
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 31592522500
    dataset_size: 289372500000
  - config_name: bossfight
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 60368662504
    dataset_size: 289372500000
  - config_name: caveflyer
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 5279167331
    dataset_size: 28937250000
  - config_name: chaser
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2126890202
    dataset_size: 28937250000
  - config_name: climber
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2073122202
    dataset_size: 28937250000
  - config_name: coinrun
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2570909693
    dataset_size: 28937250000
  - config_name: dodgeball
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 34038260004
    dataset_size: 289372500000
  - config_name: fruitbot
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 184877608931
    dataset_size: 289372500000
  - config_name: heist
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2536872649
    dataset_size: 28937250000
  - config_name: jumper
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 32385518771
    dataset_size: 289372500000
  - config_name: leaper
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2281835608
    dataset_size: 28937250000
  - config_name: maze
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 2458751741
    dataset_size: 28937250000
  - config_name: miner
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 18949118303
    dataset_size: 289372500000
  - config_name: ninja
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 33644711360
    dataset_size: 289372500000
  - config_name: plunder
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 3420615878
    dataset_size: 28937250000
  - config_name: starpilot
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 86584615866
    dataset_size: 289372500000
configs:
  - config_name: bigfish
    data_files:
      - split: train
        path: bigfish/train-*
      - split: test
        path: bigfish/test-*
  - config_name: bossfight
    data_files:
      - split: train
        path: bossfight/train-*
      - split: test
        path: bossfight/test-*
  - config_name: caveflyer
    data_files:
      - split: train
        path: caveflyer/train-*
      - split: test
        path: caveflyer/test-*
  - config_name: chaser
    data_files:
      - split: train
        path: chaser/train-*
      - split: test
        path: chaser/test-*
  - config_name: climber
    data_files:
      - split: train
        path: climber/train-*
      - split: test
        path: climber/test-*
  - config_name: coinrun
    data_files:
      - split: train
        path: coinrun/train-*
      - split: test
        path: coinrun/test-*
  - config_name: dodgeball
    data_files:
      - split: train
        path: dodgeball/train-*
      - split: test
        path: dodgeball/test-*
  - config_name: fruitbot
    data_files:
      - split: train
        path: fruitbot/train-*
      - split: test
        path: fruitbot/test-*
  - config_name: heist
    data_files:
      - split: train
        path: heist/train-*
      - split: test
        path: heist/test-*
  - config_name: jumper
    data_files:
      - split: train
        path: jumper/train-*
      - split: test
        path: jumper/test-*
  - config_name: leaper
    data_files:
      - split: train
        path: leaper/train-*
      - split: test
        path: leaper/test-*
  - config_name: maze
    data_files:
      - split: train
        path: maze/train-*
      - split: test
        path: maze/test-*
  - config_name: miner
    data_files:
      - split: train
        path: miner/train-*
      - split: test
        path: miner/test-*
  - config_name: ninja
    data_files:
      - split: train
        path: ninja/train-*
      - split: test
        path: ninja/test-*
  - config_name: plunder
    data_files:
      - split: train
        path: plunder/train-*
      - split: test
        path: plunder/test-*
  - config_name: starpilot
    data_files:
      - split: train
        path: starpilot/train-*
      - split: test
        path: starpilot/test-*
tags:
  - procgen
  - bigfish
  - benchmark
  - openai
  - bossfight
  - caveflyer
  - chaser
  - climber
  - dodgeball
  - fruitbot
  - heist
  - jumper
  - leaper
  - maze
  - miner
  - ninja
  - plunder
  - starpilot
---

# Procgen Benchmark

This dataset contains expert trajectories generated by a PPO reinforcement learning agent trained on each of the 16 procedurally generated gym environments from the Procgen Benchmark. The environments were created with `distribution_mode=easy` and an unlimited number of levels.

Disclaimer: This is not an official repository from OpenAI.

## Dataset Usage

Regular usage (for environment `bigfish`):

```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```
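
Several configs weigh in at tens of GB (see `download_size` above). If you only need to iterate over the data once, streaming avoids the full download; a minimal sketch using the standard `datasets` streaming mode (not part of the original usage examples):

```python
from datasets import load_dataset

# Stream the bigfish train split instead of downloading the whole config to disk.
stream = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train", streaming=True)
for i, step in enumerate(stream):
    print(step["action"], step["reward"], step["done"], step["truncated"])
    if i == 4:  # just peek at the first few steps
        break
```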

Usage with PyTorch (for environment `bossfight`):

```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
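
From there, the torch-formatted dataset can be wrapped in a regular `torch.utils.data.DataLoader` for minibatch training. A minimal sketch (the batch size is an arbitrary choice, not something prescribed by the dataset):

```python
from torch.utils.data import DataLoader

# Default collation stacks each field across the batch dimension.
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
batch = next(iter(train_loader))
print(batch["observation"].shape)  # torch.Size([32, 64, 64, 3]), dtype uint8
print(batch["action"].shape)       # torch.Size([32])
```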

## Agent Performance

The PPO RL agent was trained for 50M steps on each environment and obtained the following final performance metrics.

| Environment | Steps (Train) | Steps (Test) | Return |
|-------------|---------------|--------------|--------|
| bigfish     | 900,000       | 100,000      | 29.16  |
| bossfight   | 900,000       | 100,000      | 11.35  |
| caveflyer   | 900,000       | 100,000      | 09.47  |
| chaser      | 900,000       | 100,000      | 11.46  |
| climber     | 900,000       | 100,000      | 11.17  |
| coinrun     | 900,000       | 100,000      | 09.74  |
| dodgeball   | 900,000       | 100,000      | 16.78  |
| fruitbot    | 900,000       | 100,000      | 29.87  |
| heist       | 900,000       | 100,000      | 09.98  |
| jumper      | 900,000       | 100,000      | 08.71  |
| leaper      | 900,000       | 100,000      | 07.71  |
| maze        | 900,000       | 100,000      | 09.99  |
| miner       | 900,000       | 100,000      | 12.63  |
| ninja       | 900,000       | 100,000      | 09.44  |
| plunder     | 900,000       | 100,000      | 25.98  |
| starpilot   | 900,000       | 100,000      | 55.28  |

## Dataset Structure

### Data Instances

Each data instance represents a single environment step and consists of a tuple of the form `(observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1})`, i.e., the reward and episode flags describe the transition that results from taking action `a_t` at observation `o_t`.

```python
{'action': 1,
 'done': False,
 'observation': [[[0, 166, 253],
                  [0, 174, 255],
                  [0, 170, 251],
                  [0, 191, 255],
                  [0, 191, 255],
                  [0, 221, 255],
                  [0, 243, 255],
                  [0, 248, 255],
                  [0, 243, 255],
                  [10, 239, 255],
                  [25, 255, 255],
                  [0, 241, 255],
                  [0, 235, 255],
                  [17, 240, 255],
                  [10, 243, 255],
                  [27, 253, 255],
                  [39, 255, 255],
                  [58, 255, 255],
                  [85, 255, 255],
                  [111, 255, 255],
                  [135, 255, 255],
                  [151, 255, 255],
                  [173, 255, 255],
...
                  [0, 0, 37],
                  [0, 0, 39]]],
 'reward': 0.0,
 'truncated': False}
```
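
To inspect a single step programmatically, the fields can be pulled straight out of an indexed example. A small sketch, assuming `train_dataset` was loaded as in the usage section above:

```python
import numpy as np

example = train_dataset[0]
obs = np.asarray(example["observation"], dtype=np.uint8)  # RGB image, shape (64, 64, 3)
print(obs.shape, example["action"], example["reward"], example["done"], example["truncated"])
```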

### Data Fields

- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The reward received from stepping the environment with the current action.
- `done`: Whether the new observation, obtained after stepping the environment with the current action, is the start of a new episode.
- `truncated`: Whether the new observation is the start of a new episode due to truncation; also obtained after stepping the environment with the current action. (Episode boundaries can be recovered from these flags, as shown in the sketch below.)
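
Because each split is a flat stream of steps, episode boundaries have to be reconstructed from the `done`/`truncated` flags. A minimal sketch, assuming the steps within a split are stored in their original temporal order:

```python
def iter_episodes(dataset):
    """Yield lists of consecutive steps, closing an episode whenever done or truncated is set."""
    episode = []
    for step in dataset:
        episode.append(step)
        if step["done"] or step["truncated"]:
            yield episode
            episode = []
    if episode:  # trailing steps if the split ends mid-episode
        yield episode
```

Per-episode returns can then be computed as, for example, `sum(step["reward"] for step in episode)`.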

### Data Splits

The dataset is divided into a train (90%) and test (10%) split. Each environment's dataset contains 1M steps (data points) in total.

## Dataset Creation

The dataset was created by training an RL agent with PPO for 50M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not taking the argmax). The environments were created with `distribution_mode=easy` and an unlimited number of levels.
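
For illustration, such a rollout loop might look like the sketch below. Here `policy` is a hypothetical stand-in for the trained PPO policy network (it is not part of this dataset), while the environment id and keyword arguments follow the public Procgen gym interface:

```python
import gym    # the procgen package registers its environments with gym
import torch

# num_levels=0 corresponds to an unlimited number of procedurally generated levels.
env = gym.make("procgen:procgen-bigfish-v0", distribution_mode="easy", num_levels=0)
obs = env.reset()
for _ in range(1_000):
    logits = policy(torch.as_tensor(obs).float().unsqueeze(0))  # `policy` is hypothetical
    action = torch.distributions.Categorical(logits=logits).sample().item()  # sample, don't take the argmax
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```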

## Procgen Benchmark

The Procgen Benchmark, released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity within and across environments, making it well suited for evaluating both sample efficiency and generalization. Because each environment supports distinct training and test level sets, it has become a standard research platform used by the OpenAI RL team. It aims to address the need for more diverse RL benchmarks than complex environments like Dota and StarCraft.