DQN agent for MiniGrid fill

This is a trained DQN agent for the MiniGrid fill environment, trained with the stable-baselines3 library.

Model Details

  • Environment: MiniGrid fill
  • Algorithm: DQN
  • Seed: 0
  • Framework: Stable Baselines3
  • Repository: ctrlp-zoo

Usage

from stable_baselines3 import DQN
import gymnasium as gym

# Load the trained model
model = DQN.load("best_model.zip")

# Create environment
env = gym.make("MiniGrid-fill")

# Enjoy the trained agent
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()

Training Configuration

env:
  act:
    key:
    - LEFT
    - RIGHT
    - UP
    - DOWN
    - SPACE
    movement:
    - L
    - R
    - U
    - D
    - TOGGLE
  model:
    load:
      dt: 1.0e-05
      power: 1
      speed: 10
      type: point
    mesh:
      length:
      - 0.001
      - 0.001
      n_elements:
      - 10
      - 10
    state:
      melt_temp:
        expr: melt_temp
        init: 0.99
        type: parameter
      phase:
        expr: (temp > melt_temp) | phase
        init: false
        type: derived
      temp:
        expr: temp
        init: 0.0
        type: primary
  obs:
    state:
    - phase
    - load
    - mask

Files Included

  • best_model.zip: The trained model checkpoint
  • vecnormalize.pkl: Saved VecNormalize statistics for observation/reward normalization (if applicable)

Citation

If you use this model in your research, please cite:

@misc{ctrlp-zoo,
  author = {Schmeitz, R.},
  title = {CTRL-P Zoo: Reinforcement Learning Model Repository},
  year = {2026},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/RSchmeitz/ctrlp-zoo}}
}