DQN agent for MiniGrid fill
This is a trained model of a DQN agent playing MiniGrid fill using the stable-baselines3 library.
Model Details
- Environment: MiniGrid fill
- Algorithm: DQN
- Seed: 0
- Framework: Stable Baselines3
- Repository: ctrlp-zoo
Usage
```python
from stable_baselines3 import DQN
import gymnasium as gym

# Load the trained model (download best_model.zip from this repo,
# e.g. with huggingface_sb3.load_from_hub)
model = DQN.load("best_model.zip")

# Create the environment
env = gym.make("MiniGrid-fill")

# Run the trained agent
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```
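The rollout loop above relies on the Gymnasium five-tuple step API (`obs, reward, terminated, truncated, info`) and resets whenever an episode ends. As a self-contained illustration of that pattern, here is a sketch with a hypothetical stub environment (`StubEnv` is not part of this repo and merely stands in for MiniGrid-fill):

```python
# Hypothetical stand-in environment: terminates every episode after 3 steps.
class StubEnv:
    def reset(self):
        self.t = 0
        return 0, {}  # observation, info

    def step(self, action):
        self.t += 1
        terminated = self.t >= 3
        # observation, reward, terminated, truncated, info
        return self.t, 1.0, terminated, False, {}

env = StubEnv()
obs, info = env.reset()
episode_return, episodes = 0.0, 0
for _ in range(10):
    action = 0  # a real agent would call model.predict(obs) here
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        episodes += 1
        obs, info = env.reset()

print(episodes, episode_return)  # → 3 10.0
```

Ten steps cover three full three-step episodes (plus one step of a fourth), which is why the reset call sits inside the loop rather than before it.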
Training Configuration
```yaml
env:
  act:
    key:
      - LEFT
      - RIGHT
      - UP
      - DOWN
      - SPACE
    movement:
      - L
      - R
      - U
      - D
      - TOGGLE
model:
  load:
    dt: 1.0e-05
    power: 1
    speed: 10
    type: point
  mesh:
    length:
      - 0.001
      - 0.001
    n_elements:
      - 10
      - 10
  state:
    melt_temp:
      expr: melt_temp
      init: 0.99
      type: parameter
    phase:
      expr: (temp > melt_temp) | phase
      init: false
      type: derived
    temp:
      expr: temp
      init: 0.0
      type: primary
obs:
  state:
    - phase
    - load
    - mask
```
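The `phase` entry in the configuration above is a derived state that latches: once a cell's temperature exceeds `melt_temp`, `(temp > melt_temp) | phase` keeps it `True` even after the cell cools. A pure-Python sketch of that update rule (the helper name and list-based state are illustrative assumptions, not code from ctrlp-zoo):

```python
MELT_TEMP = 0.99  # matches melt_temp's init value above

def update_phase(temp, phase):
    """Latch phase to True wherever temp has ever exceeded MELT_TEMP."""
    return [(t > MELT_TEMP) or p for t, p in zip(temp, phase)]

phase = [False, False, False]
phase = update_phase([0.5, 1.2, 0.3], phase)  # cell 1 melts
phase = update_phase([0.2, 0.1, 0.2], phase)  # everything cools; phase persists
print(phase)  # → [False, True, False]
```

This latching is what makes `phase` a useful fill-progress signal for the agent: it records which cells have ever melted, not just which are currently hot.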
Files Included
- best_model.zip: The trained model checkpoint
- vecnormalize.pkl: Vector normalization statistics (if applicable)
Citation
If you use this model in your research, please cite:
```bibtex
@misc{ctrlp-zoo,
  author       = {Schmeitz, R.},
  title        = {CTRL-P Zoo: Reinforcement Learning Model Repository},
  year         = {2026},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/RSchmeitz/ctrlp-zoo}}
}
```