Interesting Environments to try
Here we provide a list of interesting environments you can try to train your agents on:
DIAMBRA Arena
DIAMBRA Arena is a software package featuring a collection of high-quality environments for Reinforcement Learning research and experimentation. It provides a standard interface to popular arcade emulated video games, offering a Python API fully compliant with the OpenAI Gym/Gymnasium format, which makes its adoption smooth and straightforward.
It supports all major operating systems (Linux, Windows, and macOS) and can be easily installed via Python pip. It is completely free to use; the user only needs to register on the official website.
In addition, its GitHub repository provides a collection of examples covering main use cases of interest that can be run in just a few steps.
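Since DIAMBRA environments follow the standard Gymnasium interface, interacting with them uses the familiar reset/step loop. The sketch below shows that loop as a reusable helper; the DIAMBRA-specific call is commented out because it requires the DIAMBRA engine and game ROMs to be installed, and "doapp" is assumed here as one of the supported game IDs.

```python
# A minimal random-agent loop following the Gymnasium interface that
# DIAMBRA Arena environments implement. The diambra-specific lines are
# commented out: they need the DIAMBRA engine and ROMs installed.

def random_rollout(env, max_steps=100):
    """Run one random-action episode; return the cumulative reward."""
    observation, info = env.reset(seed=42)
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.action_space.sample()  # random gamepad action
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward

# With DIAMBRA installed, the same loop runs on an arcade game:
#   import diambra.arena
#   env = diambra.arena.make("doapp")  # "doapp" assumed as a game ID
#   print(random_rollout(env))
#   env.close()
```

The helper is agnostic to the underlying game, which is the point of the Gym/Gymnasium-compliant design: any agent code written against this interface works unchanged across DIAMBRA's environments.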
Main Features
All environments are episodic Reinforcement Learning tasks, with discrete actions (gamepad buttons) and observations composed of screen pixels plus additional numerical data (RAM values such as characters' health bars or characters' stage side).
They all support both single-player (1P) and two-player (2P) modes, making them the perfect resource to explore Standard RL, Competitive Multi-Agent, Competitive Human-Agent, Self-Play, Imitation Learning, and Human-in-the-Loop.
Interfaced games have been selected among the most popular fighting retro games. While sharing the same fundamental mechanics, they provide different challenges, with specific features such as different types and numbers of characters, how to perform combos, health bar recharging, etc.
DIAMBRA Arena is built to maximize compatibility with all major Reinforcement Learning libraries. It natively provides interfaces with the two most important packages: Stable Baselines 3 and Ray RLlib, while Stable Baselines is also available but deprecated. Their usage is illustrated in the official documentation and in the DIAMBRA Agents examples repository. It can easily be interfaced with any other package in a similar way.
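To give a feel for what training with Stable Baselines 3 looks like, here is a hedged sketch. The SB3 and DIAMBRA calls are commented out because they require both packages (and the DIAMBRA engine) to be installed; the learning-rate schedule helper is plain Python and illustrates a common SB3 idiom of passing a callable instead of a fixed learning rate. The policy name and hyperparameters shown are illustrative assumptions, not prescribed values.

```python
# Hedged sketch: plugging a Gymnasium-compatible environment into
# Stable Baselines 3. Training calls are commented out since they
# need stable-baselines3 and the DIAMBRA engine installed.

def linear_schedule(initial_value):
    """SB3-style schedule: maps remaining progress (1 -> 0) to a learning rate."""
    def schedule(progress_remaining):
        return progress_remaining * initial_value
    return schedule

# With the packages installed, training might look like:
#   from stable_baselines3 import PPO
#   import diambra.arena
#   env = diambra.arena.make("doapp")   # game ID assumed for illustration
#   model = PPO("MultiInputPolicy", env,
#               learning_rate=linear_schedule(2.5e-4))
#   model.learn(total_timesteps=100_000)
#   model.save("ppo_doapp")
```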
Competition Platform
DIAMBRA also provides a competition platform, fully integrated with the Hugging Face Hub, on which you can submit your trained agents and compete with other coders around the globe in epic video game tournaments!
It features a public leaderboard where users are ranked by the best score achieved by their agents in the different environments.
It also offers the possibility to unlock cool achievements depending on the performance of your agents.
Submitted agents are evaluated and their episodes are streamed on the DIAMBRA Twitch channel.
References
To start using this environment, check these resources:
MineRL
MineRL is a Python library that provides a Gym interface for interacting with the video game Minecraft, accompanied by datasets of human gameplay. Every year there are challenges with this library; check the website.
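In MineRL, both observations and actions are dictionaries: the observation contains a "pov" pixel array plus inventory data, and the action maps named controls (such as "forward" or "jump") to values. The sketch below shows a small helper for building actions from a no-op template; the environment creation is commented out because it requires minerl and a working Minecraft/Java setup, and the environment ID shown is assumed from the BASALT challenge.

```python
# Hedged sketch of MineRL's dict-based action interface. The env
# creation is commented out: it needs minerl and Minecraft installed.

def make_action(noop, **overrides):
    """Copy a no-op action dict and set the given named controls."""
    action = dict(noop)
    for key, value in overrides.items():
        if key not in action:
            raise KeyError(f"unknown control: {key}")
        action[key] = value
    return action

# With minerl installed:
#   import gym
#   import minerl
#   env = gym.make("MineRLBasaltFindCave-v0")  # env ID assumed
#   obs = env.reset()
#   action = make_action(env.action_space.noop(), forward=1, jump=1)
#   obs, reward, done, info = env.step(action)
```

Starting from the no-op template and overriding only a few controls is convenient because MineRL action dicts have many keys, and leaving unused controls at their defaults keeps scripted policies readable.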
To start using this environment, check these resources:
DonkeyCar Simulator
Donkey is a Self Driving Car Platform for hobby remote control cars. This simulator version is built on the Unity game platform. It uses Unity's internal physics and graphics and connects to a donkey Python process that uses your trained model to control the simulated Donkey (car). To start using this environment, check these resources:
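The Donkey simulator exposes a continuous action space: a `[steering, throttle]` pair, each typically bounded to a fixed range. The sketch below shows a clamping helper and, in comments, how it might plug into the gym-donkeycar loop; the import and environment ID are assumptions, and the simulator itself must be running for the real calls to work.

```python
# Hedged sketch for the DonkeyCar simulator's continuous control
# interface. The gym-donkeycar calls are commented out: they need the
# package installed and the Unity simulator running.

def clip_action(steering, throttle, limit=1.0):
    """Clamp a [steering, throttle] pair into the simulator's range."""
    def clip(value):
        return max(-limit, min(limit, value))
    return [clip(steering), clip(throttle)]

# With gym-donkeycar installed and the simulator running:
#   import gym
#   import gym_donkeycar
#   env = gym.make("donkey-generated-track-v0")  # env ID assumed
#   obs = env.reset()
#   obs, reward, done, info = env.step(clip_action(0.1, 0.3))
```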
Pretrained agents:
Starcraft II
Starcraft II is a famous real-time strategy game. DeepMind has used this game in their Deep Reinforcement Learning research with AlphaStar.
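DeepMind's Python interface to the game is PySC2, where an agent receives a timestep at every game step and returns an action. The skeleton below mirrors that pattern with a placeholder agent; the pysc2 imports and the map name are commented out and assumed, since running them requires the game client to be installed.

```python
# Hedged sketch of the PySC2 agent pattern (StarCraft II Learning
# Environment): an agent's step() receives a timestep and returns an
# action. The pysc2-specific calls are commented out: they need the
# game installed.

class NoOpAgent:
    """Minimal agent skeleton: count steps and always return no-op."""
    def __init__(self):
        self.steps = 0

    def step(self, timestep):
        self.steps += 1
        return "no_op"  # placeholder; pysc2 uses actions.FUNCTIONS.no_op()

# With pysc2 installed, the agent would run inside the real loop:
#   from pysc2.env import sc2_env
#   from pysc2.lib import actions
#   env = sc2_env.SC2Env(map_name="MoveToBeacon", ...)  # map assumed
#   timesteps = env.reset()
#   agent = NoOpAgent()
#   action = agent.step(timesteps[0])
```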
To start using this environment, check these resources:
Author
This section was written by Thomas Simonini