|
--- |
|
library_name: ml-agents |
|
tags: |
|
- SoccerTwos |
|
- unity-ml-agents |
|
- deep-reinforcement-learning |
|
- reinforcement-learning |
|
- ML-Agents-SoccerTwos |
|
--- |
|
|
|
# **poca** Agent playing **SoccerTwos** |
|
This is a trained model of a **poca** agent playing **SoccerTwos** |
|
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). |
|
|
|
## Usage (with ML-Agents) |
|
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
|
|
|
### Watch your Agent play |
|
You can watch your agent **playing directly in your browser** |
|
|
|
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity

2. Find your model_id: MattStammers/poca-SoccerTwos

3. Select your *.nn / *.onnx file

4. Click on Watch the agent play 👀
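To run the agent locally instead, you can pull the model down with the ML-Agents Hugging Face integration and resume training from it. The commands below are a sketch: the local directory, config path, and run-id are placeholders you would adapt to your own setup.

```shell
# Download the trained model from the Hugging Face Hub
# (mlagents-load-from-hf is provided by the ml-agents Hugging Face integration)
mlagents-load-from-hf \
    --repo-id="MattStammers/poca-SoccerTwos" \
    --local-dir="./downloads/poca-SoccerTwos"

# Resume training from the downloaded checkpoint; the YAML path and run-id
# below are illustrative placeholders, not files shipped with this repo
mlagents-learn ./config/poca/SoccerTwos.yaml \
    --run-id="poca-SoccerTwos" \
    --resume
```

The `--resume` flag tells `mlagents-learn` to continue from the existing checkpoint under the given run-id rather than starting training from scratch.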
|
|
|
### Video |
|
|
|
This video shows the Unity baseline agents (blue) playing against my agents (purple). The baseline agents are better, but only marginally so.