# Adversarial Environment Design with the PAIRED algorithm

This repo implements the PAIRED algorithm (short for Protagonist Antagonist
Induced Regret Environment Design), published in:

>Dennis,  M.\*, Jaques, N.\*,  Vinitsky,  E.,  Bayen,  A.,  Russell,  S.,
>Critch,  A.,  &  Levine,  S., [Emergent Complexity and Zero-Shot Transfer via
>Unsupervised Environment Design](https://bit.ly/2Hitysn), Neural Information 
>Processing Systems (NeurIPS), Virtual (2020).

For questions about this code, please contact natashajaques AT google.com.

This implementation is based on Tensorflow 2.0 and [TF-Agents](https://github.com/tensorflow/agents); 
for a PyTorch implementation, see: https://github.com/ucl-dark/paired

## Algorithm description

PAIRED leverages adversarial training to generate a curriculum of increasingly
complex environments. An adversary learns to design a gridworld environment by
placing the goal, the start location, and the location of walls or obstacles.
The protagonist agent learns to navigate the generated environments. The
adversary's goal is to minimize the performance of the protagonist.

However, minimizing the performance of the protagonist can be accomplished by
simply creating impossible worlds in which there is no path between the start
location and the goal. Therefore, the PAIRED algorithm introduces a second
agent, the antagonist. Both the protagonist and antagonist play each environment
generated by the adversary several times. We then compute the *regret* as the
difference between the antagonist's maximum score over all episodes, and the
protagonist's average score. The adversary is trained to maximize regret, which
means that it must create environments that are difficult for the protagonist to
complete, but possible for the antagonist to solve in the best case. This leads
the adversary to generate a curriculum of initially easy, then increasingly
challenging environments.
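As a concrete illustration (a minimal sketch, not the repo's code), the regret objective described above can be computed from episode returns as follows:

```python
import numpy as np

def compute_regret(antagonist_returns, protagonist_returns):
    """Regret of the protagonist in one adversary-generated environment.

    Both arguments are lists of episode returns collected by each agent
    in the same generated environment.
    """
    # Best-case antagonist score minus average protagonist score.
    return np.max(antagonist_returns) - np.mean(protagonist_returns)

# The adversary maximizes this quantity, so it prefers environments that
# are solvable (high antagonist max) yet still hard on average for the
# protagonist (low protagonist mean).
regret = compute_regret([0.0, 0.9, 0.5], [0.2, 0.4, 0.3])  # ≈ 0.6
```

Because an impossible environment yields a low score for *both* agents, it produces near-zero regret, which is why the adversary is pushed toward solvable-but-challenging designs.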

## Code structure

The main algorithm for PAIRED is implemented in `adversarial_driver.py`. This
handles invoking the adversary to build the environment and then running the
agents within it. It also handles running minimax-adversary episodes and
domain-randomization episodes.

The adversarial environment with which the agents interact is located at
`social_rl/gym_multigrid/envs/adversarial.py`. It implements functionality
enabling the adversary to place the goal, agent, and blocks into a new 
environment. Supporting the adversarial gridworld requires extending the
traditional RL loop with additional functions such as `step_adversary()` and
`reset_agent()`, which in turn required re-implementing some of the TF-Agents
environment infrastructure in `adversarial_env.py` and `adversarial_env_parallel.py`.
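To illustrate how the two-phase loop fits together, here is a toy stand-in environment. The class body and its placement logic are invented for this sketch; only the method names `step_adversary()` and `reset_agent()` mirror those in `adversarial.py`:

```python
import random

class ToyAdversarialEnv:
    """Toy stand-in for the adversarial gridworld interface.

    The adversary places one grid cell per step; a real implementation
    would interpret the first placements as the goal and agent start,
    and the rest as walls.
    """

    def __init__(self, size=5, budget=10):
        self.size = size        # grid is size x size
        self.budget = budget    # number of adversary placement steps
        self.placements = []

    def reset(self):
        # Begin the adversary's design phase with a blank grid.
        self.placements = []
        return [0] * (self.size * self.size)

    def step_adversary(self, location):
        # The adversary chooses one cell to modify per step.
        self.placements.append(location)
        done = len(self.placements) >= self.budget
        return self.placements, done

    def reset_agent(self):
        # Switch from the design phase to the agent navigation phase,
        # keeping the designed grid in place.
        return self.placements

# Phase 1: the adversary designs the environment.
env = ToyAdversarialEnv()
obs = env.reset()
done = False
while not done:
    _, done = env.step_adversary(random.randrange(env.size * env.size))

# Phase 2: the protagonist/antagonist episodes would start here.
agent_obs = env.reset_agent()
```

The key design point is that environment construction is itself an RL episode, which is what the extra driver and environment infrastructure exist to support.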

Running `train_adversarial_env.py` will initialize several instances of 
`AgentTrainPackage`, which abstracts functionality common to training any agent, 
whether it is an environment-generating adversary, or a typical RL protagonist 
agent. The agent code is based on the TF-Agents implementation of PPO.

## Training

To train PAIRED, use:
```
python -m train_adversarial_env --debug --root_dir=/tmp/paired/
```

To train an unconstrained minimax adversary (with no antagonist), use:
```
python -m train_adversarial_env --debug --root_dir=/tmp/minimax/ \
--unconstrained_adversary
```

To train domain randomization, use:
```
python -m train_adversarial_env --debug --root_dir=/tmp/dr/ \
--domain_randomization
```

To train PAIRED with a population of agents and adversaries, use:
```
python -m train_adversarial_env --debug --root_dir=/tmp/paired_pbt/ \
--combined_population --protagonist_population_size=3 \
--adversary_population_size=3
```

To train minimax with a population of agents and adversaries, use:
```
python -m train_adversarial_env --debug --root_dir=/tmp/minimax_pbt/ \
--unconstrained_adversary --protagonist_population_size=3 \
--adversary_population_size=3
```

## Zero-shot generalization

Adversarial training can be used to prepare the agent for unknown challenges at
test time. Therefore, we test the zero-shot transfer performance of agents
trained with PAIRED or baseline techniques on highly novel environments such as
labyrinths and mazes. To run the transfer experiment code, use:

```
python -m run_transfer_experiments --hparam_csv='best_hyperparameters.csv'
```

## Manual debugging

There is a text-based UI script that allows you to manually control the
adversary and agent to test the environment's functionality. It can be run with:

```
python -m manual_control_adversary --env_name 'MultiGrid-Adversarial-v0'
```
