{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "36aba985",
   "metadata": {},
   "source": [
    "# Overcooked Tutorial\n",
    "This Notebook will demonstrate a couple of common use cases of the Overcooked-AI library, including loading and evaluating agents and visualizing trajectories.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ca4bad07",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Avg rew: 0.00 (std: 0.00, se: 0.00); avg len: 400.00; : 100%|██████████| 10/10 [00:00<00:00, 19.41it/s]\n",
      "Avg rew: 180.00 (std: 0.00, se: 0.00); avg len: 400.00; : 100%|██████████| 1/1 [00:00<00:00,  8.01it/s]\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "02c455cfd0a34a6f8eaab9e8627f7697",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "interactive(children=(IntSlider(value=0, description='timestep', max=399), Output()), _dom_classes=('widget-in…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from overcooked_ai_py.agents.agent import AgentPair, RandomAgent\n",
    "from overcooked_ai_py.agents.benchmarking import AgentEvaluator\n",
    "from overcooked_ai_py.visualization.state_visualizer import StateVisualizer\n",
    "\n",
    "# Here we create an evaluator for the cramped_room layout\n",
    "layout = \"cramped_room\"\n",
    "ae = AgentEvaluator.from_layout_name(mdp_params={\"layout_name\": layout, \"old_dynamics\": True}, \n",
    "                                     env_params={\"horizon\": 400})\n",
    "\n",
    "ap = AgentPair(RandomAgent(), RandomAgent())\n",
    "\n",
    "trajs = ae.evaluate_agent_pair(ap, 10)\n",
    "\n",
    "trajs2 = ae.evaluate_human_model_pair(1)\n",
    "\n",
    "\n",
    "StateVisualizer().display_rendered_trajectory(trajs2, ipython_display=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "927f70d1",
   "metadata": {},
   "source": [
    "# Deprecated stuff which requires BC and RL training (see README for details)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aca6b8ba",
   "metadata": {},
   "source": [
    "# Getting started: Training your agent"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a8f96f8",
   "metadata": {},
   "source": [
    "You can train BC agents using files under the `human_aware_rl/imitation` directory. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f493c88",
   "metadata": {},
   "outputs": [],
   "source": [
    "layout = \"cramped_room\" # any compatible layouts \n",
    "from human_aware_rl.imitation.behavior_cloning_tf2 import get_bc_params, train_bc_model\n",
    "from human_aware_rl.static import CLEAN_2019_HUMAN_DATA_TRAIN\n",
    "\n",
    "params_to_override = {\n",
    "    # this is the layouts where the training will happen\n",
    "    \"layouts\": [layout], \n",
    "    # this is the layout that the agents will be evaluated on\n",
    "    # Most of the time they should be the same, but because of refactoring some old layouts have more than one name and they need to be adjusted accordingly\n",
    "    \"layout_name\": layout, \n",
    "    \"data_path\": CLEAN_2019_HUMAN_DATA_TRAIN,\n",
    "    \"epochs\": 10,\n",
    "    \"old_dynamics\": True,\n",
    "}\n",
    "\n",
    "bc_params = get_bc_params(**params_to_override)\n",
    "train_bc_model(\"tutorial_notebook_results/BC\", bc_params, verbose = True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc068ebc",
   "metadata": {},
   "source": [
    "# 1): Loading trained agents\n",
    "This section will show you how to load a pretrained agents. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b2a9df6",
   "metadata": {},
   "source": [
    "## 1.1) Loading BC agent\n",
    "The BC (behavior cloning) agents are trained separately without using Ray. We showed how to train a BC agent in the previous section, and to load a trained agent, we can use the load_bc_model function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f94ab2a8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(<keras.engine.functional.Functional at 0x7f73ac2c2110>,\n",
       " {'eager': True,\n",
       "  'use_lstm': False,\n",
       "  'cell_size': 256,\n",
       "  'data_params': {'layouts': ['cramped_room'],\n",
       "   'check_trajectories': False,\n",
       "   'featurize_states': True,\n",
       "   'data_path': '/nas/ucb/micah/overcooked_ai/src/human_aware_rl/static/human_data/cleaned/2019_hh_trials_train.pickle'},\n",
       "  'mdp_params': {'layout_name': 'cramped_room', 'old_dynamics': True},\n",
       "  'env_params': {'horizon': 400,\n",
       "   'mlam_params': {'start_orientations': False,\n",
       "    'wait_allowed': False,\n",
       "    'counter_goals': [],\n",
       "    'counter_drop': [],\n",
       "    'counter_pickup': [],\n",
       "    'same_motion_goals': True}},\n",
       "  'mdp_fn_params': {},\n",
       "  'mlp_params': {'num_layers': 2, 'net_arch': [64, 64]},\n",
       "  'training_params': {'epochs': 10,\n",
       "   'validation_split': 0.15,\n",
       "   'batch_size': 64,\n",
       "   'learning_rate': 0.001,\n",
       "   'use_class_weights': False},\n",
       "  'evaluation_params': {'ep_length': 400, 'num_games': 1, 'display': False},\n",
       "  'action_shape': (6,),\n",
       "  'observation_shape': (96,)})"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from human_aware_rl.imitation.behavior_cloning_tf2 import load_bc_model\n",
    "#this is the same path you used when training the BC agent\n",
    "bc_model_path = \"tutorial_notebook_results/BC\"\n",
    "bc_model, bc_params = load_bc_model(bc_model_path)\n",
    "bc_model, bc_params"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20526ac6",
   "metadata": {},
   "source": [
    "Now that we have loaded the model, since we used Tensorflow to train the agent, we need to wrap it so it is compatible with other agents. We can do it by converting it to a Rllib-compatible policy class, and wraps it as a RllibAgent. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "68c37a25",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<human_aware_rl.rllib.rllib.RlLibAgent at 0x7f73ac5b4040>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from human_aware_rl.imitation.behavior_cloning_tf2 import _get_base_ae, BehaviorCloningPolicy\n",
    "bc_policy = BehaviorCloningPolicy.from_model(bc_model, bc_params, stochastic=True)\n",
    "# We need the featurization function that is specifically defined for BC agent\n",
    "# The easiest way to do it is to create a base environment from the configuration and extract the featurization function\n",
    "# The environment is also needed to do evaluation\n",
    "\n",
    "base_ae = _get_base_ae(bc_params)\n",
    "base_env = base_ae.env\n",
    "\n",
    "from human_aware_rl.rllib.rllib import RlLibAgent\n",
    "bc_agent0 = RlLibAgent(bc_policy, 0, base_env.featurize_state_mdp)\n",
    "bc_agent0\n",
    "\n",
    "bc_agent1 = RlLibAgent(bc_policy, 1, base_env.featurize_state_mdp)\n",
    "bc_agent1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "351c5687",
   "metadata": {},
   "source": [
    "Now we have a BC agent that is ready for evaluation "
   ]
  },
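   {
    "cell_type": "markdown",
    "id": "a3f1c9e2",
    "metadata": {},
    "source": [
     "As a quick sanity check (a sketch, assuming the `bc_agent0` and `base_env` objects from the cells above), we can ask the wrapped agent for an action in the initial state:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "a3f1c9e3",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Reset the environment and the agent, then query a single action\n",
     "base_env.reset()\n",
     "bc_agent0.reset()\n",
     "action, action_info = bc_agent0.action(base_env.state)\n",
     "action, action_info"
    ]
   },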
  {
   "cell_type": "markdown",
   "id": "73698e65",
   "metadata": {},
   "source": [
    "## 1.3) Loading & Creating Agent Pair\n",
    "\n",
    "To do evaluation, we need a pair of agents, or an AgentPair. We can directly load a pair of agents for evaluation, which we can do with the load_agent_pair function, or we can create an AgentPair manually from 2 separate RllibAgent instance. To directly load an AgentPair from a trainer:"
   ]
  },
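   {
    "cell_type": "markdown",
    "id": "b7d2e4f1",
    "metadata": {},
    "source": [
     "As a sketch of the direct-loading route (assuming you have an rllib training run saved at a hypothetical path such as `tutorial_notebook_results/PPO`):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "b7d2e4f2",
    "metadata": {},
    "outputs": [],
    "source": [
     "from human_aware_rl.rllib.rllib import load_agent_pair\n",
     "# Hypothetical checkpoint path; replace with the path to your own trained run\n",
     "ppo_save_path = \"tutorial_notebook_results/PPO\"\n",
     "ap_ppo = load_agent_pair(ppo_save_path)\n",
     "ap_ppo"
    ]
   },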
  {
   "cell_type": "markdown",
   "id": "c8bd83bc",
   "metadata": {},
   "source": [
    "To create an AgentPair manually, we can just pair together any 2 RllibAgent object. For example, we have created a **ppo_agent** and a **bc_agent**. To pair them up, we can just construct an AgentPair with them as arguments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "f0acdeee",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<overcooked_ai_py.agents.agent.AgentPair at 0x7f743e8c9330>"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from human_aware_rl.rllib.rllib import AgentPair\n",
    "ap_bc = AgentPair(bc_agent0, bc_agent1)\n",
    "ap_bc"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dc6cafa",
   "metadata": {},
   "source": [
    "# 2): Evaluating AgentPair\n",
    "\n",
    "To evaluate an AgentPair, we need to first create an AgentEvaluator. You can create an AgentEvaluator in various ways, but the simpliest way to do so is from the layout_name. \n",
    "\n",
    "You can modify the settings of the layout by changing the **mdp_params** argument, but most of the time you should only need to include \"layout_name\", which is the layout you want to evaluate the agent pair on, and \"old_dynamics\", which determines whether the envrionment conforms to the design in the Neurips2019 paper, or whether the cooking should start automatically when all ingredients are present.  \n",
    "\n",
    "For the **env_params**, you can change how many steps are there in one evaluation. The default is 400, which means the game runs for 400 timesteps. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "95787dc6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<overcooked_ai_py.agents.benchmarking.AgentEvaluator at 0x7f743e62efe0>"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from overcooked_ai_py.agents.benchmarking import AgentEvaluator\n",
    "# Here we create an evaluator for the cramped_room layout\n",
    "layout = \"cramped_room\"\n",
    "ae = AgentEvaluator.from_layout_name(mdp_params={\"layout_name\": layout, \"old_dynamics\": True}, \n",
    "                                     env_params={\"horizon\": 400})\n",
    "ae"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4471aeda",
   "metadata": {},
   "source": [
    "To run evaluations, we can use the evaluate_agent_pair method associated with the AgentEvaluator:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "93676beb",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Avg rew: 58.00 (std: 24.41, se: 7.72); avg len: 400.00; : 100%|██████████| 10/10 [06:57<00:00, 41.80s/it]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'ep_actions': array([[((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 1), (0, 0)), ...,\n",
       "         ((1, 0), (0, 0)), ((0, -1), (0, 0)), ((0, 0), (0, 0))],\n",
       "        [((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0)), ...,\n",
       "         ((0, -1), (0, 0)), ('interact', (0, 0)), ((1, 0), (0, 0))],\n",
       "        [((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, -1), (0, 0)), ...,\n",
       "         ((-1, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0))],\n",
       "        ...,\n",
       "        [((-1, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0)), ...,\n",
       "         ((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0))],\n",
       "        [((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0)), ...,\n",
       "         ((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0))],\n",
       "        [((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0)), ...,\n",
       "         ((0, 0), (0, 0)), ((0, 0), (0, 0)), ((0, 0), (0, 0))]],\n",
       "       dtype=object),\n",
       " 'metadatas': {},\n",
       " 'ep_infos': array([[{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[0.19644046, 0.02003402, 0.04843785, 0.03200788, 0.69757086,\n",
       "                 0.00550887]], dtype=float32)}, {'action_probs': array([[0.01111467, 0.00485337, 0.00452655, 0.06342083, 0.90035826,\n",
       "                 0.01572641]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.27116105, 0.00321556, 0.00400256, 0.00206093, 0.7167076 ,\n",
       "                 0.00285232]], dtype=float32)}, {'action_probs': array([[0.00569393, 0.00288775, 0.00376487, 0.03016732, 0.93764234,\n",
       "                 0.0198438 ]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.01727636, 0.00584591, 0.00119517, 0.00494091, 0.7857944 ,\n",
       "                 0.18494728]], dtype=float32)}, {'action_probs': array([[0.00411911, 0.00513693, 0.00228141, 0.00554584, 0.9706544 ,\n",
       "                 0.01226222]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[5, 78, 117, 137, 182, 199, 211, 241, 247, 265, 305], [6, 28, 47, 311, 329]], 'useful_onion_pickup': [[5, 78, 117, 137, 182, 199, 211, 241, 247, 305], [6, 28, 47, 311]], 'onion_drop': [[], []], 'useful_onion_drop': [[], []], 'potting_onion': [[12, 90, 131, 179, 191, 202, 234, 245, 260, 297, 311], [24, 35, 73, 322]], 'dish_pickup': [[24, 334], [112, 194, 271]], 'useful_dish_pickup': [[24, 334], [112, 194, 271]], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[57], [162, 225, 281]], 'soup_delivery': [[66], [180, 264, 298]], 'soup_drop': [[], []], 'optimal_onion_potting': [[12, 90, 131, 179, 191, 202, 234, 245, 260, 297, 311], [24, 35, 73, 322]], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[12, 90, 131, 179, 191, 202, 234, 245, 260, 297, 311], [24, 35, 73, 322]], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([20, 60]), 'cumulative_shaped_rewards_by_agent': array([44, 36])}, 'ep_sparse_r': 80, 'ep_shaped_r': 80, 'ep_sparse_r_by_agent': array([20, 60]), 'ep_shaped_r_by_agent': array([44, 36]), 'ep_length': 400}}],\n",
       "        [{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[0.43064207, 0.00118634, 0.04243125, 0.01245082, 0.46324122,\n",
       "                 0.05004829]], dtype=float32)}, {'action_probs': array([[0.07216967, 0.01146591, 0.09417965, 0.00753717, 0.7981282 ,\n",
       "                 0.01651939]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.035366  , 0.00080447, 0.00464188, 0.00211906, 0.30970713,\n",
       "                 0.64736146]], dtype=float32)}, {'action_probs': array([[0.08782221, 0.01013853, 0.09689118, 0.00953836, 0.7771887 ,\n",
       "                 0.018421  ]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [3, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.00814912, 0.00546082, 0.30809313, 0.02386194, 0.62155545,\n",
       "                 0.03287959]], dtype=float32)}, {'action_probs': array([[0.07929008, 0.02225238, 0.06561285, 0.00576123, 0.8015312 ,\n",
       "                 0.0255523 ]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[10, 30, 38, 73, 107, 128, 176, 388], []], 'useful_onion_pickup': [[10, 30, 38, 73, 107, 128, 176, 388], []], 'onion_drop': [[], []], 'useful_onion_drop': [[], []], 'potting_onion': [[22, 36, 48, 100, 111, 166, 351, 398], []], 'dish_pickup': [[], [63, 96, 395]], 'useful_dish_pickup': [[], [63, 395]], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[], [78, 323]], 'soup_delivery': [[], [87, 347]], 'soup_drop': [[], []], 'optimal_onion_potting': [[22, 36, 48, 100, 111, 166, 351, 398], []], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[22, 36, 48, 100, 111, 166, 351, 398], []], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([ 0, 40]), 'cumulative_shaped_rewards_by_agent': array([24, 16])}, 'ep_sparse_r': 40, 'ep_shaped_r': 40, 'ep_sparse_r_by_agent': array([ 0, 40]), 'ep_shaped_r_by_agent': array([24, 16]), 'ep_length': 400}}],\n",
       "        [{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[1.9425798e-02, 2.5771598e-03, 3.3497889e-04, 1.6918801e-01,\n",
       "                 8.0651075e-01, 1.9632669e-03]], dtype=float32)}, {'action_probs': array([[9.9211924e-02, 1.4156876e-02, 1.3569271e-02, 1.5255912e-03,\n",
       "                 8.7080568e-01, 7.3057896e-04]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[1.9425798e-02, 2.5771598e-03, 3.3497889e-04, 1.6918801e-01,\n",
       "                 8.0651075e-01, 1.9632669e-03]], dtype=float32)}, {'action_probs': array([[9.9211924e-02, 1.4156876e-02, 1.3569271e-02, 1.5255912e-03,\n",
       "                 8.7080568e-01, 7.3057896e-04]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[1.9425798e-02, 2.5771598e-03, 3.3497889e-04, 1.6918801e-01,\n",
       "                 8.0651075e-01, 1.9632669e-03]], dtype=float32)}, {'action_probs': array([[9.9211924e-02, 1.4156876e-02, 1.3569271e-02, 1.5255912e-03,\n",
       "                 8.7080568e-01, 7.3057896e-04]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[5, 12, 42, 67, 81, 101, 132, 143, 214, 228, 257, 299, 339, 365, 379], [7, 151, 299]], 'useful_onion_pickup': [[5, 12, 42, 67, 81, 101, 132, 143, 214, 228, 257, 299, 365, 379], [7, 151, 299]], 'onion_drop': [[], []], 'useful_onion_drop': [[], []], 'potting_onion': [[9, 28, 63, 77, 94, 124, 137, 154, 222, 238, 294, 322, 359, 372], [20, 196, 307]], 'dish_pickup': [[161], [26, 93, 224, 342, 368]], 'useful_dish_pickup': [[161], [26, 93, 224, 342, 368]], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[185], [49, 115, 264, 350]], 'soup_delivery': [[], [62, 126, 280, 358]], 'soup_drop': [[190], []], 'optimal_onion_potting': [[9, 28, 63, 77, 94, 124, 137, 154, 222, 238, 294, 322, 359, 372], [20, 196, 307]], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[9, 28, 63, 77, 94, 124, 137, 154, 222, 238, 294, 322, 359, 372], [20, 196, 307]], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([ 0, 80]), 'cumulative_shaped_rewards_by_agent': array([50, 44])}, 'ep_sparse_r': 80, 'ep_shaped_r': 94, 'ep_sparse_r_by_agent': array([ 0, 80]), 'ep_shaped_r_by_agent': array([50, 44]), 'ep_length': 400}}],\n",
       "        ...,\n",
       "        [{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.15208364, 0.02959962, 0.03100314, 0.09762987, 0.6794261 ,\n",
       "                 0.01025762]], dtype=float32)}, {'action_probs': array([[0.01076664, 0.00366979, 0.01067814, 0.01863392, 0.94092256,\n",
       "                 0.01532892]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.15208364, 0.02959962, 0.03100314, 0.09762987, 0.6794261 ,\n",
       "                 0.01025762]], dtype=float32)}, {'action_probs': array([[0.01076664, 0.00366979, 0.01067814, 0.01863392, 0.94092256,\n",
       "                 0.01532892]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[0.02323768, 0.01111274, 0.00306184, 0.0065601 , 0.95041126,\n",
       "                 0.00561639]], dtype=float32)}, {'action_probs': array([[0.01019176, 0.00143141, 0.0016537 , 0.24706495, 0.72344553,\n",
       "                 0.01621261]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.02323768, 0.01111274, 0.00306184, 0.0065601 , 0.95041126,\n",
       "                 0.00561639]], dtype=float32)}, {'action_probs': array([[0.01019176, 0.00143141, 0.0016537 , 0.24706495, 0.72344553,\n",
       "                 0.01621261]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.02323768, 0.01111274, 0.00306184, 0.0065601 , 0.95041126,\n",
       "                 0.00561639]], dtype=float32)}, {'action_probs': array([[0.01019176, 0.00143141, 0.0016537 , 0.24706495, 0.72344553,\n",
       "                 0.01621261]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[], [59, 84, 94, 105, 146, 160, 190, 211, 224, 252, 266, 308, 337, 364, 392]], 'useful_onion_pickup': [[], [59, 84, 94, 105, 146, 160, 190, 211, 224, 252, 266, 308, 337, 364, 392]], 'onion_drop': [[], []], 'useful_onion_drop': [[], []], 'potting_onion': [[], [75, 90, 98, 135, 152, 165, 207, 216, 227, 261, 272, 326, 361, 371]], 'dish_pickup': [[104, 159, 237, 284, 368], []], 'useful_dish_pickup': [[104, 159, 237, 284, 368], []], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[122, 189, 250, 348], [286]], 'soup_delivery': [[201, 257], [296]], 'soup_drop': [[130, 362], []], 'optimal_onion_potting': [[], [75, 90, 98, 135, 152, 165, 207, 216, 227, 261, 272, 326, 361, 371]], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[], [75, 90, 98, 135, 152, 165, 207, 216, 227, 261, 272, 326, 361, 371]], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([40, 20]), 'cumulative_shaped_rewards_by_agent': array([35, 42])}, 'ep_sparse_r': 60, 'ep_shaped_r': 77, 'ep_sparse_r_by_agent': array([40, 20]), 'ep_shaped_r_by_agent': array([35, 42]), 'ep_length': 400}}],\n",
       "        [{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[0.01547191, 0.00609912, 0.01313627, 0.04950164, 0.81513834,\n",
       "                 0.10065278]], dtype=float32)}, {'action_probs': array([[0.10248725, 0.00658033, 0.1326387 , 0.01534621, 0.74053526,\n",
       "                 0.00241225]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.01547191, 0.00609912, 0.01313627, 0.04950164, 0.81513834,\n",
       "                 0.10065278]], dtype=float32)}, {'action_probs': array([[0.10248725, 0.00658033, 0.1326387 , 0.01534621, 0.74053526,\n",
       "                 0.00241225]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.01547191, 0.00609912, 0.01313627, 0.04950164, 0.81513834,\n",
       "                 0.10065278]], dtype=float32)}, {'action_probs': array([[0.10248725, 0.00658033, 0.1326387 , 0.01534621, 0.74053526,\n",
       "                 0.00241225]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[34, 53, 71, 99, 140, 177, 210, 226, 269, 280, 292, 303, 337], [11]], 'useful_onion_pickup': [[34, 53, 71, 99, 140, 177, 210, 226, 269, 280, 292, 303, 337], [11]], 'onion_drop': [[], []], 'useful_onion_drop': [[], []], 'potting_onion': [[41, 64, 94, 105, 171, 207, 220, 233, 276, 288, 298, 328, 355], [29]], 'dish_pickup': [[], [42, 104, 210, 300, 343]], 'useful_dish_pickup': [[], [42, 104, 210, 300, 343]], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[], [87, 192, 268, 318]], 'soup_delivery': [[], [90, 196, 289, 329]], 'soup_drop': [[], []], 'optimal_onion_potting': [[41, 64, 94, 105, 171, 207, 220, 233, 276, 288, 298, 328, 355], [29]], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[41, 64, 94, 105, 171, 207, 220, 233, 276, 288, 298, 328, 355], [29]], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([ 0, 80]), 'cumulative_shaped_rewards_by_agent': array([39, 38])}, 'ep_sparse_r': 80, 'ep_shaped_r': 77, 'ep_sparse_r_by_agent': array([ 0, 80]), 'ep_shaped_r_by_agent': array([39, 38]), 'ep_length': 400}}],\n",
       "        [{'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.05297125, 0.00369196, 0.00564479, 0.01859784, 0.9077534 ,\n",
       "                 0.01134084]], dtype=float32)}, {'action_probs': array([[0.0172119 , 0.00448586, 0.01028795, 0.02042776, 0.9353154 ,\n",
       "                 0.01227117]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         ...,\n",
       "         {'agent_infos': [{'action_probs': array([[0.02207099, 0.10103653, 0.00525927, 0.08316644, 0.66694003,\n",
       "                 0.12152667]], dtype=float32)}, {'action_probs': array([[0.0107964 , 0.00767973, 0.00561914, 0.05614307, 0.90523726,\n",
       "                 0.01452435]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.02207099, 0.10103653, 0.00525927, 0.08316644, 0.66694003,\n",
       "                 0.12152667]], dtype=float32)}, {'action_probs': array([[0.0107964 , 0.00767973, 0.00561914, 0.05614307, 0.90523726,\n",
       "                 0.01452435]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None},\n",
       "         {'agent_infos': [{'action_probs': array([[0.02207099, 0.10103653, 0.00525927, 0.08316644, 0.66694003,\n",
       "                 0.12152667]], dtype=float32)}, {'action_probs': array([[0.0107964 , 0.00767973, 0.00561914, 0.05614307, 0.90523726,\n",
       "                 0.01452435]], dtype=float32)}], 'sparse_r_by_agent': [0, 0], 'shaped_r_by_agent': [0, 0], 'phi_s': None, 'phi_s_prime': None, 'episode': {'ep_game_stats': {'tomato_pickup': [[], []], 'useful_tomato_pickup': [[], []], 'tomato_drop': [[], []], 'useful_tomato_drop': [[], []], 'potting_tomato': [[], []], 'onion_pickup': [[27, 229, 306, 315, 317, 377, 383], [30, 58, 75, 157, 175, 211]], 'useful_onion_pickup': [[27], [30, 58, 75, 157, 175]], 'onion_drop': [[301, 308, 316, 370, 382, 392], []], 'useful_onion_drop': [[301, 308, 316, 370, 382, 392], []], 'potting_onion': [[33], [49, 73, 136, 161, 183]], 'dish_pickup': [[52], []], 'useful_dish_pickup': [[52], []], 'dish_drop': [[], []], 'useful_dish_drop': [[], []], 'soup_pickup': [[110], []], 'soup_delivery': [[129], []], 'soup_drop': [[], []], 'optimal_onion_potting': [[33], [49, 73, 136, 161, 183]], 'optimal_tomato_potting': [[], []], 'viable_onion_potting': [[33], [49, 73, 136, 161, 183]], 'viable_tomato_potting': [[], []], 'catastrophic_onion_potting': [[], []], 'catastrophic_tomato_potting': [[], []], 'useless_onion_potting': [[], []], 'useless_tomato_potting': [[], []], 'cumulative_sparse_rewards_by_agent': array([20,  0]), 'cumulative_shaped_rewards_by_agent': array([11, 15])}, 'ep_sparse_r': 20, 'ep_shaped_r': 26, 'ep_sparse_r_by_agent': array([20,  0]), 'ep_shaped_r_by_agent': array([11, 15]), 'ep_length': 400}}]],\n",
       "       dtype=object),\n",
       " 'ep_dones': array([[False, False, False, ..., False, False, True],\n",
       "        [False, False, False, ..., False, False, True],\n",
       "        [False, False, False, ..., False, False, True],\n",
       "        ...,\n",
       "        [False, False, False, ..., False, False, True],\n",
       "        [False, False, False, ..., False, False, True],\n",
       "        [False, False, False, ..., False, False, True]], dtype=object),\n",
       " 'ep_returns': array([80, 40, 80, 40, 80, 80, 20, 60, 80, 20]),\n",
       " 'env_params': array([{'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1},\n",
       "        {'start_state_fn': None, 'horizon': 400, 'info_level': 0, 'num_mdp': 1}],\n",
       "       dtype=object),\n",
       " 'ep_lengths': array([400, 400, 400, 400, 400, 400, 400, 400, 400, 400]),\n",
       " 'mdp_params': array([{'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]},\n",
       "        {'layout_name': 'cramped_room', 'terrain': [['X', 'X', 'P', 'X', 'X'], ['O', ' ', ' ', ' ', 'O'], ['X', ' ', ' ', ' ', 'X'], ['X', 'D', 'X', 'S', 'X']], 'start_player_positions': [(1, 2), (3, 1)], 'start_bonus_orders': [], 'rew_shaping_params': {'PLACEMENT_IN_POT_REW': 3, 'DISH_PICKUP_REWARD': 3, 'SOUP_PICKUP_REWARD': 5, 'DISH_DISP_DISTANCE_REW': 0, 'POT_DISTANCE_REW': 0, 'SOUP_DISTANCE_REW': 0}, 'start_all_orders': [{'ingredients': ['onion', 'onion', 'onion']}]}],\n",
       "       dtype=object),\n",
       " 'ep_rewards': array([[0, 0, 0, ..., 0, 0, 0],\n",
       "        [0, 0, 0, ..., 0, 0, 0],\n",
       "        [0, 0, 0, ..., 0, 0, 0],\n",
       "        ...,\n",
       "        [0, 0, 0, ..., 0, 0, 0],\n",
       "        [0, 0, 0, ..., 0, 0, 0],\n",
       "        [0, 0, 0, ..., 0, 0, 0]], dtype=object),\n",
       " 'ep_states': array([[<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e953c70>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e5e1a50>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e91ad10>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac488730>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac4c1b70>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e5e1390>],\n",
       "        [<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e6d11e0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e668370>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac43c5b0>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac595120>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac268580>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac3a27a0>],\n",
       "        [<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e918760>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e7dab90>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f743e919ff0>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac3b77c0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac3b4940>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac16a350>],\n",
       "        ...,\n",
       "        [<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c3666e0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c1b9720>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c365930>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c748490>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c1dd0f0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73647caf80>],\n",
       "        [<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73647ca230>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73647b70a0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c374550>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac1d0a30>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c186080>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f7364693e80>],\n",
       "        [<overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c1874c0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c1defb0>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac1d1120>,\n",
       "         ...,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f738c3c9d50>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f7364610a90>,\n",
       "         <overcooked_ai_py.mdp.overcooked_mdp.OvercookedState object at 0x7f73ac4f2c80>]],\n",
       "       dtype=object)}"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# ap: The AgentPair we created earlier\n",
    "# 10: how many times we should run the evaluation since the policy is stochastic\n",
    "trajs = ae.evaluate_agent_pair(ap_bc, 10)\n",
    "trajs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "332b6cca",
   "metadata": {},
   "source": [
    "The result returned by the AgentEvaluator contains detailed information about the evaluation runs, including actions taken by each agent at each timestep. Usually you don't need to directly interact with them, but the most direct performance measures can be retrieved with result[\"ep_returns\"], which returns the average sparse reward of each evaluation run"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "9fed7df5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trajs[\"ep_returns\"]"
   ]
  },
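  {
   "cell_type": "markdown",
   "id": "7c1d9a2e",
   "metadata": {},
   "source": [
    "Since ep_returns is a plain NumPy array with one total sparse reward per run, summary statistics follow directly. A minimal sketch (the values below are copied from the example output above, so your numbers will differ):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# One total sparse reward per evaluation run, e.g. trajs[\"ep_returns\"]\n",
    "returns = np.array([80, 40, 80, 40, 80, 80, 20, 60, 80, 20])\n",
    "mean = returns.mean()\n",
    "se = returns.std() / np.sqrt(len(returns))  # standard error across runs\n",
    "print(f\"mean: {mean:.1f}, se: {se:.1f}\")\n",
    "```"
   ]
  },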
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "48875a68",
   "metadata": {},
   "outputs": [],
   "source": [
    "result = ae.evaluate_agent_pair(ap_sp, 1, 400)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4898bae8",
   "metadata": {},
   "source": [
    "# 3): Visualization\n",
    "\n",
    "We can also visualize the trajectories of agents. One way is to run the web demo with the agents you choose, and the specific instructions can be found in the [overcooked_demo](https://github.com/HumanCompatibleAI/overcooked_ai/tree/master/src/overcooked_demo) module, which requires some setup. Another simpler way is to use the StateVisualizer, which uses the information returned by the AgentEvaluator to create a simple dynamic visualization. You can checkout [this Colab Notebook](https://colab.research.google.com/drive/1AAVP2P-QQhbx6WTOnIG54NXLXFbO7y6n#scrollTo=6Xlu54MkiXCR) that let you play with fixed agents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "464d0c84",
   "metadata": {},
   "outputs": [],
   "source": [
    "from overcooked_ai_py.visualization.state_visualizer import StateVisualizer\n",
    "StateVisualizer().display_rendered_trajectory(trajs, ipython_display=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49b62122",
   "metadata": {},
   "source": [
    "This should spawn a window where you can see what the agents are doing at each timestep. You can drag the slider to go forward and backward in time."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
