{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Understanding Asynchronous Advantage Actor-Critic (A3C): A Complete Guide\n",
    "\n",
    "### Important Note:\n",
    "\n",
    "Since it is based on multiple workers, A3C is a bit more complex than A2C. It's better to implement it in a separate Python file. The Python file can be found with the name `a3c_training.py` in the same directory as this notebook. This notebook will only cover the theoretical aspects of A3C, while the implementation will be done in the `a3c_training.py` file.\n",
    "\n",
    "# Table of Contents\n",
    "\n",
    "- [Introduction](#introduction)\n",
    "- [What is A3C?](#what-is-a3c)\n",
    "  - [The Power of Asynchrony](#the-power-of-asynchrony)\n",
    "  - [Advantages over A2C/Vanilla PG](#advantages-over-a2cvanilla-pg)\n",
    "- [Where and How A3C is Used](#where-and-how-a3c-is-used)\n",
    "- [Mathematical Foundation of A3C](#mathematical-foundation-of-a3c)\n",
    "  - [Actor-Critic Recap](#actor-critic-recap)\n",
    "  - [N-Step Returns](#n-step-returns)\n",
    "  - [Asynchronous Gradient Updates](#asynchronous-gradient-updates)\n",
    "  - [Shared vs. Local Networks](#shared-vs-local-networks)\n",
    "  - [Loss Function (per Worker)](#loss-function-per-worker)\n",
    "- [Step-by-Step Explanation of A3C (Worker Perspective)](#step-by-step-explanation-of-a3c-worker-perspective)\n",
    "- [Key Components of A3C](#key-components-of-a3c)\n",
    "  - [Global Actor Network](#global-actor-network)\n",
    "  - [Global Critic Network](#global-critic-network)\n",
    "  - [Worker Processes](#worker-processes)\n",
    "  - [Local Actor-Critic Networks](#local-actor-critic-networks)\n",
    "  - [Environment Instances](#environment-instances)\n",
    "  - [N-Step Returns Calculation](#n-step-returns-calculation)\n",
    "  - [Asynchronous Gradient Application](#asynchronous-gradient-application)\n",
    "  - [Shared Optimizer (or Equivalent)](#shared-optimizer-or-equivalent)\n",
    "  - [Hyperparameters](#hyperparameters)\n",
    "- [Practical Example: Custom Grid World](#practical-example-custom-grid-world)\n",
    "- [Setting up the Environment](#setting-up-the-environment)\n",
    "- [Creating the Custom Environment](#creating-the-custom-environment)\n",
    "- [Implementing the A3C Algorithm](#implementing-the-a3c-algorithm)\n",
    "  - [Defining the Shared Actor-Critic Model](#defining-the-shared-actor-critic-model)\n",
    "  - [N-Step Return and Advantage Calculation](#n-step-return-and-advantage-calculation)\n",
    "  - [Defining the Worker Process](#defining-the-worker-process)\n",
    "  - [Setting up Shared Optimizer (Simplified Approach)](#setting-up-shared-optimizer-simplified-approach)\n",
    "- [Running the A3C Algorithm](#running-the-a3c-algorithm)\n",
    "  - [Hyperparameter Setup](#hyperparameter-setup)\n",
    "  - [Initialization (Global Network, Shared Counter, Workers)](#initialization-global-network-shared-counter-workers)\n",
    "  - [Training Execution (Starting & Joining Workers)](#training-execution-starting--joining-workers)\n",
    "- [Visualizing the Learning Process (Challenges with Async)](#visualizing-the-learning-process-challenges-with-async)\n",
    "- [Analyzing the Learned Policy (Optional Visualization)](#analyzing-the-learned-policy-optional-visualization)\n",
    "- [Common Challenges and Solutions in A3C](#common-challenges-and-solutions-in-a3c)\n",
    "- [Conclusion](#conclusion)\n",
    "\n",
    "## Introduction\n",
    "\n",
    "Asynchronous Advantage Actor-Critic (A3C) was a landmark algorithm in deep reinforcement learning, demonstrating that parallel training with asynchronous updates could lead to stable and efficient learning across a variety of tasks. It builds directly on the actor-critic framework but introduces a novel way to achieve data decorrelation and training stability without relying on large replay buffers.\n",
    "\n",
    "## What is A3C?\n",
    "\n",
    "A3C is an **on-policy, asynchronous, actor-critic** algorithm. Its core idea is to run multiple 'worker' agents in parallel, each interacting with its own instance of the environment. These workers independently compute gradients based on their local experiences and asynchronously update a shared *global* set of actor and critic network parameters.\n",
    "\n",
    "Key elements:\n",
    "1.  **Parallel Workers:** Multiple independent processes (workers) are created.\n",
    "2.  **Shared Global Network:** A single instance of the actor and critic networks exists globally, accessible by all workers.\n",
    "3.  **Local Networks:** Each worker maintains its own local copy of the actor and critic networks.\n",
    "4.  **Asynchronous Interaction & Updates:**\n",
    "    *   Each worker periodically syncs its local network parameters *from* the global network.\n",
    "    *   It then interacts with its environment instance for a fixed number of steps (or until episode end), collecting experience (states, actions, rewards, dones).\n",
    "    *   Using this local experience, it calculates parameter updates (gradients) for the actor and critic.\n",
    "    *   Crucially, it applies these gradients directly *to the shared global network parameters* asynchronously, without waiting for other workers.\n",
    "\n",
    "### The Power of Asynchrony\n",
    "The asynchronous nature is key. Because each worker interacts with its own environment instance and updates the global network at different times based on slightly different versions of the policy (due to concurrent updates from other workers), the overall stream of gradient updates applied to the global network becomes less correlated. This inherent data diversity and slightly 'stale' updates were shown to act as a regularizer, stabilizing learning and often eliminating the need for experience replay (unlike DQN).\n",
    "\n",
    "### Advantages over A2C/Vanilla PG\n",
    "- **Data Decorrelation:** Asynchronous updates from diverse experiences help break correlations found in single-trajectory updates (like REINFORCE) or synchronous batch updates (like A2C without parallel environments).\n",
    "- **Efficiency (CPU Usage):** A3C was particularly effective on multi-core CPUs, as each worker could run on a separate core, leading to faster wall-clock training times compared to single-threaded methods.\n",
    "- **No Replay Buffer:** Reduces memory requirements compared to DQN.\n",
    "- **Stability:** Generally more stable than vanilla policy gradients due to the actor-critic structure and decorrelation.\n",
    "\n",
    "However, A2C (the synchronous version) often matches or exceeds A3C performance with modern hardware (especially GPUs) and simpler implementation, and PPO provides further stability improvements.\n",
    "\n",
    "## Where and How A3C is Used\n",
    "\n",
    "A3C was highly influential and demonstrated strong performance on:\n",
    "1.  **Atari Games:** Achieved state-of-the-art results at the time of its publication.\n",
    "2.  **Continuous Control (MuJoCo):** Showcased effectiveness in complex physics-based simulations.\n",
    "3.  **Labyrinth Exploration (VizDoom):** Demonstrated capability in tasks requiring memory and exploration.\n",
    "\n",
    "While still a viable algorithm, its use has somewhat decreased in favor of A2C (for simplicity and GPU utilization) and PPO (for stability and performance). It remains relevant when:\n",
    "- CPU-based parallel training is the primary mode.\n",
    "- Simulating the original asynchronous paradigm is desired for research or education.\n",
    "- Avoiding replay buffers is a priority.\n",
    "\n",
    "## Mathematical Foundation of A3C\n",
    "\n",
    "A3C shares the core actor-critic mathematical basis with A2C but differs in the update mechanism.\n",
    "\n",
    "### Actor-Critic Recap\n",
    "The goal is still to optimize a policy $\\pi(a|s; \\theta)$ (actor) using guidance from a value function $V(s; \\phi)$ (critic). The policy gradient is typically weighted by an advantage estimate $\\hat{A}_t$.\n",
    "\n",
    "### N-Step Returns\n",
    "A3C commonly uses **n-step returns** to calculate targets for both the critic update and the advantage estimation. For a trajectory segment starting at time $t$ and running for $n$ steps (or until termination):\n",
    "\n",
    "- **n-step Return (Value Target $R_t$):**\n",
    "$$ R_t = \\sum_{k=0}^{n-1} \\gamma^k r_{t+k+1} + \\gamma^n V_\\phi(s_{t+n}) \\quad \\text{(if not terminated before t+n)} $$\n",
    "$$ R_t = \\sum_{k=0}^{T-t-1} \\gamma^k r_{t+k+1} \\quad \\text{(if terminated at step T < t+n)} $$\n",
    "Where $V_\\phi(s_{t+n})$ is the bootstrapped value estimate from the critic for the state reached after $n$ steps.\n",
    "\n",
    "- **n-step Advantage Estimate $\\hat{A}_t$:**\n",
    "$$ \\hat{A}_t = R_t - V_\\phi(s_t) $$\n",
    "This advantage estimate balances the bias of one-step TD errors and the variance of full Monte Carlo returns.\n",
    "\n",
    "### Asynchronous Gradient Updates\n",
    "Each worker $i$ computes gradients $\\nabla_\\theta J_i(\\theta)$ and $\\nabla_\\phi L^{VF}_i(\\phi)$ based on its locally collected $n$-step trajectory segment. It then uses these gradients to asynchronously update the *global* parameters $\\theta_{global}$ and $\\phi_{global}$.\n",
    "\n",
    "### Shared vs. Local Networks\n",
    "- **Global Network:** Holds the master parameters $(\\theta_{global}, \\phi_{global})$.\n",
    "- **Local Network:** Each worker $i$ has local parameters $(\\theta'_i, \\phi'_i)$. Periodically, these are updated: $(\\theta'_i, \\phi'_i) \\leftarrow (\\theta_{global}, \\phi_{global})$. Rollouts are performed using the local network.\n",
    "\n",
    "### Loss Function (per Worker)\n",
    "Each worker computes a loss based on its $n$-step segment, similar to A2C:\n",
    "$$ L_i(\\theta'_i, \\phi'_i) = \\underbrace{-\\log \\pi(a_t | s_t; \\theta'_i) (R_t - V(s_t; \\phi'_i))}_{\\text{Policy Loss (negative objective)}} + \\underbrace{c_v (R_t - V(s_t; \\phi'_i))^2}_{\\text{Value Loss}} \\underbrace{- c_e H(\\pi(\\cdot|s_t; \\theta'_i))}_{\\text{Entropy Bonus}} $$\n",
    "Note: $(R_t - V(s_t; \\phi'_i))$ is the advantage estimate $\\hat{A}_t$, treated as constant for the policy loss term. The gradients $\\nabla_{\\theta_{global}} L_i$ and $\\nabla_{\\phi_{global}} L_i$ are then computed (implicitly, by applying worker gradients computed wrt local params to the global params) and applied asynchronously.\n",
    "\n",
    "## Step-by-Step Explanation of A3C (Worker Perspective)\n",
    "\n",
    "1.  **Initialize Worker:** Create local environment instance, local actor $\\pi_{\theta'}$ and critic $V_{\\phi'}$. Initialize step counter $t=0$. Sync local parameters from global: $(\\theta', \\phi') \\leftarrow (\\theta_{global}, \\phi_{global})$. Reset local environment $s_0$.\n",
    "2.  **Loop (until global termination signal):**\n",
    "    a.  Reset local gradient accumulators: $d\\theta = 0, d\\phi = 0$.\n",
    "    b.  Sync local parameters from global: $(\\theta', \\phi') \\leftarrow (\\theta_{global}, \\phi_{global})$.\n",
    "    c.  Initialize trajectory storage for n-step rollout.\n",
    "    d.  **Rollout Phase**: For $k = 0$ to $n-1$ (or until episode ends):\n",
    "        i.   Using state $s_k$ and local policy $\\pi_{\theta'}$, sample action $a_k$, get $\\log \\pi(a_k|s_k; \\theta')$.\n",
    "        ii.  Execute $a_k$ in the local environment, get reward $r_{k+1}$ and next state $s_{k+1}$.\n",
    "        iii. Store $(s_k, a_k, r_{k+1}, \\log \\pi(a_k|s_k; \\theta'))$.\n",
    "        iv.  Update $s_k \\leftarrow s_{k+1}$.\n",
    "        v.   If episode terminated, store termination flag and break rollout loop.\n",
    "    e.  **Calculate N-Step Returns & Advantages**: \n",
    "        i.   Estimate bootstrap value $R = V(s_{k+1}; \\phi')$ if not terminated, else $R=0$.\n",
    "        ii.  Iterate *backwards* from $k$ down to $0$:\n",
    "            - $R \\leftarrow r_{j+1} + \\gamma R$.\n",
    "            - Calculate advantage $\\hat{A}_j = R - V(s_j; \\phi')$.\n",
    "            - Accumulate policy gradient: $d\\theta \\leftarrow d\\theta + \\nabla_{\\theta'} \\log \\pi(a_j|s_j; \\theta') \\hat{A}_j + c_e \\nabla_{\\theta'} H(\\pi(\\cdot|s_j; \\theta'))$.\n",
    "            - Accumulate value gradient: $d\\phi \\leftarrow d\\phi + \\nabla_{\\phi'} (R - V(s_j; \\phi'))^2$.\n",
    "    f.  **Update Global Network**: Apply accumulated gradients $d\\theta, d\\phi$ to the global parameters $(\\theta_{global}, \\phi_{global})$ using a shared optimizer.\n",
    "    g.  If episode terminated during rollout, reset local environment $s_0$.\n",
    "\n",
    "## Key Components of A3C\n",
    "\n",
    "### Global Actor Network\n",
    "- Holds the shared parameters $\\theta_{global}$ of the policy.\n",
    "\n",
    "### Global Critic Network\n",
    "- Holds the shared parameters $\\phi_{global}$ of the value function.\n",
    "\n",
    "### Worker Processes\n",
    "- Independent processes executing the learning loop in parallel.\n",
    "\n",
    "### Local Actor-Critic Networks\n",
    "- Each worker's copy of the networks, synced periodically from global.\n",
    "\n",
    "### Environment Instances\n",
    "- Each worker interacts with its own separate copy of the environment.\n",
    "\n",
    "### N-Step Returns Calculation\n",
    "- Workers compute targets based on $n$ steps of interaction plus a bootstrapped value estimate.\n",
    "\n",
    "### Asynchronous Gradient Application\n",
    "- Workers compute gradients locally and apply them to the global network without locking or waiting.\n",
    "\n",
    "### Shared Optimizer (or Equivalent)\n",
    "- An optimization algorithm (e.g., SharedAdam, RMSprop) that handles concurrent updates to the shared global parameters.\n",
    "\n",
    "### Hyperparameters\n",
    "- Number of workers.\n",
    "- N-step rollout length ($n$).\n",
    "- Learning rates.\n",
    "- Entropy/Value coefficients ($c_e, c_v$).\n",
    "- Discount factor $\\gamma$.\n",
    "- Optimizer parameters.\n",
    "\n",
    "## Practical Example: Custom Grid World\n",
    "\n",
    "We will implement A3C for the Grid World. Note that the benefits of A3C's parallelism are less pronounced on such a simple, fast environment compared to complex tasks like Atari. This example focuses on illustrating the *asynchronous structure*.\n",
    "\n",
    "**Environment Description:** (Same as before)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Setting up the Environment\n",
    "\n",
    "Import libraries, including `multiprocessing`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using device: cpu\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<torch._C.Generator at 0x1f8d37b39b0>"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Import necessary libraries\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import random\n",
    "import math\n",
    "from collections import namedtuple\n",
    "from itertools import count\n",
    "from typing import List, Tuple, Dict, Optional, Callable\n",
    "import time\n",
    "import queue\n",
    "\n",
    "# Import PyTorch and multiprocessing\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import torch.nn.functional as F\n",
    "from torch.distributions import Categorical\n",
    "import torch.multiprocessing as mp # Use torch multiprocessing\n",
    "\n",
    "# Set up device (Workers likely run on CPU, global model might be GPU but requires care)\n",
    "# For simplicity, let's assume CPU for this example to avoid GPU sharing complexities.\n",
    "device = torch.device(\"cpu\") \n",
    "print(f\"Using device: {device}\")\n",
    "\n",
    "# Set random seeds for reproducibility in the main process\n",
    "# Note: workers will need their own seeding if full reproducibility is needed\n",
    "seed = 42\n",
    "random.seed(seed)\n",
    "np.random.seed(seed)\n",
    "torch.manual_seed(seed)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Creating the Custom Environment\n",
    "\n",
    "Reusing the `GridEnvironment` class. No changes needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Custom Grid World Environment (Identical)\n",
    "class GridEnvironment:\n",
    "    def __init__(self, rows: int = 10, cols: int = 10) -> None:\n",
    "        self.rows: int = rows\n",
    "        self.cols: int = cols\n",
    "        self.start_state: Tuple[int, int] = (0, 0)\n",
    "        self.goal_state: Tuple[int, int] = (rows - 1, cols - 1)\n",
    "        self.state: Tuple[int, int] = self.start_state\n",
    "        self.state_dim: int = 2\n",
    "        self.action_dim: int = 4\n",
    "        self.action_map: Dict[int, Tuple[int, int]] = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}\n",
    "\n",
    "    def reset(self) -> torch.Tensor:\n",
    "        self.state = self.start_state\n",
    "        return self._get_state_tensor(self.state)\n",
    "\n",
    "    def _get_state_tensor(self, state_tuple: Tuple[int, int]) -> torch.Tensor:\n",
    "        norm_row = state_tuple[0] / (self.rows - 1) if self.rows > 1 else 0.0\n",
    "        norm_col = state_tuple[1] / (self.cols - 1) if self.cols > 1 else 0.0\n",
    "        normalized_state: List[float] = [norm_row, norm_col]\n",
    "        # Ensure tensor is created on the correct device (CPU for workers)\n",
    "        return torch.tensor(normalized_state, dtype=torch.float32, device=torch.device(\"cpu\"))\n",
    "\n",
    "    def step(self, action: int) -> Tuple[torch.Tensor, float, bool]:\n",
    "        if self.state == self.goal_state:\n",
    "            return self._get_state_tensor(self.state), 0.0, True\n",
    "        dr, dc = self.action_map[action]\n",
    "        current_row, current_col = self.state\n",
    "        next_row, next_col = current_row + dr, current_col + dc\n",
    "        reward: float = -0.1\n",
    "        if not (0 <= next_row < self.rows and 0 <= next_col < self.cols):\n",
    "            next_row, next_col = current_row, current_col\n",
    "            reward = -1.0\n",
    "        self.state = (next_row, next_col)\n",
    "        next_state_tensor: torch.Tensor = self._get_state_tensor(self.state)\n",
    "        done: bool = (self.state == self.goal_state)\n",
    "        if done:\n",
    "            reward = 10.0\n",
    "        return next_state_tensor, reward, done\n",
    "\n",
    "    def get_action_space_size(self) -> int:\n",
    "        return self.action_dim\n",
    "\n",
    "    def get_state_dimension(self) -> int:\n",
    "        return self.state_dim\n",
    "\n",
    "# Instantiate once to get dims for network setup\n",
    "temp_env = GridEnvironment(rows=10, cols=10)\n",
    "n_actions_custom = temp_env.get_action_space_size()\n",
    "n_observations_custom = temp_env.get_state_dimension()\n",
    "del temp_env # No longer needed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Implementing the A3C Algorithm\n",
    "\n",
    "We need a combined Actor-Critic network structure, shared memory handling, and the worker process logic.\n",
    "\n",
    "### Defining the Shared Actor-Critic Model\n",
    "\n",
    "A single network that outputs both action probabilities (logits) and state values. This is common in A3C/A2C to share initial layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ActorCriticNetwork(nn.Module):\n",
    "    \"\"\" Combined Actor-Critic network for A3C \"\"\"\n",
    "    def __init__(self, n_observations: int, n_actions: int):\n",
    "        super(ActorCriticNetwork, self).__init__()\n",
    "        # Shared layers\n",
    "        self.layer1 = nn.Linear(n_observations, 128)\n",
    "        self.layer2 = nn.Linear(128, 128)\n",
    "        \n",
    "        # Actor head (outputs action logits)\n",
    "        self.actor_head = nn.Linear(128, n_actions)\n",
    "        \n",
    "        # Critic head (outputs state value)\n",
    "        self.critic_head = nn.Linear(128, 1)\n",
    "\n",
    "    def forward(self, x: torch.Tensor) -> Tuple[Categorical, torch.Tensor]:\n",
    "        \"\"\"\n",
    "        Forward pass, returns action distribution and state value.\n",
    "        \n",
    "        Parameters:\n",
    "        - x (torch.Tensor): Input state tensor.\n",
    "        \n",
    "        Returns:\n",
    "        - Tuple[Categorical, torch.Tensor]: \n",
    "            - Action distribution (Categorical).\n",
    "            - State value estimate (Tensor).\n",
    "        \"\"\"\n",
    "        if not isinstance(x, torch.Tensor):\n",
    "             x = torch.tensor(x, dtype=torch.float32, device=x.device) # Use input tensor's device\n",
    "        elif x.dtype != torch.float32:\n",
    "             x = x.to(dtype=torch.float32)\n",
    "        if x.dim() == 1:\n",
    "            x = x.unsqueeze(0)\n",
    "\n",
    "        # Shared layers\n",
    "        x = F.relu(self.layer1(x))\n",
    "        shared_features = F.relu(self.layer2(x))\n",
    "        \n",
    "        # Actor head\n",
    "        action_logits = self.actor_head(shared_features)\n",
    "        action_dist = Categorical(logits=action_logits)\n",
    "        \n",
    "        # Critic head\n",
    "        state_value = self.critic_head(shared_features)\n",
    "        \n",
    "        return action_dist, state_value"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### N-Step Return and Advantage Calculation\n",
    "\n",
    "Function to compute n-step returns and advantages, used by each worker."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_n_step_returns_advantages(rewards: List[float], \n",
    "                                      values: List[torch.Tensor], \n",
    "                                      bootstrap_value: torch.Tensor, \n",
    "                                      dones: List[float], \n",
    "                                      gamma: float) -> Tuple[torch.Tensor, torch.Tensor]:\n",
    "    \"\"\"\n",
    "    Computes n-step returns (targets for critic) and advantages for actor.\n",
    "    \n",
    "    Parameters:\n",
    "    - rewards (List[float]): List of rewards from the n-step rollout.\n",
    "    - values (List[torch.Tensor]): List of value estimates V(s_t) for the rollout steps.\n",
    "    - bootstrap_value (torch.Tensor): Value estimate V(s_{t+n}) for bootstrapping.\n",
    "    - dones (List[float]): List of done flags (0.0 or 1.0).\n",
    "    - gamma (float): Discount factor.\n",
    "\n",
    "    Returns:\n",
    "    - Tuple[torch.Tensor, torch.Tensor]:\n",
    "        - n_step_returns: Target values for the critic.\n",
    "        - advantages: Advantage estimates for the actor.\n",
    "    \"\"\"\n",
    "    n_steps = len(rewards)\n",
    "    # Ensure values is a tensor; detach as these are inputs for calculation\n",
    "    values_tensor = torch.cat(values).squeeze().detach()\n",
    "    # Detach bootstrap value as well\n",
    "    R = bootstrap_value.detach()\n",
    "    \n",
    "    # Initialize tensors on CPU (as workers run on CPU)\n",
    "    returns = torch.zeros(n_steps, dtype=torch.float32, device=torch.device(\"cpu\"))\n",
    "    advantages = torch.zeros(n_steps, dtype=torch.float32, device=torch.device(\"cpu\"))\n",
    "\n",
    "    # Calculate backwards from the last step\n",
    "    for t in reversed(range(n_steps)):\n",
    "        # R becomes the n-step return target for state s_t\n",
    "        R = rewards[t] + gamma * R * (1.0 - dones[t]) # If done, bootstrap value is 0\n",
    "        returns[t] = R\n",
    "        \n",
    "        # Advantage A_t = n_step_return(R_t) - V(s_t)\n",
    "        advantages[t] = R - values_tensor[t]\n",
    "\n",
    "    # Standardization of advantages is often done, but omitted here for simplicity\n",
    "    # following the original A3C paper's typical setup.\n",
    "    # Can be added: advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)\n",
    "    \n",
    "    return returns, advantages"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Defining the Worker Process\n",
    "\n",
    "This is the core logic for each asynchronous worker."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def worker(worker_id: int,\n",
    "           global_model: ActorCriticNetwork,\n",
    "           global_optimizer: optim.Optimizer, \n",
    "           global_counter: mp.Value, # Shared counter for total steps\n",
    "           max_global_steps: int,\n",
    "           env_rows: int,\n",
    "           env_cols: int,\n",
    "           n_steps: int,\n",
    "           gamma: float,\n",
    "           value_loss_coeff: float,\n",
    "           entropy_coeff: float,\n",
    "           result_queue: mp.Queue) -> None: # Queue for sending results\n",
    "    \"\"\"\n",
    "    Function executed by each A3C worker process.\n",
    "    \"\"\"\n",
    "    try:\n",
    "        print(f\"Worker {worker_id} started.\")\n",
    "        # Ensure worker uses CPU and has own seed if needed\n",
    "        worker_device = torch.device(\"cpu\")\n",
    "        torch.manual_seed(seed + worker_id) # Worker-specific seed\n",
    "        \n",
    "        # Create local environment and model\n",
    "        local_env = GridEnvironment(rows=env_rows, cols=env_cols)\n",
    "        local_model = ActorCriticNetwork(n_observations_custom, n_actions_custom).to(worker_device)\n",
    "        state = local_env.reset()\n",
    "\n",
    "        episode_reward = 0.0\n",
    "        episode_length = 0\n",
    "        episode_counter = 0\n",
    "\n",
    "        while global_counter.value < max_global_steps:\n",
    "            # Sync local model with global model\n",
    "            local_model.load_state_dict(global_model.state_dict())\n",
    "            \n",
    "            # Storage for n-step rollout\n",
    "            log_probs_list: List[torch.Tensor] = []\n",
    "            values_list: List[torch.Tensor] = []\n",
    "            rewards_list: List[float] = []\n",
    "            dones_list: List[float] = []\n",
    "            entropies_list: List[torch.Tensor] = []\n",
    "\n",
    "            # --- Rollout Phase (n steps or until done) ---\n",
    "            for step_idx in range(n_steps):\n",
    "                episode_length += 1\n",
    "                # Get action and value from local model\n",
    "                state_tensor = state.to(worker_device)\n",
    "                action_dist, value_pred = local_model(state_tensor)\n",
    "                \n",
    "                action = action_dist.sample()\n",
    "                log_prob = action_dist.log_prob(action)\n",
    "                entropy = action_dist.entropy()\n",
    "                \n",
    "                # Interact with environment\n",
    "                next_state, reward, done = local_env.step(action.item())\n",
    "                \n",
    "                # Store transition data\n",
    "                log_probs_list.append(log_prob)\n",
    "                values_list.append(value_pred)\n",
    "                rewards_list.append(reward)\n",
    "                dones_list.append(float(done))\n",
    "                entropies_list.append(entropy)\n",
    "                \n",
    "                episode_reward += reward\n",
    "                state = next_state\n",
    "\n",
    "                # Increment global step counter\n",
    "                with global_counter.get_lock():\n",
    "                    global_counter.value += 1\n",
    "                    # Send periodic progress updates (every 100 steps)\n",
    "                    if global_counter.value % 100 == 0:\n",
    "                        result_queue.put((\"progress\", worker_id, global_counter.value))\n",
    "                \n",
    "                if done or episode_length >= MAX_STEPS_PER_EPISODE_A3C:\n",
    "                    # Log episode stats\n",
    "                    episode_counter += 1\n",
    "                    result_queue.put((episode_reward, episode_length))\n",
    "                    # Reset environment and local stats\n",
    "                    state = local_env.reset()\n",
    "                    episode_reward = 0.0\n",
    "                    episode_length = 0\n",
    "                    break # End rollout if episode finished\n",
    "                    \n",
    "                if global_counter.value >= max_global_steps:\n",
    "                    break\n",
    "\n",
    "            # Skip update if we didn't collect any steps (shouldn't happen)\n",
    "            if len(rewards_list) == 0:\n",
    "                continue\n",
    "\n",
    "            # --- Calculate N-Step Returns and Advantages --- \n",
    "            with torch.no_grad(): # Bootstrap value from the *local* critic\n",
    "                if done:\n",
    "                    bootstrap_value = torch.tensor([0.0], dtype=torch.float32, device=worker_device)\n",
    "                else:\n",
    "                    _, bootstrap_value = local_model(state.to(worker_device))\n",
    "                    \n",
    "            returns_tensor, advantages_tensor = compute_n_step_returns_advantages(\n",
    "                rewards_list, values_list, bootstrap_value, dones_list, gamma\n",
    "            )\n",
    "\n",
    "            # --- Calculate Losses --- \n",
    "            # Stack collected tensors\n",
    "            log_probs_tensor = torch.stack(log_probs_list).squeeze()\n",
    "            values_tensor = torch.stack(values_list).squeeze()\n",
    "            entropies_tensor = torch.stack(entropies_list).squeeze()\n",
    "\n",
    "            # Calculate combined loss using stored tensors\n",
    "            policy_loss = -(log_probs_tensor * advantages_tensor.detach()).mean()\n",
    "            value_loss = F.mse_loss(values_tensor, returns_tensor.detach())\n",
    "            entropy_loss = -entropies_tensor.mean() # Minimize negative entropy\n",
    "            \n",
    "            total_loss = policy_loss + value_loss_coeff * value_loss + entropy_coeff * entropy_loss\n",
    "\n",
    "            # --- Compute Gradients and Update Global Network --- \n",
    "            # Lock for optimizer update to avoid race conditions\n",
    "            global_model.zero_grad()  # Zero gradients of global model\n",
    "            total_loss.backward()    # Calculate gradients on local model\n",
    "            \n",
    "            # Transfer gradients from local model to global model\n",
    "            for local_param, global_param in zip(local_model.parameters(), global_model.parameters()):\n",
    "                if local_param.grad is not None:\n",
    "                    global_param.grad = local_param.grad.clone()\n",
    "            \n",
    "            # Apply gradients to the global model via the optimizer\n",
    "            global_optimizer.step()\n",
    "            \n",
    "            # Short sleep to reduce CPU contention\n",
    "            time.sleep(0.001)\n",
    "\n",
    "        print(f\"Worker {worker_id} finished.\")\n",
    "        result_queue.put(None) # Signal completion\n",
    "    \n",
    "    except Exception as e:\n",
    "        print(f\"Worker {worker_id} encountered error: {str(e)}\")\n",
    "        import traceback\n",
    "        traceback.print_exc()\n",
    "        result_queue.put((\"error\", worker_id, str(e)))\n",
    "        result_queue.put(None)  # Signal completion despite error"
   ]
  },
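  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each worker computes gradients against parameters that may already be stale; the staleness window is bounded by how often the worker re-syncs, which is why A3C workers typically pull the latest global weights at the start of every n-step rollout. A minimal sketch of that sync step, using toy `nn.Linear` stand-ins for the actor-critic network (illustrative only, not the notebook's network):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "global_model = nn.Linear(4, 2)   # stands in for the shared global network\n",
    "global_model.share_memory()      # make parameters visible to all workers\n",
    "local_model = nn.Linear(4, 2)    # a worker's private copy\n",
    "\n",
    "# Top of each rollout: copy the latest global weights into the local model\n",
    "local_model.load_state_dict(global_model.state_dict())\n",
    "assert all(torch.equal(lp, gp) for lp, gp in\n",
    "           zip(local_model.parameters(), global_model.parameters()))\n",
    "```"
   ]
  },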
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setting up Shared Optimizer (Simplified Approach)\n",
    "\n",
    "For simplicity, we'll create a standard optimizer instance in the main process and pass it to the workers. Each worker computes gradients on its local model, manually copies them onto the corresponding parameters of the *global* model, and then calls `optimizer.step()`. This avoids implementing a custom `SharedAdam`, but note a subtlety: after forking, each worker holds its own copy of the Adam moment estimates (only the model *parameters* are in shared memory), so this simplified approach can behave less consistently than a true shared optimizer, especially under high contention."
   ]
  },
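  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the classic alternative is a `SharedAdam` whose moment estimates live in shared memory, so all workers update the same optimizer state. A minimal, hedged sketch (illustrative only; not part of this notebook, and newer PyTorch versions that manage the `step` state differently may need adjustments):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "class SharedAdam(torch.optim.Adam):\n",
    "    \"\"\"Adam with state pre-allocated in shared memory (illustrative sketch).\"\"\"\n",
    "    def __init__(self, params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8):\n",
    "        super().__init__(params, lr=lr, betas=betas, eps=eps)\n",
    "        for group in self.param_groups:\n",
    "            for p in group['params']:\n",
    "                state = self.state[p]\n",
    "                state['step'] = torch.zeros(1)\n",
    "                state['exp_avg'] = torch.zeros_like(p.data).share_memory_()\n",
    "                state['exp_avg_sq'] = torch.zeros_like(p.data).share_memory_()\n",
    "```\n",
    "\n",
    "With shared state, each worker can call `optimizer.step()` and the moment estimates stay consistent across processes; the simplified approach used here trades that consistency for less code."
   ]
  },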
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optimizer will be created in the main process and passed to workers\n",
    "# Workers will compute gradients locally and apply them to the shared model parameters\n",
    "# Example: optimizer = optim.Adam(global_model.parameters(), lr=...) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Running the A3C Algorithm\n",
    "\n",
    "Set up hyperparameters, initialize the global network, shared counter, optimizer, and worker processes.\n",
    "\n",
    "### Hyperparameter Setup\n",
    "\n",
    "Define A3C hyperparameters, including the number of workers and n-step length."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hyperparameters for A3C on Custom Grid World\n",
    "GAMMA_A3C = 0.99             # Discount factor\n",
    "LR_A3C = 1e-4                # Learning rate (often lower in A3C due to async updates)\n",
    "N_STEPS = 5                  # Steps per update (n-step)\n",
    "VALUE_LOSS_COEFF_A3C = 0.5   # Coefficient for value loss\n",
    "ENTROPY_COEFF_A3C = 0.01     # Coefficient for entropy bonus\n",
    "\n",
    "NUM_WORKERS = 4              # Ideally mp.cpu_count() to match available CPU cores\n",
    "MAX_GLOBAL_STEPS_A3C = 50000  # Total training steps across all workers\n",
    "MAX_STEPS_PER_EPISODE_A3C = 200 # Max steps per episode for environment reset\n",
    "\n",
    "# Ensure environment parameters are available\n",
    "ENV_ROWS = 10\n",
    "ENV_COLS = 10"
   ]
  },
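  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make `N_STEPS` concrete: the n-step target follows the recursion $R_t = r_t + \\gamma R_{t+1}$, bootstrapped from the critic's value estimate at step $t+n$. The helper below is a standalone sketch (not the notebook's `compute_n_step_returns_advantages`, which also handles episode terminations and advantages):\n",
    "\n",
    "```python\n",
    "def n_step_returns(rewards, bootstrap_value, gamma):\n",
    "    \"\"\"Discounted n-step returns, computed backwards from the bootstrap.\"\"\"\n",
    "    R = bootstrap_value\n",
    "    returns = []\n",
    "    for r in reversed(rewards):\n",
    "        R = r + gamma * R\n",
    "        returns.insert(0, R)\n",
    "    return returns\n",
    "\n",
    "# n=3 rewards, bootstrapped with V(s_{t+3}) = 1.0 and gamma = 0.9\n",
    "print(n_step_returns([0.0, 0.0, 1.0], 1.0, 0.9))  # ~[1.539, 1.71, 1.9]\n",
    "```\n",
    "\n",
    "Larger `N_STEPS` propagates real rewards further before bootstrapping (less bias, more variance); smaller values lean harder on the critic."
   ]
  },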
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialization (Global Network, Shared Counter, Workers)\n",
    "\n",
    "Create the global network, ensure its parameters are in shared memory, create the optimizer, and set up worker processes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize Global Network\n",
    "global_model_a3c = ActorCriticNetwork(n_observations_custom, n_actions_custom).to(device)\n",
    "# Crucial step: Ensure model parameters are shared across processes\n",
    "global_model_a3c.share_memory()\n",
    "\n",
    "# Initialize Optimizer (acting on the shared global model's parameters)\n",
    "global_optimizer_a3c = optim.Adam(global_model_a3c.parameters(), lr=LR_A3C)\n",
    "\n",
    "# Shared counter for total steps taken\n",
    "global_step_counter = mp.Value('i', 0) # 'i' for integer\n",
    "\n",
    "# Queue for workers to send back results (episode rewards/lengths)\n",
    "manager = mp.Manager()\n",
    "result_queue = manager.Queue()\n",
    "\n",
    "# Lists for plotting overall progress (collected from queue)\n",
    "a3c_episode_rewards = []\n",
    "a3c_episode_lengths = []\n",
    "\n",
    "# Create worker processes\n",
    "workers = []\n",
    "for i in range(NUM_WORKERS):\n",
    "    p = mp.Process(target=worker,\n",
    "                   args=(i, global_model_a3c, global_optimizer_a3c, \n",
    "                         global_step_counter, MAX_GLOBAL_STEPS_A3C, \n",
    "                         ENV_ROWS, ENV_COLS, N_STEPS, GAMMA_A3C, \n",
    "                         VALUE_LOSS_COEFF_A3C, ENTROPY_COEFF_A3C, result_queue))\n",
    "    workers.append(p)"
   ]
  },
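  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the shared counter: incrementing an `mp.Value` is a read-modify-write, which is *not* atomic, so workers should hold the value's lock while incrementing. A standalone toy sketch of the pattern (the notebook's worker receives `global_step_counter` for exactly this purpose):\n",
    "\n",
    "```python\n",
    "import multiprocessing as mp\n",
    "\n",
    "def bump(counter, n):\n",
    "    for _ in range(n):\n",
    "        with counter.get_lock():  # guard the read-modify-write\n",
    "            counter.value += 1\n",
    "\n",
    "counter = mp.Value('i', 0)  # 'i' = C int in shared memory\n",
    "procs = [mp.Process(target=bump, args=(counter, 1000)) for _ in range(4)]\n",
    "for p in procs: p.start()\n",
    "for p in procs: p.join()\n",
    "print(counter.value)  # 4000 with the lock; may come up short without it\n",
    "```"
   ]
  },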
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training Execution (Starting & Joining Workers)\n",
    "\n",
    "Start the worker processes and wait for them to complete. Collect results from the queue for plotting."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "###########################################################\n",
    "# Run this training loop via the `a3c_training.py` script  \n",
    "# in a terminal; multiprocessing is unreliable in Jupyter. \n",
    "###########################################################\n",
    "\n",
    "print(f\"Starting A3C Training with {NUM_WORKERS} workers...\")\n",
    "start_time = time.time()\n",
    "\n",
    "# Start all worker processes\n",
    "for p in workers:\n",
    "    p.start()\n",
    "\n",
    "# Monitor the queue and collect results while workers are running\n",
    "completed_workers = 0\n",
    "while completed_workers < NUM_WORKERS:\n",
    "    result = result_queue.get() # Blocking call\n",
    "    if result is None: # Worker finished signal\n",
    "        completed_workers += 1\n",
    "    elif isinstance(result, tuple) and result and result[0] == \"error\":\n",
    "        _, err_worker_id, err_msg = result\n",
    "        print(f\"Worker {err_worker_id} reported error: {err_msg}\")\n",
    "    else:\n",
    "        ep_reward, ep_length = result\n",
    "        a3c_episode_rewards.append(ep_reward)\n",
    "        a3c_episode_lengths.append(ep_length)\n",
    "        # Optionally print progress as episodes finish\n",
    "        if len(a3c_episode_rewards) % 50 == 0:\n",
    "             print(f\" > Steps: {global_step_counter.value}, Episodes: {len(a3c_episode_rewards)}, Avg Reward (last 50): {np.mean(a3c_episode_rewards[-50:]):.2f}\")\n",
    "\n",
    "# Ensure all workers have finished\n",
    "for p in workers:\n",
    "    p.join()\n",
    "\n",
    "end_time = time.time()\n",
    "print(f\"\\nCustom Grid World Training Finished (A3C).\")\n",
    "print(f\"Total steps: {global_step_counter.value}\")\n",
    "print(f\"Total episodes: {len(a3c_episode_rewards)}\")\n",
    "print(f\"Training time: {end_time - start_time:.2f} seconds\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Visualizing the Learning Process (Challenges with Async)\n",
    "\n",
    "Plotting A3C results requires collecting statistics from workers. Since updates are asynchronous, plotting loss per 'iteration' isn't as meaningful as in synchronous methods. We plot the collected episode rewards and lengths over time (approximated by episode count).\n",
    "\n",
    "*Note: The x-axis represents episode completion order across all workers, not synchronous iterations.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plotting collected results for A3C on Custom Grid World\n",
    "plt.figure(figsize=(15, 4))\n",
    "\n",
    "# Episode Rewards\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.plot(a3c_episode_rewards, alpha=0.6, label='Raw Ep Reward')\n",
    "plt.title('A3C Custom Grid: Episode Rewards (Across Workers)')\n",
    "plt.xlabel('Episode Count (Completion Order)')\n",
    "plt.ylabel('Total Reward')\n",
    "plt.grid(True)\n",
    "# Add moving average\n",
    "if len(a3c_episode_rewards) >= 100:\n",
    "    rewards_ma_a3c = np.convolve(a3c_episode_rewards, np.ones(100)/100, mode='valid')\n",
    "    plt.plot(np.arange(len(rewards_ma_a3c)) + 99, rewards_ma_a3c, label='100-episode MA', color='orange', linewidth=2)\n",
    "plt.legend()\n",
    "\n",
    "# Episode Lengths\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.plot(a3c_episode_lengths, alpha=0.6, label='Raw Ep Length')\n",
    "plt.title('A3C Custom Grid: Episode Lengths (Across Workers)')\n",
    "plt.xlabel('Episode Count (Completion Order)')\n",
    "plt.ylabel('Steps')\n",
    "plt.grid(True)\n",
    "# Add moving average\n",
    "if len(a3c_episode_lengths) >= 100:\n",
    "    lengths_ma_a3c = np.convolve(a3c_episode_lengths, np.ones(100)/100, mode='valid')\n",
    "    plt.plot(np.arange(len(lengths_ma_a3c)) + 99, lengths_ma_a3c, label='100-episode MA', color='orange', linewidth=2)\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Analysis of A3C Learning Curves (Custom Grid World):**\n",
    "\n",
    "1.  **Episode Rewards over Time (Left Plot):**\n",
    "    The agent shows significant learning, with the moving average (orange line) climbing steadily from highly negative rewards towards the optimal range (around 8) by episode ~300. The raw episode rewards (blue line) exhibit considerable variance, a common characteristic of A3C: asynchronous updates from multiple actors decorrelate experience but can produce noisier gradients than synchronous methods like A2C or PPO. Still, the overall trend is clearly positive and convergence is stable.\n",
    "\n",
    "2.  **Episode Lengths over Time (Right Plot):**\n",
    "    The episode length plot confirms the learning progress. Initially high and volatile (hitting the max-step cap frequently), the moving average (orange line) shows a sharp decrease, particularly between episodes ~150 and 300, eventually stabilizing near the optimal path length (~20 steps). The raw episode lengths (blue line) remain somewhat noisy but cluster tightly around the optimal value post-convergence, indicating efficient policy learning.\n",
    "\n",
    "**Overall Conclusion:**\n",
    "A3C successfully learns an effective and efficient policy for the Grid World task, as shown by the convergent reward and episode length curves. The asynchronous nature contributes to exploration and potentially faster wall-clock time on multi-core systems, but also introduces noticeable variance in the raw performance metrics compared to its synchronous counterpart (A2C) or methods with stronger policy constraints (PPO/TRPO)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Analyzing the Learned Policy (Optional Visualization)\n",
    "\n",
    "Visualize the policy learned by the *global* A3C actor network."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reusing the policy plotting function\n",
    "def plot_a3c_policy_grid(policy_net: ActorCriticNetwork, env: GridEnvironment, device: torch.device) -> None:\n",
    "    \"\"\"\n",
    "    Plots the greedy policy derived from the A3C Actor network component.\n",
    "    Shows the most likely action for each state.\n",
    "    \"\"\"\n",
    "    rows: int = env.rows\n",
    "    cols: int = env.cols\n",
    "    policy_grid: np.ndarray = np.empty((rows, cols), dtype=str)\n",
    "    action_symbols: Dict[int, str] = {0: '↑', 1: '↓', 2: '←', 3: '→'}\n",
    "\n",
    "    fig, ax = plt.subplots(figsize=(cols * 0.6, rows * 0.6))\n",
    "\n",
    "    # Ensure model is on the correct device for inference\n",
    "    policy_net.to(device) \n",
    "    policy_net.eval() # Set to evaluation mode\n",
    "\n",
    "    for r in range(rows):\n",
    "        for c in range(cols):\n",
    "            state_tuple: Tuple[int, int] = (r, c)\n",
    "            if state_tuple == env.goal_state:\n",
    "                policy_grid[r, c] = 'G'\n",
    "                ax.text(c, r, 'G', ha='center', va='center', color='green', fontsize=12, weight='bold')\n",
    "            else:\n",
    "                # Use the specific device for state tensor\n",
    "                state_tensor: torch.Tensor = env._get_state_tensor(state_tuple).to(device)\n",
    "                with torch.no_grad():\n",
    "                    action_dist, _ = policy_net(state_tensor)\n",
    "                    best_action: int = action_dist.probs.argmax(dim=1).item()\n",
    "\n",
    "                policy_grid[r, c] = action_symbols[best_action]\n",
    "                ax.text(c, r, policy_grid[r, c], ha='center', va='center', color='black', fontsize=12)\n",
    "\n",
    "    ax.matshow(np.zeros((rows, cols)), cmap='Greys', alpha=0.1)\n",
    "    ax.set_xticks(np.arange(-.5, cols, 1), minor=True)\n",
    "    ax.set_yticks(np.arange(-.5, rows, 1), minor=True)\n",
    "    ax.grid(which='minor', color='black', linestyle='-', linewidth=1)\n",
    "    ax.set_xticks([])\n",
    "    ax.set_yticks([])\n",
    "    ax.set_title(\"A3C Learned Policy (Most Likely Action)\")\n",
    "    plt.show()\n",
    "\n",
    "# Plot the policy learned by the final global A3C actor\n",
    "# Ensure the global model is accessible and potentially moved to eval device if needed\n",
    "print(\"\\nPlotting Learned Policy from Global A3C Model:\")\n",
    "plot_a3c_policy_grid(global_model_a3c, temp_env, device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Common Challenges and Solutions in A3C\n",
    "\n",
    "**Challenge: Implementation Complexity (Multiprocessing)**\n",
    "*   **Problem:** Correctly managing shared memory for the global network, handling shared optimizers (or the gradient passing mechanism), synchronizing workers, and collecting results requires careful use of multiprocessing libraries and can be prone to bugs (deadlocks, race conditions).\n",
    "*   **Solutions**:\n",
    "    *   **Use `torch.multiprocessing`:** Specifically designed to handle PyTorch tensors and models in parallel processes.\n",
    "    *   **`.share_memory_()`:** Essential for making global model parameters accessible to workers.\n",
    "    *   **Simplified Optimizer Approach:** Using local optimizers acting on shared parameters (as implemented here) avoids complex shared optimizer states but might be less efficient.\n",
    "    *   **Thorough Testing:** Debugging multiprocessing code can be difficult.\n",
    "\n",
    "**Challenge: Stale Gradients**\n",
    "*   **Problem:** Because workers update asynchronously, a worker might compute gradients based on slightly outdated global parameters (parameters that have been updated by other workers since the local worker last synced). This can introduce noise or bias.\n",
    "*   **Solutions**:\n",
    "    *   **Generally Tolerated:** A3C's design relies on this effect for decorrelation; it often works well despite staleness.\n",
    "    *   **Lower Learning Rates:** Can mitigate the impact of noisy/stale updates.\n",
    "    *   **A2C/PPO:** Synchronous updates avoid stale gradients entirely.\n",
    "\n",
    "**Challenge: Resource Utilization (CPU vs. GPU)**\n",
    "*   **Problem:** A3C was designed for multi-core CPUs. While workers run efficiently on CPUs, applying gradients to a global model on a GPU requires data transfer and might not fully utilize the GPU if updates are too frequent and small. A2C often achieves better GPU utilization with batched updates.\n",
    "*   **Solutions**:\n",
    "    *   **Run Global Model on CPU:** Simplifies implementation if workers are CPU-bound.\n",
    "    *   **Optimize Data Transfer:** If using a GPU for the global model, ensure efficient handling of gradient transfers.\n",
    "    *   **Consider A2C:** Often better suited for GPU-centric training.\n",
    "\n",
    "**Challenge: Hyperparameter Tuning**\n",
    "*   **Problem:** A3C has several sensitive hyperparameters, including the number of workers, n-step length, learning rates, and coefficients.\n",
    "*   **Solutions**:\n",
    "    *   **N-step Length:** Balances bias (low n) and variance (high n). Common values range from 5 to 20.\n",
    "    *   **Number of Workers:** More workers increase data diversity but also the potential for stale gradients. Often matched to the number of CPU cores.\n",
    "    *   **Learning Rates:** Often need to be lower than in synchronous methods.\n",
    "\n",
    "## Conclusion\n",
    "\n",
    "Asynchronous Advantage Actor-Critic (A3C) marked a significant development by successfully leveraging parallelism for stable and efficient reinforcement learning without relying on experience replay. Its core innovation lies in using multiple workers interacting with independent environment instances to generate diverse experiences and apply asynchronous updates to a shared global network, effectively decorrelating the data and stabilizing the learning process.\n",
    "\n",
    "While its implementation complexity, potential for stale gradients, and often less optimal GPU utilization have led to the increased popularity of synchronous methods like A2C and PPO, A3C remains historically important. It demonstrated the power of asynchronous parallel training and influenced subsequent algorithm design. Understanding A3C provides valuable context for the evolution of actor-critic methods and the trade-offs between synchronous and asynchronous learning paradigms in RL."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv-all-rl-algos",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
