{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Twin delayed DDPG\n",
    "\n",
     "Now, we will look at another interesting actor-critic algorithm, known as TD3\n",
     "(twin delayed deep deterministic policy gradient). TD3 is an improvement on\n",
     "(and essentially a successor to) the DDPG algorithm we just covered.\n",
    "\n",
    "In the previous section, we learned how DDPG uses a deterministic policy to\n",
    "work on the continuous action space. DDPG has several advantages and has been\n",
    "successfully used in a variety of continuous action space environments.\n",
    "\n",
     "We learned that DDPG is an actor-critic method in which the actor is a policy\n",
     "network that finds the optimal policy, while the critic evaluates the policy\n",
     "produced by the actor by estimating the Q function using a DQN.\n",
     "One of the problems with DDPG is that the critic overestimates the target Q value.\n",
     "This overestimation causes several issues. Since the policy is improved based\n",
     "on the Q value given by the critic, an approximation error in the Q value\n",
     "causes stability issues in our policy, and the policy may converge to a local\n",
     "optimum.\n",
    "\n",
    "Thus, to combat this, TD3 proposes three important features, which are as follows:\n",
    "1. Clipped double Q learning\n",
    "2. Delayed policy updates\n",
    "3. Target policy smoothing\n",
    "\n",
    "First, we will understand how TD3 works intuitively, and then we will look at the\n",
    "algorithm in detail.\n",
    "\n",
    "\n",
    "## Key features of TD3\n",
    "\n",
     "TD3 is essentially the same as DDPG, except that it introduces three important\n",
     "features to mitigate the problems in DDPG. In this section, let's first get a basic\n",
     "understanding of these three key features:\n",
    "\n",
     "__Clipped double Q learning:__ Instead of using one critic network, we use two\n",
     "main critic networks to compute the Q value, and two target critic networks\n",
     "to compute the target value.\n",
     "We compute two target Q values using the two target critic networks and use the\n",
     "minimum of the two while computing the loss. This helps to prevent\n",
     "overestimation of the target Q value. We will learn about this in detail\n",
     "in the next section.\n",
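     "\n",
     "As a minimal sketch of the clipped double Q target, assume the two target Q\n",
     "values have already been produced by the two target critic networks (the\n",
     "arrays below are made-up illustrative numbers, not outputs of a real agent):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Hypothetical outputs of the two target critic networks for a batch\n",
     "# of (next state, action) pairs.\n",
     "target_q1 = np.array([10.2, 8.5, 12.1])\n",
     "target_q2 = np.array([9.8, 9.0, 11.4])\n",
     "\n",
     "rewards = np.array([1.0, 0.5, 2.0])\n",
     "dones = np.array([0.0, 0.0, 1.0])   # 1.0 marks a terminal transition\n",
     "gamma = 0.99                        # discount factor\n",
     "\n",
     "# Clipped double Q learning: take the element-wise minimum of the two\n",
     "# target Q values to curb overestimation when forming the target.\n",
     "min_target_q = np.minimum(target_q1, target_q2)\n",
     "y = rewards + gamma * (1.0 - dones) * min_target_q\n",
     "```\n",
     "\n",
     "Both main critics are then trained to regress toward this same target `y`.\n",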
    "\n",
     "__Delayed policy updates:__ In DDPG, we learned that we update the parameters\n",
     "of both the actor (policy network) and the critic (DQN) at every step\n",
     "of the episode. Unlike DDPG, here we delay updating the parameters of the\n",
     "actor network.\n",
     "That is, the critic network parameters are updated at every step of the episode,\n",
     "but the actor network (policy network) parameters are updated only once\n",
     "every two steps of the episode.\n",
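     "\n",
     "The delayed schedule can be sketched with simple counters standing in for\n",
     "the actual gradient updates (a toy illustration, not the full training loop):\n",
     "\n",
     "```python\n",
     "policy_delay = 2   # actor is updated once per two critic updates\n",
     "\n",
     "critic_updates = 0\n",
     "actor_updates = 0\n",
     "for step in range(1, 11):          # 10 environment steps\n",
     "    critic_updates += 1            # critic parameters updated every step\n",
     "    if step % policy_delay == 0:\n",
     "        actor_updates += 1         # delayed actor (policy) update\n",
     "```\n",
     "\n",
     "Over 10 steps, the critic is updated 10 times but the actor only 5 times; in\n",
     "TD3, the target networks are also updated on this delayed schedule.\n",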
    "\n",
     "__Target policy smoothing:__ With a deterministic target policy, small errors\n",
     "in the critic can produce very different target values for similar actions,\n",
     "so the variance of the target value is high. We reduce this variance by adding\n",
     "a small amount of noise to the target action.\n",
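     "\n",
     "A sketch of target policy smoothing, assuming the target actor has already\n",
     "produced a deterministic target action (the action values, noise scale\n",
     "`sigma`, and clipping range `noise_clip` below are illustrative assumptions):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(0)\n",
     "\n",
     "# Hypothetical deterministic action from the target actor network.\n",
     "target_action = np.array([0.4, -0.7])\n",
     "action_low, action_high = -1.0, 1.0\n",
     "\n",
     "sigma = 0.2        # standard deviation of the smoothing noise\n",
     "noise_clip = 0.5   # noise is clipped to stay near the target action\n",
     "\n",
     "# Add clipped Gaussian noise, then clip back into the valid action range.\n",
     "noise = np.clip(rng.normal(0.0, sigma, size=target_action.shape),\n",
     "                -noise_clip, noise_clip)\n",
     "smoothed_action = np.clip(target_action + noise, action_low, action_high)\n",
     "```\n",
     "\n",
     "The target Q values are then computed at this smoothed action, so that\n",
     "similar actions receive similar target values.\n",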
    "\n",
    "\n",
    "__Now that we have a basic idea of the key features of TD3, in the next section we will get into\n",
    "more detail and learn how exactly these three key features work and how\n",
    "they solve the problems associated with DDPG.__"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
