{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Algorithm: Soft Actor-Critic<br>\n",
     "(from the paper)\n",
    "***\n",
     "**Input:**  $\\theta_1, \\theta_2, \\phi$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;$\\bar{\\theta}_1\\leftarrow \\theta_1 ,\\bar{\\theta}_2\\leftarrow \\theta_2$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;$D \\leftarrow \\varnothing$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;**for** each iteration **do**<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**for** each environment step **do**<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$a_t \\sim \\pi_\\phi(a_t|s_t)$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$s_{t+1}\\sim p(s_{t+1}|s_t, a_t)$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$D \\leftarrow D\\cup\\{(s_t,a_t,r(s_t,a_t),s_{t+1})\\}$<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end for**<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**for** each gradient step **do**<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\\theta_i\\leftarrow\\theta_i - \\lambda_Q\\hat{\\nabla}_{\\theta_i}J_Q(\\theta_i) $&nbsp;&nbsp;for $i\\in \\{1,2\\}$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update the Q-function parameters<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\\phi \\leftarrow \\phi - \\lambda_\\pi\\hat{\\nabla}_\\phi J_\\pi(\\phi)$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update policy weights<br>\n",
     "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\\alpha\\leftarrow \\alpha - \\lambda\\hat{\\nabla}_\\alpha J(\\alpha)$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Adjust temperature<br>\n",
     "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\\bar{\\theta}_i\\leftarrow \\tau\\theta_i+(1-\\tau)\\bar{\\theta}_i $&nbsp;&nbsp;for $i\\in \\{1,2\\}$&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update target network weights<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end for**<br>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;**end for**<br>\n",
    "**Output:**&nbsp;$\\theta_1,\\theta_2,\\phi$"
   ]
  },
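  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target-network update in the last gradient step (Polyak averaging) can be sketched with plain numpy; the parameter values below are arbitrary toy numbers, not from the paper:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def polyak_update(theta, theta_bar, tau):\n",
    "    # theta_bar <- tau * theta + (1 - tau) * theta_bar\n",
    "    return tau * theta + (1.0 - tau) * theta_bar\n",
    "\n",
    "theta = np.array([1.0, 2.0])      # current Q-network parameters (toy)\n",
    "theta_bar = np.array([0.0, 0.0])  # target-network parameters (toy)\n",
    "tau = 0.005                       # typical SAC smoothing coefficient\n",
    "theta_bar = polyak_update(theta, theta_bar, tau)\n",
    "```\n",
    "A small $\\tau$ makes the target network track the Q-network slowly, which stabilizes the regression targets in $J_Q$."
   ]
  },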
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Main Formulas\n",
    "1. Soft Bellman residual:\n",
    "$$\n",
    "J_Q(\\theta)=\\mathbb{E}_{(s_t,a_t)\\sim D}\\big[\\frac{1}{2}(Q_\\theta(s_t,a_t)-{\\cal{T}}^\\pi Q(s_t,a_t))^2\\big]\\tag{1}\n",
    "$$\n",
     "Soft Bellman backup operator:\n",
    "$$\n",
     "{\\cal{T}}^\\pi Q(s_t,a_t) \\triangleq r(s_t,a_t)+\\gamma\\mathbb{E}_{s_{t+1}\\sim p}[V_{\\bar{\\theta}}(s_{t+1})]\\tag{2}\n",
    "$$\n",
    "Soft state value function:\n",
    "$$\n",
    "V(s_t) = \\mathbb{E}_{a_t\\sim \\pi}[Q(s_t,a_t)-\\alpha\\log\\pi(a_t|s_t)]\\tag{3}\n",
    "$$\n",
     "From this, the gradient estimate of the soft Bellman residual follows:\n",
    "$$\n",
    "\\hat{\\nabla}_\\theta J_Q(\\theta)=\\nabla_\\theta Q_\\theta(a_t,s_t)(Q_\\theta(s_t,a_t)-(r(s_t,a_t)+\\gamma(Q_{\\bar{\\theta}}(s_{t+1},a_{t+1})-\\alpha\\log(\\pi_\\phi(a_{t+1}|s_{t+1}))))\\tag{4}\n",
    "$$\n",
    "2. Policy Loss:\n",
    "$$\n",
     "J_\\pi(\\phi)=-\\mathbb{E}_{s_t\\sim D}\\big[\\mathbb{E}_{a_t\\sim \\pi_\\phi}[Q_\\theta(s_t,a_t)-\\alpha\\log(\\pi_\\phi(a_t|s_t))]\\big]\\tag{5}\n",
    "$$\n",
     "Also, with the reparameterization\n",
    "$$\n",
    "a_t=f_\\phi(\\epsilon_t;s_t),\\tag{6}\n",
    "$$\n",
     "the loss can be rewritten as:\n",
    "$$\n",
    "J_\\pi(\\phi)=-\\mathbb{E}_{s_t\\sim D,\\;\\epsilon_t\\sim N}[Q_\\theta(s_t,f_\\phi(\\epsilon_t;s_t))-\\alpha\\log\\pi_\\phi(f_\\phi(\\epsilon_t;s_t)|s_t)]\\tag{7}\n",
    "$$\n",
     "so its gradient estimate takes the form:\n",
    "$$\n",
    "\\hat{\\nabla}_\\phi J_\\pi(\\phi)=\\nabla_\\phi\\alpha\\log(\\pi_\\phi(a_t|s_t))+\\big(\\nabla_{a_t}\\alpha\\log(\\pi_\\phi(a_t|s_t))-\\nabla_{a_t}Q(s_t,a_t)\\big)\\nabla_\\phi f_\\phi(\\epsilon_t;s_t),\\tag{8}\n",
    "$$\n",
     "3. Adaptive temperature $\\alpha$ (the paper says $\\alpha$, $Q$, and $\\pi$ form a dual problem, which is still a bit unclear to me):\n",
     "$$\n",
     "\\alpha^*_t=\\arg {\\min_{\\alpha_t}}\\mathbb{E}_{a_t\\sim\\pi^*_t}[-\\alpha_t\\log\\pi^*_t(a_t|s_t;\\alpha_t)-\\alpha_t\\bar{H}]\\tag{9}\n",
    "$$\n"
   ]
  },
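  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Equations (2) and (3) combine into the TD target used in $J_Q$; here is a minimal numeric sketch with a discrete toy policy (all values below are made up for illustration):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha, gamma = 0.2, 0.99\n",
    "r = 1.0                         # toy reward r(s_t, a_t)\n",
    "pi_next = np.array([0.7, 0.3])  # toy pi(.|s_{t+1}) over two actions\n",
    "q_next = np.array([1.0, 2.0])   # toy target Q(s_{t+1}, .)\n",
    "\n",
    "# V(s_{t+1}) = E_{a~pi}[Q(s_{t+1}, a) - alpha * log pi(a|s_{t+1})]   (eq. 3)\n",
    "v_next = np.sum(pi_next * (q_next - alpha * np.log(pi_next)))\n",
    "\n",
    "# T^pi Q(s_t, a_t) = r + gamma * V(s_{t+1})                          (eq. 2)\n",
    "td_target = r + gamma * v_next\n",
    "```\n",
    "The squared difference between $Q_\\theta(s_t,a_t)$ and this target is exactly the soft Bellman residual of equation (1)."
   ]
  },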
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Formula Proofs\n",
     "1. Proof of formulas (2) and (3):<br>\n",
     "Starting from the first action transition, the soft reward obtained for the action taken at each state can be defined as follows:<br><br>\n",
    "$$\n",
    "r_{soft}(s_t,a_t)=r(s_t,a_t)+\\gamma\\alpha\\mathbb{E}_{s_{t+1}\\sim \\rho}H(\\pi(\\cdot|s_{t+1}))\\tag{10}\n",
    "$$<br>\n",
     "Substituting this into the original Q function $Q(s_t,a_t)=r(s_t,a_t)+\\gamma\\mathbb{E}_{s_{t+1},a_{t+1}}[Q(s_{t+1},a_{t+1})]$ gives:<br><br>\n",
    "$$\n",
    "\\begin{aligned}\n",
    "Q_{soft}(s_t,a_t)&=r(s_t,a_t)+\\gamma\\alpha\\mathbb{E}_{s_{t+1}\\sim\\rho}H(\\pi(\\cdot|s_{t+1}))+\\gamma\\mathbb{E}_{s_{t+1},a_{t+1}}[Q_{soft}(s_{t+1},a_{t+1})]\\\\\n",
    "&=r(s_t,a_t)+\\gamma\\mathbb{E}_{s_{t+1}\\sim\\rho,a_{t+1}\\sim\\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\\gamma\\alpha\\mathbb{E}_{s_{t+1}\\sim\\rho}H(\\pi(\\cdot|s_{t+1}))\\\\\n",
    "&=r(s_t,a_t)+\\gamma\\mathbb{E}_{s_{t+1}\\sim\\rho,a_{t+1}\\sim\\pi}[Q_{soft}(s_{t+1},a_{t+1})]+\\gamma\\mathbb{E}_{s_{t+1}\\sim\\rho}\\mathbb{E}_{a_{t+1}\\sim\\pi}[-\\alpha\\log\\pi(a_{t+1}|s_{t+1})]\\\\\n",
    "&=r(s_t,a_t)+\\gamma\\mathbb{E}_{s_{t+1}\\sim\\rho}[\\mathbb{E}_{a_{t+1}\\sim\\pi}[Q_{soft}(s_{t+1},a_{t+1})-\\alpha\\log(\\pi(a_{t+1}|s_{t+1}))]]\n",
    "\\end{aligned}\\tag{11}\n",
    "$$\n",
     "2. Proof of formula $f$"
   ]
  },
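  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The last line of the derivation says the bracketed expectation is just the expected Q-value plus an entropy bonus; this identity is easy to verify numerically on a toy distribution (all values below are made up):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "alpha = 0.2\n",
    "pi = np.array([0.5, 0.25, 0.25])  # toy pi(.|s_{t+1})\n",
    "q = np.array([1.0, 0.5, 2.0])     # toy Q_soft(s_{t+1}, .)\n",
    "\n",
    "# E_{a~pi}[Q - alpha * log pi]  ==  E_{a~pi}[Q] + alpha * H(pi)\n",
    "lhs = np.sum(pi * (q - alpha * np.log(pi)))\n",
    "entropy = -np.sum(pi * np.log(pi))\n",
    "rhs = np.sum(pi * q) + alpha * entropy\n",
    "```\n",
    "The equality holds by linearity of expectation and the definition $H(\\pi)=\\mathbb{E}_{a\\sim\\pi}[-\\log\\pi(a|s)]$."
   ]
  },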
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### [Relative entropy (KL divergence)](https://blog.csdn.net/tsyccnh/article/details/79163834)\n",
     "For two probability distributions $P(x)$ and $Q(x)$ over the same random variable $x$, it measures the difference between the two distributions:\n",
    "$$\n",
    "D_{KL}(p||q)=\\sum_{i=1}^n p(x_i)\\log[\\frac{p(x_i)}{q(x_i)}]\n",
    "$$\n",
     "The closer $D_{KL}$ is to 0, the closer the distributions $p$ and $q$ are.<br><br>\n",
     "Expanding:\n",
    "$$\n",
    "\\begin{aligned}\n",
     "D_{KL}(p||q)&=\\sum_{i=1}^np(x_i)\\log(p(x_i))-\\sum^n_{i=1}p(x_i)\\log(q(x_i))\\\\\n",
    "&=-H(p(x))+[-\\sum^n_{i=1}p(x_i)\\log(q(x_i))]\n",
    "\\end{aligned}\n",
    "$$\n",
     "In classification problems, if the label distribution is $p$, the first term is constant, so only the second term needs to be computed, i.e. the **cross entropy**."
   ]
  }
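  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decomposition above (KL = cross entropy minus entropy) can be checked numerically; the two distributions below are arbitrary toy values:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "p = np.array([0.6, 0.4])  # 'label' distribution p\n",
    "q = np.array([0.5, 0.5])  # model distribution q\n",
    "\n",
    "kl = np.sum(p * np.log(p / q))          # D_KL(p||q)\n",
    "cross_entropy = -np.sum(p * np.log(q))  # H(p, q)\n",
    "entropy = -np.sum(p * np.log(p))        # H(p)\n",
    "\n",
    "# D_KL(p||q) = H(p, q) - H(p); since H(p) is fixed by the labels,\n",
    "# minimizing cross entropy w.r.t. q also minimizes the KL divergence\n",
    "```\n",
    "Note that $D_{KL}\\ge 0$, with equality only when $p=q$."
   ]
  }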
 ],
 "metadata": {
  "hide_input": false,
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
