{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python_defaultSpec_1598152227823",
   "display_name": "Python 3.7.8 64-bit ('venv': venv)"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 总结\n",
    "+ 走迷宫是根据在状态 s 上采取动作 a 的概率 $\\pi$ 来选择动作直到终点。\n",
    "+ 概率 $\\pi$ 由 `softmax` 函数求得\n",
    "+ `softmax` 函数如下：\n",
    "$$\n",
    "    P(\\theta_i) = \\frac{exp(\\beta\\theta_i)}{\\sum_{j=1}^{N_a}exp(\\beta\\theta_j)}\n",
    "$$\n",
    "+ 初始的 $\\theta$ 参数值可平均设定\n",
    "+ 每走一次迷宫后更新 $\\theta$ 值：\n",
    "$$\n",
    "    new~\\theta_{s_i,a_j} = old~\\theta_{s_i,a_j} + \\eta\\cdot\\Delta\\theta_{s,a_j}\n",
    "$$\n",
    "+ $\\Delta\\theta_{s,a_j}$ 值是该状态下该动作执行的次数 与 该状态下该动作概率值的插值 与 该次走迷宫总动作数的比值：\n",
    "$$\n",
    "    \\Delta\\theta_{s,a_j}=\\{N(s_i,a_j)-P(s_i,a_j)N(s_i,a)\\}/T\n",
    "$$"
   ]
  },
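   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The softmax in the summary can be made concrete on a single row of $\\theta$. A minimal sketch with toy values (not taken from the maze below), assuming $\\beta=1$: the two open directions share the probability mass equally."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Toy softmax sketch (illustrative values; beta assumed to be 1.0)\n",
     "import numpy as np\n",
     "\n",
     "theta_row = np.array([np.nan, 1.0, 1.0, np.nan])  # nan marks a wall\n",
     "beta = 1.0\n",
     "exp_theta = np.exp(beta * theta_row)\n",
     "p = np.nan_to_num(exp_theta / np.nansum(exp_theta))\n",
     "print(p)  # the two open directions each get probability 0.5"
    ]
   },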
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`策略 policy`：强化学习中定义智能体行为方式的规则：\n",
    "\n",
    "$\\pi_\\theta(s, a)$: 在状态 $s$ 下采取动作 $a$ 的概率遵循由参数 $\\theta$ 确定的策略 $\\pi$\n",
    "\n",
    "在迷宫任务中，\n",
    "\n",
    "状态`s`指示智能体在迷宫中的位置\n",
    "    有`S0~S8`九个状态\n",
    "\n",
    "动作`a`指智能体在该状态下执行的操作\n",
    "    有向上下左右移动四种类型\n",
    "\n",
    "策略 $\\pi$ 可通过各种方式表达\n",
    "    有时使用`函数`，有时使用`表格`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 迷宫的初始位置\n",
    "\n",
    "# 声明图的大小以及图的变量名\n",
    "fig = plt.figure(figsize=(5, 5))\n",
    "ax = plt.gca()\n",
    "\n",
    "# 画出红色的墙壁\n",
    "plt.plot([1, 1], [0, 1], color='red', linewidth=2)\n",
    "plt.plot([1, 2], [2, 2], color='red', linewidth=2)\n",
    "plt.plot([2, 2], [2, 1], color='red', linewidth=2)\n",
    "plt.plot([2, 3], [1, 1], color='red', linewidth=2)\n",
    "\n",
    "# 画出表示状态的文字 S0~S8\n",
    "plt.text(0.5, 2.5, 'S0', size=14, ha='center')\n",
    "plt.text(1.5, 2.5, 'S1', size=14, ha='center')\n",
    "plt.text(2.5, 2.5, 'S2', size=14, ha='center')\n",
    "plt.text(0.5, 1.5, 'S3', size=14, ha='center')\n",
    "plt.text(1.5, 1.5, 'S4', size=14, ha='center')\n",
    "plt.text(2.5, 1.5, 'S5', size=14, ha='center')\n",
    "plt.text(0.5, 0.5, 'S6', size=14, ha='center')\n",
    "plt.text(1.5, 0.5, 'S7', size=14, ha='center')\n",
    "\n",
    "plt.text(2.5, 0.5, 'S8', size=14, ha='center')\n",
    "plt.text(0.5, 2.3, 'START', ha='center')\n",
    "plt.text(2.5, 0.3, 'GOAL', ha='center')\n",
    "\n",
    "# 设定画图的范围\n",
    "ax.set_xlim(0, 3)\n",
    "ax.set_ylim(0, 3)\n",
    "plt.tick_params(axis='both', which='both', bottom='off', top='off',\n",
    "                labelbottom='off', right='off', left='off', labelleft='off')\n",
    "\n",
    "# 当前位置 S0 用绿色圆圈画出\n",
    "line, = ax.plot([0.5], [2.5], marker=\"o\", color='g', markersize=60)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "设定参数 $\\theta$ 的初始值 theta_0，用于确定初始方案\n",
    "\n",
    "行为状态 `0~7`，列为用$\\uparrow,\\rightarrow,\\downarrow,\\leftarrow$表示的移动方向\n",
    "\n",
    "`np.nan` 表示不含任何内容的缺省值\n",
    "\n",
    "`1` 表示该方向可以前进\n",
    "`mp.nan` 表示有墙壁"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "theta_0 = np.array([[np.nan, 1, 1, np.nan], # S0\n",
    "                    [np.nan, 1, np.nan, 1], # S1\n",
    "                    [np.nan, np.nan, 1, 1], # S2\n",
    "                    [1, 1, 1, np.nan], # S3\n",
    "                    [np.nan, np.nan, 1, 1], # S4\n",
    "                    [1, np.nan, np.nan, np.nan], # S5\n",
    "                    [1, np.nan, np.nan, np.nan], # S6\n",
    "                    [1, 1, np.nan, np.nan], # S7，S8是目标所以无策略\n",
    "                    ])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "求初始策略 $\\pi$\n",
    "\n",
    "对参数 $\\theta_0$ 进行转换以求 $\\pi_\\theta(s,a)$\n",
    "\n",
    "简单的转换方法，将对应于前进方向的 $\\theta$ 值转换为百分比\n",
    "\n",
    "定义函数 `simple_convert_into_pi_from_theta`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def simple_convert_into_pi_from_theta(theta):\n",
    "    '''简单的计算百分比'''\n",
    "    [m, n] = theta.shape    # 获取 θ 的矩阵大小\n",
    "    pi = np.zeros((m, n))   # pi定义为全 0 矩阵\n",
    "    for i in range(0, m):   # 计算百分比\n",
    "        pi[i, :] = theta[i, :] / np.nansum(theta[i, :])\n",
    "\n",
    "    pi = np.nan_to_num(pi)  # 将 nan 转换为 0\n",
    "\n",
    "    return pi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 求初始策略 pi\n",
    "pi_0 = simple_convert_into_pi_from_theta(theta_0)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pi_0    # 初始时，每个位置上向每个可移动方向移动的概率相同"
   ]
  },
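   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick sanity check (optional): each row of `pi_0` is a probability distribution over the available moves, so every row should sum to 1."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Each row of pi_0 should sum to 1 (a distribution over the open directions)\n",
     "print(pi_0.sum(axis=1))"
    ]
   },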
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "然后，让智能体根据策略 $\\pi_{\\theta_0}(s, a)$ 行动。\n",
    "\n",
    "定义一个 get_next_s 函数，在 $1$ 步移动后获取智能体的状态 `s`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_next_s(pi, s):\n",
    "    direction = [\"up\", \"right\", \"down\", \"left\"]\n",
    "\n",
    "    next_direction = np.random.choice(direction, p=pi[s, :])\n",
    "    # 根据概率 pi[s, :] 选择 direction\n",
    "    s_next = 0\n",
    "    if next_direction == \"up\":\n",
    "        s_next = s - 3\n",
    "    elif next_direction == \"right\":\n",
    "        s_next = s + 1\n",
    "    elif next_direction == \"down\":\n",
    "        s_next = s + 3\n",
    "    elif next_direction == \"left\":\n",
    "        s_next = s - 1\n",
    "\n",
    "    print(str(s) + \" to \" + str(s_next))\n",
    "    return s_next"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 开搞开搞啊\n",
    "\n",
    "def goal_maze(pi):\n",
    "    s = 0\n",
    "    state_history = [0]\n",
    "\n",
    "    while (1):\n",
    "        next_s = get_next_s(pi, s)\n",
    "        state_history.append(next_s)\n",
    "\n",
    "        if next_s == 8:\n",
    "            break\n",
    "        else:\n",
    "            s = next_s\n",
    "    \n",
    "    return state_history"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "state_history = goal_maze(pi_0)\n",
    "print(state_history)\n",
    "print(\"求解迷宫路径此次走了 \"+ str(len(state_history) - 1) + \" 步。\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化\n",
    "from matplotlib import animation\n",
    "from IPython.display import HTML\n",
    "\n",
    "def init():\n",
    "    '''出书画背景图像'''\n",
    "    line.set_data([], [])\n",
    "    return (line, )\n",
    "\n",
    "def animate(i):\n",
    "    '''每一帧的画面内容'''\n",
    "    state = state_history[i]\n",
    "    x = (state % 3) + 0.5\n",
    "    y = 2.5 - int(state / 3)\n",
    "    line.set_data(x, y)\n",
    "    return (line, )\n",
    "\n",
    "# 用初始化函数和绘图函数来生成动画\n",
    "anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(state_history), interval=200, repeat=False)\n",
    "HTML(anim.to_jshtml())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "策略梯度法\n",
    "常用 softmax 函数：\n",
    "$$\n",
    "    P(\\theta_i)=\\frac{exp(\\beta\\theta_i)}{exp(\\beta\\theta_1)+exp(\\beta\\theta_2)+\\cdots}=\\frac{exp(\\beta\\theta_i)}{\\sum_{j=1}^{N_a}exp(\\beta\\theta_j)}\n",
    "$$\n",
    "根据 softmax 函数 从参数 $\\theta$ 来求得策略 $\\pi_\\theta(s,a)$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def softmax_convert_into_pi_from_theta(theta):\n",
    "    '''根据 softmax 函数计算比率'''\n",
    "\n",
    "    beta = 1.0\n",
    "    [m, n] = theta.shape    # 获取 θ 的矩阵大小\n",
    "    pi = np.zeros((m, n))   # pi定义为全 0 矩阵\n",
    "\n",
    "    exp_theta = np.exp(beta * theta)    # 将 theta 转换为 exp(theta)\n",
    "\n",
    "    for i in range(0, m):   # 计算百分比\n",
    "        pi[i, :] = exp_theta[i, :] / np.nansum(exp_theta[i, :])\n",
    "        # 用 softmax 计算比率\n",
    "\n",
    "    pi = np.nan_to_num(pi)  # 将 nan 转换为 0\n",
    "    pi[1, :]\n",
    "\n",
    "    return pi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 执行\n",
    "pi_0 = softmax_convert_into_pi_from_theta(theta_0)\n",
    "print(pi_0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义求取动作 $a$ 以及 $1$ 步移动后的状态 $s$ 的函数\n",
    "\n",
    "def get_action_and_next_s(pi, s):\n",
    "    direction = [\"up\", \"right\", \"down\", \"left\"]\n",
    "    # 根据 pi[s, :] 的概率来寻找 direction\n",
    "    next_direction = np.random.choice(direction, p=pi[s, :])\n",
    "    # 根据概率 pi[s, :] 选择 direction\n",
    "    s_next = 0\n",
    "    action = 0\n",
    "    if next_direction == \"up\":\n",
    "        action = 0\n",
    "        s_next = s - 3\n",
    "    elif next_direction == \"right\":\n",
    "        action = 1\n",
    "        s_next = s + 1\n",
    "    elif next_direction == \"down\":\n",
    "        action = 2\n",
    "        s_next = s + 3\n",
    "    elif next_direction == \"left\":\n",
    "        action = 3\n",
    "        s_next = s - 1\n",
    "\n",
    "    # print(str(s) + \" to \" + str(s_next))\n",
    "    return [action, s_next]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义解决迷宫问题的函数，输出状态和动作\n",
    "\n",
    "def goal_maze_ret_s_a(pi):\n",
    "    s = 0\n",
    "    s_a_history = [[0, np.nan]]\n",
    "\n",
    "    while (1):\n",
    "        [action, next_s] = get_action_and_next_s(pi, s)\n",
    "        s_a_history[-1][1] = action\n",
    "        # 代入当前状态（目前最后一个状态 index=-1）的动作\n",
    "\n",
    "        s_a_history.append([next_s, np.nan])\n",
    "        # 代入下一个状态，由于动作未知，用 nan 表示\n",
    "\n",
    "        if next_s == 8:\n",
    "            break\n",
    "        else:\n",
    "            s = next_s\n",
    "    \n",
    "    return s_a_history"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "s_a_history = goal_maze_ret_s_a(pi_0)\n",
    "print(s_a_history)\n",
    "print(\"求解迷宫问题需要走 \" + str(len(s_a_history) - 1) + \" 步\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来，根据策略梯度法实现策略更新部分的代码，\n",
    "\n",
    "参数 $\\theta$ 根据以下公式更新：\n",
    "$$\n",
    "    \\theta_{s_i,a_j}=\\theta_{s_i,a_j}+\\eta\\cdot\\Delta\\theta_{s,a_j}\n",
    "$$\n",
    "$$\n",
    "    \\Delta\\theta_{s,a_j}=\\{N(s_i,a_j)-P(s_i,a_j)N(s_i,a)\\}/T\n",
    "$$\n",
    "\n",
    "$\\theta$ 是概率，$s_i$ 是状态，$a_j$ 是动作。$\\eta$ 被称为学习系数，$N$ 是次数，$P$ 是概率，$T$ 是总步数。"
   ]
  },
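   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A worked toy example of the update (the numbers are made up, not from an actual episode): suppose state $s_i$ was visited $N(s_i,a)=4$ times during an episode of $T=10$ steps, action $a_j$ was chosen $N(s_i,a_j)=3$ of those times, and the current policy gives $P(s_i,a_j)=0.5$. Then $\\Delta\\theta=(3-0.5\\cdot 4)/10=0.1$: $\\theta$ increases because the action was taken more often than the current policy predicts."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Toy illustration of the update rule (all values are made up)\n",
     "N_i, N_ij, P_ij, T_total, eta = 4, 3, 0.5, 10, 0.1\n",
     "delta = (N_ij - P_ij * N_i) / T_total\n",
     "print(delta)        # 0.1\n",
     "print(eta * delta)  # 0.01, the contribution added to theta"
    ]
   },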
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义该表达式为 update_theta\n",
    "\n",
    "def update_theta(theta, pi, s_a_history):\n",
    "    eta = 0.1\n",
    "    T = len(s_a_history) - 1\n",
    "\n",
    "    [m, n] = theta.shape\n",
    "    delta_theta = theta.copy()\n",
    "    # 生成初始的 delta_thata\n",
    "\n",
    "    # 求 delta_theta \n",
    "    for i in range(0, m):\n",
    "        for j in range(0, n):\n",
    "            if not(np.isnan(theta[i, j])):\n",
    "                # theta 不是 nan 时\n",
    "                SA_i = [SA for SA in s_a_history if SA[0] == 1]\n",
    "                # 从列表中取出状态 i\n",
    "                SA_ij = [SA for SA in s_a_history if SA == [i, j]]\n",
    "                # 取出状态 i 下应该采取的动作 j\n",
    "\n",
    "                N_i = len(SA_i) # 状态 i 下动作的总次数\n",
    "                N_ij = len(SA_ij)   # 状态 i 下采取动作 j 的次数\n",
    "                delta_theta[i, j] = (N_ij - pi[i, j] * N_i) / T\n",
    "\n",
    "    new_theta = theta + eta * delta_theta\n",
    "\n",
    "    return new_theta"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "new_theta = update_theta(theta_0, pi_0 ,s_a_history)\n",
    "pi = softmax_convert_into_pi_from_theta(new_theta)\n",
    "print(pi)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "outputPrepend"
    ]
   },
   "outputs": [],
   "source": [
    "# 策略梯度法求解迷宫问题\n",
    "\n",
    "stop_epsilon = 10**-4\n",
    "\n",
    "theta = theta_0\n",
    "pi = pi_0\n",
    "\n",
    "is_continue = True\n",
    "count = 1\n",
    "\n",
    "while is_continue:\n",
    "    s_a_history = goal_maze_ret_s_a(pi) # 由策略 pi 索索迷宫搜索历史\n",
    "    new_theta = update_theta(theta, pi, s_a_history)    # 更新参数 θ\n",
    "    new_pi = softmax_convert_into_pi_from_theta(new_theta)  # 更新参数 pi\n",
    "\n",
    "    print(np.sum(np.abs(new_pi - pi)))  # 输出策略的变化\n",
    "    print(\"第 \" + str(count) + \"次求解迷宫问题所需的步数：\" + str(len(s_a_history) - 1))\n",
    "\n",
    "    if np.sum(np.abs(new_pi - pi)) < stop_epsilon:\n",
    "        is_continue = False\n",
    "    else:\n",
    "        theta = new_theta\n",
    "        pi = new_pi\n",
    "        count = count + 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# 确定最终策略\n",
    "np.set_printoptions(precision=3, suppress=True) # 设置有效位数 3，不显示指数\n",
    "print(pi)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化\n",
    "from matplotlib import animation\n",
    "from IPython.display import HTML\n",
    "\n",
    "def init():\n",
    "    '''出书画背景图像'''\n",
    "    line.set_data([], [])\n",
    "    return (line, )\n",
    "\n",
    "def animate(i):\n",
    "    '''每一帧的画面内容'''\n",
    "    state = s_a_history[i][0]\n",
    "    x = (state % 3) + 0.5\n",
    "    y = 2.5 - int(state / 3)\n",
    "    line.set_data(x, y)\n",
    "    return (line, )\n",
    "\n",
    "# 用初始化函数和绘图函数来生成动画\n",
    "anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(s_a_history), interval=200, repeat=False)\n",
    "HTML(anim.to_jshtml())"
   ]
   }
 ]
}