{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "55add111",
   "metadata": {},
   "source": [
    "# OpenAI Reinforcement Learning, Lesson 4 Written Assignment\n",
    "Student ID: 114488  \n",
    "\n",
    "**Assignment:**  \n",
    "1. Write a program that implements value iteration.  \n",
    "2. Use any of the methods covered in class to train an agent on a reinforcement learning problem."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba36ba0e",
   "metadata": {},
   "source": [
    "## 1) Assignment 1\n",
    "Write a program that implements value iteration:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af40422f",
   "metadata": {},
   "source": [
    "Zhou Zhihua's book (p. 382) describes the value iteration algorithm as follows:  \n",
    "**Input:**  \n",
    "1. MDP quadruple $E=<X,A,P,R>$;  \n",
    "2. cumulative reward parameter $T$;  \n",
    "3. convergence threshold $\\theta$.  \n",
    "**Procedure:**  \n",
    "1: $\\forall x\\in X:V(x)=0$;  \n",
    "2: **for** t=1,2,... **do**  \n",
    "3: &emsp;$\\forall x \\in X:V'(x)=\\operatorname{max}_{a\\in A}\\sum_{x'\\in X}P_{x\\rightarrow x'}^a\\big(\\frac{1}{t}R_{x\\rightarrow x'}^a+\\frac{t-1}{t}V(x')\\big)$;  \n",
    "4: &emsp;**if** $\\operatorname{max}_{x \\in X}|V(x)-V'(x)|<\\theta$ **then**  \n",
    "5: &emsp;&emsp;**break**  \n",
    "6: &emsp;**else**  \n",
    "7: &emsp;&emsp;$V=V'$  \n",
    "8: &emsp;**end if**  \n",
    "9: **end for**  \n",
    "**Output:** policy $\\pi(x)=\\operatorname{arg\\,max}_{a\\in A}Q(x,a)$"
   ]
  },
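  {
   "cell_type": "markdown",
   "id": "vi0toy01",
   "metadata": {},
   "source": [
    "Before applying the algorithm to the maze, the Bellman backup in step 3 can be illustrated on a hypothetical two-state MDP (the states, transitions and rewards below are made up purely for illustration, and the sketch uses the usual discounted target $r+\\gamma V(x')$ rather than the book's $T$-step average form):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# P[s][a] = list of (prob, next_state, reward); a toy 2-state, 2-action MDP\n",
    "P = [\n",
    "    [[(1.0, 0, 0.0)], [(1.0, 1, 1.0)]],  # state 0: stay / move to 1 with reward 1\n",
    "    [[(1.0, 0, 0.0)], [(1.0, 1, 0.0)]],  # state 1: back to 0 / stay\n",
    "]\n",
    "gamma, theta = 0.9, 1e-8\n",
    "V = np.zeros(2)\n",
    "while True:\n",
    "    # step 3: one Bellman backup over all states\n",
    "    V_new = np.array([\n",
    "        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])\n",
    "            for a in range(2))\n",
    "        for s in range(2)\n",
    "    ])\n",
    "    # step 4: stop once the largest change is below theta\n",
    "    if np.max(np.abs(V_new - V)) < theta:\n",
    "        break\n",
    "    V = V_new\n",
    "# output: the greedy policy with respect to the converged V\n",
    "policy = [int(np.argmax([sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])\n",
    "                         for a in range(2)])) for s in range(2)]\n",
    "print(V, policy)\n",
    "```\n",
    "\n",
    "State 0 should prefer the action that earns the reward, so the greedy policy picks action 1 there and action 0 in state 1."
   ]
  },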
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "3650aa90",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "class Agent:\n",
    "    def __init__(self, env):\n",
    "        self.env = env\n",
    "\n",
    "    def policy_evaluation(self, policy):  # the iterative computation of V in the value iteration algorithm\n",
    "        V = np.zeros(self.env.nS)\n",
    "        THETA = 0.0001\n",
    "        delta = float(\"inf\")\n",
    "\n",
    "        while delta > THETA:\n",
    "            delta = 0\n",
    "            for s in range(self.env.nS):\n",
    "                expected_value = 0\n",
    "                for action, action_prob in enumerate(policy[s]):\n",
    "                    prob, next_state, reward, done = self.env.P[s][action]\n",
    "                    # DISCOUNT_FACTOR is a module-level constant defined before optimize() is called\n",
    "                    expected_value += action_prob * prob * (reward + DISCOUNT_FACTOR * V[next_state - 1])\n",
    "                delta = max(delta, np.abs(V[s] - expected_value))\n",
    "                V[s] = expected_value\n",
    "\n",
    "        return V\n",
    "\n",
    "    def next_best_action(self, s, V):  # the action a that maximizes the Q function in the value iteration algorithm\n",
    "        action_values = np.zeros(self.env.nA)\n",
    "        for a in range(self.env.nA):\n",
    "            prob, next_state, reward, done = self.env.P[s][a]\n",
    "            action_values[a] += prob * (reward + DISCOUNT_FACTOR * V[next_state - 1])\n",
    "        return np.argmax(action_values), np.max(action_values)\n",
    "\n",
    "    def optimize(self):  # value iteration optimization\n",
    "        # start from a policy that always picks action 1 ('e') in every state\n",
    "        policy = np.tile(np.eye(self.env.nA)[1], (self.env.nS, 1))\n",
    "\n",
    "        is_stable = False\n",
    "        round_num = 0\n",
    "\n",
    "        while not is_stable:\n",
    "            is_stable = True\n",
    "\n",
    "            print(\"\\nRound Number:\" + str(round_num))\n",
    "            round_num += 1\n",
    "\n",
    "            print(\"Current Policy\")\n",
    "            print(np.reshape([self.env.get_action_name(entry) for entry in [np.argmax(policy[s]) for s in range(self.env.nS)]], self.env.shape))\n",
    "\n",
    "            V = self.policy_evaluation(policy)\n",
    "            print(\"Expected Value according to Policy Evaluation\")\n",
    "            print(np.reshape(V, self.env.shape))\n",
    "\n",
    "            for s in range(self.env.nS):\n",
    "                action_by_policy = np.argmax(policy[s])\n",
    "                best_action, best_action_value = self.next_best_action(s, V)\n",
    "                policy[s] = np.eye(self.env.nA)[best_action]\n",
    "                if action_by_policy != best_action:\n",
    "                    is_stable = False\n",
    "\n",
    "        policy = [np.argmax(policy[s]) for s in range(self.env.nS)]\n",
    "        return policy"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d70623dc",
   "metadata": {},
   "source": [
    "## 2) Assignment 2\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "946de0b5",
   "metadata": {},
   "source": [
    "### 2.1) Training last week's maze problem with value iteration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "ced27263",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Round Number:0\n",
      "Current Policy\n",
      "[['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']]\n",
      "Expected Value according to Policy Evaluation\n",
      "[[ 0.        ]\n",
      " [ 0.        ]\n",
      " [ 0.        ]\n",
      " [ 0.        ]\n",
      " [ 0.        ]\n",
      " [ 0.        ]\n",
      " [ 0.        ]\n",
      " [-0.02205225]\n",
      " [-0.07425   ]\n",
      " [-0.25      ]\n",
      " [ 0.        ]\n",
      " [ 0.0441045 ]\n",
      " [ 0.09801   ]\n",
      " [ 0.33      ]\n",
      " [ 0.        ]]\n",
      "\n",
      "Round Number:1\n",
      "Current Policy\n",
      "[['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['s']\n",
      " ['s']\n",
      " ['n']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['w']]\n",
      "Expected Value according to Policy Evaluation\n",
      "[[0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.03359812]\n",
      " [0.0857039 ]\n",
      " [0.        ]\n",
      " [0.05090624]\n",
      " [0.11312915]\n",
      " [0.38090812]\n",
      " [0.17140865]]\n",
      "\n",
      "Round Number:2\n",
      "Current Policy\n",
      "[['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['n']\n",
      " ['s']\n",
      " ['n']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['w']]\n",
      "Expected Value according to Policy Evaluation\n",
      "[[0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.        ]\n",
      " [0.02545312]\n",
      " [0.        ]\n",
      " [0.00997588]\n",
      " [0.03359812]\n",
      " [0.0857039 ]\n",
      " [0.05090812]\n",
      " [0.05090624]\n",
      " [0.11312915]\n",
      " [0.38090812]\n",
      " [0.17140865]]\n",
      "\n",
      "Round Number:3\n",
      "Current Policy\n",
      "[['n']\n",
      " ['n']\n",
      " ['s']\n",
      " ['n']\n",
      " ['s']\n",
      " ['s']\n",
      " ['w']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['w']]\n",
      "Expected Value according to Policy Evaluation\n",
      "[[0.        ]\n",
      " [0.        ]\n",
      " [0.00755749]\n",
      " [0.        ]\n",
      " [0.00447986]\n",
      " [0.02545312]\n",
      " [0.00755958]\n",
      " [0.00997588]\n",
      " [0.03359812]\n",
      " [0.0857039 ]\n",
      " [0.05090812]\n",
      " [0.05090624]\n",
      " [0.11312915]\n",
      " [0.38090812]\n",
      " [0.17140865]]\n",
      "\n",
      "Round Number:4\n",
      "Current Policy\n",
      "[['s']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['w']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['w']]\n",
      "Expected Value according to Policy Evaluation\n",
      "[[0.00201594]\n",
      " [0.00340087]\n",
      " [0.00755958]\n",
      " [0.00340181]\n",
      " [0.00448915]\n",
      " [0.02545406]\n",
      " [0.00755986]\n",
      " [0.00997864]\n",
      " [0.03359936]\n",
      " [0.08570433]\n",
      " [0.05090837]\n",
      " [0.05090812]\n",
      " [0.11312971]\n",
      " [0.38090837]\n",
      " [0.17140877]]\n",
      "\n",
      "Best Policy\n",
      "[['s']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['w']\n",
      " ['e']\n",
      " ['s']\n",
      " ['s']\n",
      " ['s']\n",
      " ['e']\n",
      " ['e']\n",
      " ['e']\n",
      " ['w']]\n",
      "start: 8\n",
      "action: e\n",
      "9 0.0 False {}\n",
      "action: s\n",
      "13 0.0 False {}\n",
      "action: e\n",
      "14 0.0 False {}\n",
      "action: e\n",
      "15 1.0 True {}\n",
      "successful!\n"
     ]
    }
   ],
   "source": [
    "import gym\n",
    "\n",
    "DISCOUNT_FACTOR = 0.9\n",
    "\n",
    "env = gym.make('MazeWorld-v0')\n",
    "\n",
    "agent = Agent(env)\n",
    "policy = agent.optimize()\n",
    "print(\"\\nBest Policy\")\n",
    "print(np.reshape([env.get_action_name(entry) for entry in policy], env.shape))\n",
    "\n",
    "observation = env.reset()\n",
    "print('start:',observation)\n",
    "for _ in range(1000):\n",
    "    env.render()\n",
    "    action = env.get_action_name(policy[observation-1])\n",
    "    print('action:',action)\n",
    "    observation, reward, done, info = env.step(action)\n",
    "    print(observation, reward, done, info)\n",
    "    if done:\n",
    "        if reward==1:\n",
    "            print('successful!')\n",
    "        break\n",
    "#         observation = env.reset()\n",
    "#         print('start:',observation)\n",
    "env.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b0c297a",
   "metadata": {},
   "source": [
    "### 2.2) Solving the maze problem with Q-learning"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4b17cf7",
   "metadata": {},
   "source": [
    "Zhou Zhihua's book (p. 388) describes the Q-learning algorithm as follows:  \n",
    "**Input:**  \n",
    "* environment $E$;\n",
    "* action space $A$;\n",
    "* initial state $x_0$;\n",
    "* reward discount $\\gamma$;\n",
    "* update step size $\\alpha$.  \n",
    "\n",
    "**Procedure:**  \n",
    "1: $Q(x,a)=0,\\pi(x,a)=\\frac{1}{|A(x)|}$;  \n",
    "2: $x=x_0$;  \n",
    "3: **for** t=1,2,... **do**  \n",
    "4: &emsp;$r,x'=$ the reward and next state obtained by executing action $\\pi^{\\epsilon}(x)$ in $E$;  \n",
    "5: &emsp;$a'=\\pi(x')$;  \n",
    "6: &emsp;$Q(x,a)=Q(x,a)+\\alpha\\big(r+\\gamma Q(x',a')-Q(x,a)\\big)$;  \n",
    "7: &emsp;$\\pi(x)=\\operatorname{arg\\,max}_{a''}Q(x,a'')$;  \n",
    "8: &emsp;$x=x',a=a'$  \n",
    "9: **end for**  \n",
    "\n",
    "**Output:** policy $\\pi$"
   ]
  },
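  {
   "cell_type": "markdown",
   "id": "ql0upd01",
   "metadata": {},
   "source": [
    "A minimal numeric sketch of the update in line 6, on a single hypothetical transition (all names and numbers below are made up for illustration):\n",
    "\n",
    "```python\n",
    "gamma, alpha = 0.8, 0.2\n",
    "Q = {('x', 'a'): 0.5, ('x2', 'a1'): 1.0}  # current estimates; ('x2', 'a1') is the greedy pair at the next state\n",
    "r = 0.0  # reward observed for taking a in x\n",
    "# target = r + gamma * Q(x', a'); with a' greedy this equals r + gamma * max_a Q(x', a)\n",
    "target = r + gamma * Q[('x2', 'a1')]\n",
    "Q[('x', 'a')] += alpha * (target - Q[('x', 'a')])\n",
    "print(Q[('x', 'a')])  # 0.5 + 0.2 * (0.8 - 0.5) = 0.56\n",
    "```\n",
    "\n",
    "In this formulation $\\pi$ is kept greedy (line 7), so the $a'=\\pi(x')$ used in line 6 is the greedy action at $x'$; the update target is therefore $r+\\gamma\\max_{a'}Q(x',a')$, which is what makes this Q-learning rather than Sarsa."
   ]
  },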
  {
   "cell_type": "markdown",
   "id": "914a0006",
   "metadata": {},
   "source": [
    "The algorithm is implemented as follows (the key part is the qlearning function):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "4833d610",
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "import gym\n",
    "import random\n",
    "import time\n",
    "from gym import wrappers\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "random.seed(0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "1c0c0eb8",
   "metadata": {},
   "outputs": [],
   "source": [
    "maze = gym.make('MazeWorld-v0')  # the custom maze problem\n",
    "states = maze.env.getStates()    # the maze's state space, corresponding to E in the algorithm's input\n",
    "actions = maze.env.getAction()   # the maze's action space, corresponding to A in the algorithm's input\n",
    "gamma = maze.env.getGamma()      # the discount factor, corresponding to gamma in the algorithm's input\n",
    "# used to measure the gap between the current policy and the optimal policy\n",
    "best = dict()  # stores the optimal action-value function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "61e82408",
   "metadata": {},
   "outputs": [],
   "source": [
    "# greedy policy\n",
    "def greedy(qfunc, state):\n",
    "    amax = 0\n",
    "    key = \"%d_%s\" % (state, actions[0])\n",
    "    qmax = qfunc[key]\n",
    "    for i in range(len(actions)):  # scan the action space for the largest action value\n",
    "        key = \"%d_%s\" % (state, actions[i])\n",
    "        q = qfunc[key]\n",
    "        if qmax < q:\n",
    "            qmax = q\n",
    "            amax = i\n",
    "    return actions[amax]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "18674c4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# epsilon-greedy policy\n",
    "def epsilon_greedy(qfunc, state, epsilon):\n",
    "    amax = 0\n",
    "    key = \"%d_%s\" % (state, actions[0])\n",
    "    qmax = qfunc[key]\n",
    "    for i in range(len(actions)):  # scan the action space for the largest action value\n",
    "        key = \"%d_%s\" % (state, actions[i])\n",
    "        q = qfunc[key]\n",
    "        if qmax < q:\n",
    "            qmax = q\n",
    "            amax = i\n",
    "    # action probabilities: extra 1-epsilon mass on the greedy action, epsilon spread uniformly\n",
    "    pro = [0.0 for i in range(len(actions))]\n",
    "    pro[amax] += 1 - epsilon\n",
    "    for i in range(len(actions)):\n",
    "        pro[i] += epsilon / len(actions)\n",
    "\n",
    "    # sample an action from the distribution (inverse-CDF sampling)\n",
    "    r = random.random()\n",
    "    s = 0.0\n",
    "    for i in range(len(actions)):\n",
    "        s += pro[i]\n",
    "        if s >= r:\n",
    "            return actions[i]\n",
    "    return actions[len(actions) - 1]"
   ]
  },
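  {
   "cell_type": "markdown",
   "id": "eg0chk01",
   "metadata": {},
   "source": [
    "As a sanity check of epsilon_greedy's probabilities: with 4 actions and epsilon = 0.2, the greedy action should be chosen with probability $1-0.2+0.2/4=0.85$ and each other action with $0.05$. A standalone sketch of the same sampling scheme, independent of the maze environment (the choice of index 2 as the greedy action is arbitrary):\n",
    "\n",
    "```python\n",
    "import random\n",
    "random.seed(0)\n",
    "\n",
    "def sample(probs):\n",
    "    # inverse-CDF sampling, the same scheme epsilon_greedy uses\n",
    "    r, s = random.random(), 0.0\n",
    "    for i, p in enumerate(probs):\n",
    "        s += p\n",
    "        if s >= r:\n",
    "            return i\n",
    "    return len(probs) - 1\n",
    "\n",
    "epsilon, n = 0.2, 4\n",
    "probs = [epsilon / n] * n\n",
    "probs[2] += 1 - epsilon  # suppose index 2 is the greedy action\n",
    "counts = [0] * n\n",
    "for _ in range(100000):\n",
    "    counts[sample(probs)] += 1\n",
    "print([c / 100000 for c in counts])  # roughly [0.05, 0.05, 0.85, 0.05]\n",
    "```"
   ]
  },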
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "0f1f1f98",
   "metadata": {},
   "outputs": [],
   "source": [
    "def qlearning(num_iter1, alpha, epsilon):\n",
    "    '''\n",
    "    The qlearning algorithm needs five inputs:\n",
    "    E: global variable\n",
    "    A: global variable\n",
    "    initial state x0: obtained inside this function via maze.reset()\n",
    "    gamma: global variable\n",
    "    alpha: function parameter\n",
    "    num_iter1: the number of iterations of the algorithm's for loop\n",
    "    '''\n",
    "    x = []\n",
    "    y = []\n",
    "    qfunc = dict()  # the action-value function, stored as a dictionary\n",
    "    # initialize the action-value function to 0\n",
    "    for s in states:\n",
    "        for a in actions:\n",
    "            key = \"%d_%s\" % (s, a)\n",
    "            qfunc[key] = 0.0\n",
    "    for iter1 in range(num_iter1):\n",
    "#         x.append(iter1)\n",
    "#         y.append(compute_error(qfunc))\n",
    "\n",
    "        # initialize the starting state\n",
    "        s = maze.reset()\n",
    "        a = actions[int(random.random() * len(actions))]\n",
    "        t = False\n",
    "        count = 0\n",
    "        while t == False and count < 100:\n",
    "            key = \"%d_%s\" % (s, a)\n",
    "            # interact with the environment once; get the new state and the reward\n",
    "            s1, r, t, info = maze.step(a)\n",
    "            # the greedy action at s1 (the max in the Q-learning target)\n",
    "            a1 = greedy(qfunc, s1)\n",
    "            key1 = \"%d_%s\" % (s1, a1)\n",
    "            # Q-learning update of the value function\n",
    "            qfunc[key] = qfunc[key] + alpha * (r + gamma * qfunc[key1] - qfunc[key])\n",
    "            # move to the next state\n",
    "            s = s1\n",
    "            a = epsilon_greedy(qfunc, s1, epsilon)\n",
    "            count += 1\n",
    "#     plt.plot(x,y,\"-.,\",label =\"q alpha=%2.1f epsilon=%2.1f\"%(alpha,epsilon))\n",
    "    return qfunc"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c099af6f",
   "metadata": {},
   "source": [
    "The maze problem is then solved as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "de980a5f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "the qfunc of key (1_n) is 0.248934\n",
      "the qfunc of key (1_e) is 0.327680\n",
      "the qfunc of key (1_s) is 0.187581\n",
      "the qfunc of key (1_w) is 0.259221\n",
      "the qfunc of key (2_n) is 0.305968\n",
      "the qfunc of key (2_e) is 0.409600\n",
      "the qfunc of key (2_s) is 0.279559\n",
      "the qfunc of key (2_w) is 0.243967\n",
      "the qfunc of key (3_n) is 0.398107\n",
      "the qfunc of key (3_e) is 0.320272\n",
      "the qfunc of key (3_s) is 0.512000\n",
      "the qfunc of key (3_w) is 0.314293\n",
      "the qfunc of key (4_n) is 0.299212\n",
      "the qfunc of key (4_e) is 0.285410\n",
      "the qfunc of key (4_s) is 0.320682\n",
      "the qfunc of key (4_w) is 0.409576\n",
      "the qfunc of key (5_n) is 0.262075\n",
      "the qfunc of key (5_e) is 0.193123\n",
      "the qfunc of key (5_s) is 0.201479\n",
      "the qfunc of key (5_w) is 0.181894\n",
      "the qfunc of key (6_n) is 0.398424\n",
      "the qfunc of key (6_e) is 0.405907\n",
      "the qfunc of key (6_s) is 0.640000\n",
      "the qfunc of key (6_w) is 0.493114\n",
      "the qfunc of key (7_n) is 0.280143\n",
      "the qfunc of key (7_e) is 0.374472\n",
      "the qfunc of key (7_s) is -0.964816\n",
      "the qfunc of key (7_w) is 0.511966\n",
      "the qfunc of key (8_n) is 0.194654\n",
      "the qfunc of key (8_e) is 0.511010\n",
      "the qfunc of key (8_s) is -0.964816\n",
      "the qfunc of key (8_w) is 0.279091\n",
      "the qfunc of key (9_n) is 0.476119\n",
      "the qfunc of key (9_e) is 0.613376\n",
      "the qfunc of key (9_s) is 0.640000\n",
      "the qfunc of key (9_w) is 0.382607\n",
      "the qfunc of key (10_n) is 0.510271\n",
      "the qfunc of key (10_e) is -0.998066\n",
      "the qfunc of key (10_s) is 0.800000\n",
      "the qfunc of key (10_w) is 0.512000\n",
      "the qfunc of key (11_n) is 0.000000\n",
      "the qfunc of key (11_e) is 0.000000\n",
      "the qfunc of key (11_s) is 0.000000\n",
      "the qfunc of key (11_w) is 0.000000\n",
      "the qfunc of key (12_n) is 0.000000\n",
      "the qfunc of key (12_e) is 0.000000\n",
      "the qfunc of key (12_s) is 0.000000\n",
      "the qfunc of key (12_w) is 0.000000\n",
      "the qfunc of key (13_n) is 0.476802\n",
      "the qfunc of key (13_e) is 0.800000\n",
      "the qfunc of key (13_s) is 0.625588\n",
      "the qfunc of key (13_w) is -0.988471\n",
      "the qfunc of key (14_n) is 0.639605\n",
      "the qfunc of key (14_e) is 1.000000\n",
      "the qfunc of key (14_s) is 0.799568\n",
      "the qfunc of key (14_w) is 0.630626\n",
      "the qfunc of key (15_n) is 0.000000\n",
      "the qfunc of key (15_e) is 0.000000\n",
      "the qfunc of key (15_s) is 0.000000\n",
      "the qfunc of key (15_w) is 0.000000\n",
      "the learned policy is:\n",
      "the policy of state 1 is (e)\n",
      "the policy of state 2 is (e)\n",
      "the policy of state 3 is (s)\n",
      "the policy of state 4 is (w)\n",
      "the policy of state 5 is (n)\n",
      "the policy of state 6 is (s)\n",
      "the policy of state 7 is (w)\n",
      "the policy of state 8 is (e)\n",
      "the policy of state 9 is (s)\n",
      "the policy of state 10 is (s)\n",
      "the state 11 is terminate_states\n",
      "the state 12 is terminate_states\n",
      "the policy of state 13 is (e)\n",
      "the policy of state 14 is (e)\n",
      "the state 15 is terminate_states\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4 w\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "1 e\n",
      "2 e\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "13 e\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "4 w\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "5 n\n",
      "1 e\n",
      "2 e\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "4 w\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "1 e\n",
      "2 e\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "4 w\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "1 e\n",
      "2 e\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "8 e\n",
      "9 s\n",
      "13 e\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "7 w\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "4 w\n",
      "3 s\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "8 e\n",
      "9 s\n",
      "13 e\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n",
      "6 s\n",
      "10 s\n",
      "14 e\n",
      "15\n",
      "reach the terminate state 15\n"
     ]
    }
   ],
   "source": [
    "sleeptime=0.5\n",
    "terminate_states= maze.env.getTerminate_states()\n",
    "qfunc = dict()\n",
    "qfunc = qlearning(num_iter1=500, alpha=0.2, epsilon=0.2)\n",
    "\n",
    "#学到的值函数\n",
    "for s in states:\n",
    "    for a in actions:\n",
    "        key = \"%d_%s\"%(s,a)\n",
    "        print(\"the qfunc of key (%s) is %f\" %(key, qfunc[key]) )\n",
    "        qfunc[key]\n",
    "#学到的策略为：\n",
    "print(\"the learned policy is:\")\n",
    "for i in range(len(states)):\n",
    "    if states[i] in terminate_states:\n",
    "        print(\"the state %d is terminate_states\"%(states[i]))\n",
    "    else:\n",
    "        print(\"the policy of state %d is (%s)\" % (states[i], greedy(qfunc, states[i])))\n",
    "# 设置系统初始状态\n",
    "s0 = 1\n",
    "maze.env.setAction(s0)\n",
    "# 对训练好的策略进行测试\n",
    "maze = wrappers.Monitor(maze, './robotfindgold', force=True)  # 记录回放动画\n",
    "#随机初始化，寻找金币的路径\n",
    "for i in range(20):\n",
    "    #随机初始化\n",
    "    s0 = maze.reset()\n",
    "    maze.render()\n",
    "    time.sleep(sleeptime)\n",
    "    t = False\n",
    "    count = 0\n",
    "    #判断随机状态是否在终止状态中\n",
    "    if s0 in terminate_states:\n",
    "        print(\"reach the terminate state %d\" % (s0))\n",
    "    else:\n",
    "        while False == t and count < 100:\n",
    "            a1 = greedy(qfunc, s0)\n",
    "            print(s0, a1)\n",
    "            maze.render()\n",
    "            time.sleep(sleeptime)\n",
    "            key = \"%d_%s\" % (s0, a)\n",
    "            # 与环境进行一次交互，从环境中得到新的状态及回报\n",
    "            s1, r, t, i = maze.step(a1)\n",
    "            if True == t:\n",
    "                #打印终止状态\n",
    "                print(s1)\n",
    "                maze.render()\n",
    "                time.sleep(sleeptime)\n",
    "                print(\"reach the terminate state %d\" % (s1))\n",
    "            # s1处的最大动作\n",
    "            s0 = s1\n",
    "            count += 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "0506e2ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "maze.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80e1de35",
   "metadata": {},
   "source": [
    "### 2.3) Appendix: maze implementation\n",
    "The week 3 maze code was modified again so that it supports this week's assignment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "f6eea6dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "import numpy\n",
    "import random\n",
    "from gym import spaces\n",
    "from gym.utils import seeding  # needed by seed()\n",
    "import gym\n",
    "\n",
    "# logger = logging.getLogger(__name__)\n",
    "\n",
    "class MazeEnv(gym.Env):\n",
    "    metadata = {\n",
    "        'render.modes': ['human', 'rgb_array'],\n",
    "        'video.frames_per_second': 2\n",
    "    }\n",
    "\n",
    "    def __init__(self):\n",
    "        # state space: the grid has 16 cells, but only 15 are states (the pillar cell is not reachable)\n",
    "        self.states = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  # state space\n",
    "        self.nS = len(self.states)\n",
    "        self.shape = (15, 1)  # shape used when printing V and the policy as a column\n",
    "        self.x = [105,235,365,495,105,365,495,105,235,365,495,105,235,365,495]  # render x-coordinates\n",
    "        self.y = [320,320,320,320,240,240,240,160,160,160,160, 80, 80, 80, 80]  # render y-coordinates\n",
    "        # terminal states\n",
    "        self.terminate_states = dict()  # terminal states, stored as a dictionary\n",
    "        self.terminate_states[11] = 1\n",
    "        self.terminate_states[12] = 1\n",
    "        self.terminate_states[15] = 1\n",
    "        \n",
    "        # action space\n",
    "        self.actions = ['n','e','s','w']\n",
    "        self.nA = len(self.actions)\n",
    "\n",
    "        # rewards\n",
    "        self.rewards = dict()  # rewards, stored as a dictionary\n",
    "        self.rewards['7_s'] = -1.0\n",
    "        self.rewards['10_e'] = -1.0\n",
    "        self.rewards['8_s'] = -1.0\n",
    "        self.rewards['13_w'] = -1.0\n",
    "        self.rewards['14_e'] = 1.0\n",
    "        # state transitions\n",
    "        self.t = dict()  # state transitions, stored as a dictionary\n",
    "        self.t['1_s'] = 5\n",
    "        self.t['1_e'] = 2\n",
    "        self.t['2_w'] = 1\n",
    "        self.t['2_e'] = 3\n",
    "        self.t['3_s'] = 6\n",
    "        self.t['3_w'] = 2\n",
    "        self.t['3_e'] = 4\n",
    "        self.t['4_w'] = 3\n",
    "        self.t['4_s'] = 7\n",
    "        self.t['5_n'] = 1\n",
    "        self.t['5_s'] = 8\n",
    "        self.t['6_n'] = 3\n",
    "        self.t['6_s'] = 10\n",
    "        self.t['6_e'] = 7\n",
    "        self.t['7_n'] = 4\n",
    "        self.t['7_s'] = 11 #-1\n",
    "        self.t['7_w'] = 6\n",
    "        self.t['8_n'] = 5\n",
    "        self.t['8_s'] = 12 #-1\n",
    "        self.t['8_e'] = 9\n",
    "        self.t['9_e'] = 10\n",
    "        self.t['9_s'] = 13\n",
    "        self.t['9_w'] = 8\n",
    "        self.t['10_n'] = 6\n",
    "        self.t['10_s'] = 14\n",
    "        self.t['10_w'] = 9\n",
    "        self.t['10_e'] = 11 #-1\n",
    "        self.t['11_n'] = 7\n",
    "        self.t['11_s'] = 15\n",
    "        self.t['11_w'] = 10\n",
    "        self.t['12_e'] = 13\n",
    "        self.t['12_n'] = 8\n",
    "        self.t['13_e'] = 14\n",
    "        self.t['13_w'] = 12 #-1\n",
    "        self.t['13_n'] = 9\n",
    "        self.t['14_e'] = 15 #+1\n",
    "        self.t['14_w'] = 13\n",
    "        self.t['14_n'] = 10\n",
    "        self.t['15_w'] = 14\n",
    "        self.t['15_n'] = 11\n",
    "        # the P matrix: P[s-1][a] = [prob, next_state, reward, done]\n",
    "        self.P=[\n",
    "            [[0,1,0,0],[0.5,2,0,0],[0.5,5,0,0],[0,1,0,0]],#state 1\n",
    "            [[0,2,0,0],[0.5,3,0,0],[0,2,0,0],[0.5,1,0,0]],#state 2\n",
    "            [[0,3,0,0],[0.33,4,0,0],[0.33,6,0,0],[0.33,2,0,0]],#state 3\n",
    "            [[0,4,0,0],[0,4,0,0],[0.5,7,0,0],[0.5,3,0,0]],#sate 4\n",
    "            [[0.5,1,0,0],[0,5,0,0],[0.5,8,0,0],[0,5,0,0]],#state 5\n",
    "            [[0.33,3,0,0],[0.33,7,0,0],[0.33,10,0,0],[0,1,0,0]],#state 6\n",
    "            [[0.33,4,0,0],[0,7,0,0],[0.33,11,-1,1],[0.33,6,0,0]],#state 7\n",
    "            [[0.33,5,0,0],[0.33,9,0,0],[0.33,12,-1,1],[0,8,0,0]],#state 8\n",
    "            [[0,9,0,0],[0.33,10,0,0],[0.33,13,0,0],[0.33,8,0,0]],#state 9\n",
    "            [[0.25,6,0,0],[0.25,11,-1,1],[0.25,14,0,0],[0.25,9,0,0]],#state 10\n",
    "            [[0.33,7,0,0],[0,11,0,0],[0.33,15,0,0],[0.33,10,0,0]],#state 11\n",
    "            [[0.5,8,0,0],[0.5,13,0,0],[0,12,0,0],[0,12,0,0]],#state 12\n",
    "            [[0.33,9,0,0],[0.33,14,0,0],[0,13,0,0],[0.33,12,-1,1]],#state 13\n",
    "            [[0.33,10,0,0],[0.33,15,1,1],[0,14,0,0],[0.33,13,0,0]],#state 14\n",
    "            [[0.5,11,0,0],[0,15,0,0],[0,15,0,0],[0.5,14,0,0]]#state 15\n",
    "        ]\n",
    "\n",
    "        self.gamma = 0.8  # discount factor\n",
    "        self.viewer = None\n",
    "        self.state = None\n",
    "\n",
    "    def seed(self, seed=None):\n",
    "        self.np_random, seed = seeding.np_random(seed)\n",
    "        return [seed]\n",
    "\n",
    "    def getTerminal(self):\n",
    "        return self.terminate_states\n",
    "\n",
    "    def getGamma(self):\n",
    "        return self.gamma\n",
    "\n",
    "    def getStates(self):\n",
    "        return self.states\n",
    "\n",
    "    def getAction(self):\n",
    "        return self.actions\n",
    "    def getTerminate_states(self):\n",
    "        return self.terminate_states\n",
    "    def setAction(self,s):\n",
    "        self.state=s\n",
    "    def get_action_name(self,a):\n",
    "        return self.actions[a]\n",
    "    def step(self, action):\n",
    "        # the system's current state\n",
    "        state = self.state\n",
    "        if state in self.terminate_states:\n",
    "            return state, 0, True, {}\n",
    "        key = \"%d_%s\" % (state, action)  # the state and action form the dictionary key\n",
    "\n",
    "        # state transition\n",
    "        if key in self.t:\n",
    "            next_state = self.t[key]\n",
    "        else:\n",
    "            next_state = state\n",
    "        self.state = next_state\n",
    "\n",
    "        is_terminal = False\n",
    "\n",
    "        if next_state in self.terminate_states:\n",
    "            is_terminal = True\n",
    "\n",
    "        if key not in self.rewards:\n",
    "            r = 0.0\n",
    "        else:\n",
    "            r = self.rewards[key]\n",
    "\n",
    "\n",
    "        return next_state, r, is_terminal, {}\n",
    "    def reset(self):\n",
    "        self.state = self.states[int(random.random() * len(self.states))]\n",
    "        while self.state in self.terminate_states:\n",
    "            #print('final state: ',self.state)\n",
    "            self.state = self.states[int(random.random() * len(self.states))]\n",
    "        return self.state\n",
    "    def render(self, mode='human', close=False):\n",
    "        if close:\n",
    "            if self.viewer is not None:\n",
    "                self.viewer.close()\n",
    "                self.viewer = None\n",
    "            return\n",
    "        screen_width = 600\n",
    "        screen_height = 400\n",
    "\n",
    "        if self.viewer is None:\n",
    "            from gym.envs.classic_control import rendering\n",
    "            self.viewer = rendering.Viewer(screen_width, screen_height)\n",
    "            # create the grid world\n",
    "            self.line1 = rendering.Line((40, 360), (560,360))\n",
    "            self.line2 = rendering.Line((40, 280), (560, 280))\n",
    "            self.line3 = rendering.Line((40, 200), (560, 200))\n",
    "            self.line4 = rendering.Line((40, 120), (560, 120))\n",
    "            self.line5 = rendering.Line((40, 40), (560, 40))\n",
    "            self.line6 = rendering.Line((40, 360), (40, 40))\n",
    "            self.line7 = rendering.Line((170, 360), (170, 40))\n",
    "            self.line8 = rendering.Line((300, 360), (300, 40))\n",
    "            self.line9 = rendering.Line((430, 360), (430, 40))\n",
    "            self.line10 = rendering.Line((560, 360), (560, 40))\n",
    "            #self.line11 = rendering.Line((420, 100), (500, 100))\n",
    "            # create the stone pillar\n",
    "            self.shizhu1 = rendering.make_circle(30)  # radius 30\n",
    "            self.circletrans = rendering.Transform(translation=(235, 240))  # center\n",
    "            self.shizhu1.add_attr(self.circletrans)\n",
    "            self.shizhu1.set_color(0.5, 0.5, 0.5)\n",
    "            # create the first fire pit\n",
    "            self.kulo1 = rendering.make_circle(30)  # radius 30\n",
    "            self.circletrans = rendering.Transform(translation=(495, 160))  # center\n",
    "            self.kulo1.add_attr(self.circletrans)\n",
    "            self.kulo1.set_color(1, 0, 0)\n",
    "            # create the second fire pit\n",
    "            self.kulo2 = rendering.make_circle(30)\n",
    "            self.circletrans = rendering.Transform(translation=(105, 80))\n",
    "            self.kulo2.add_attr(self.circletrans)\n",
    "            self.kulo2.set_color(1, 0, 0)\n",
    "            # create the gold\n",
    "            self.gold = rendering.make_circle(30)\n",
    "            self.circletrans = rendering.Transform(translation=(495, 80))\n",
    "            self.gold.add_attr(self.circletrans)\n",
    "            self.gold.set_color(0, 0, 1)\n",
    "            # create the robot\n",
    "            self.robot = rendering.make_circle(20)\n",
    "            self.robotrans = rendering.Transform()\n",
    "            self.robot.add_attr(self.robotrans)\n",
    "            self.robot.set_color(0.8, 0.6, 0.4)\n",
    "\n",
    "            self.line1.set_color(0, 0, 0)\n",
    "            self.line2.set_color(0, 0, 0)\n",
    "            self.line3.set_color(0, 0, 0)\n",
    "            self.line4.set_color(0, 0, 0)\n",
    "            self.line5.set_color(0, 0, 0)\n",
    "            self.line6.set_color(0, 0, 0)\n",
    "            self.line7.set_color(0, 0, 0)\n",
    "            self.line8.set_color(0, 0, 0)\n",
    "            self.line9.set_color(0, 0, 0)\n",
    "            self.line10.set_color(0, 0, 0)\n",
    "            #self.line11.set_color(0, 0, 0)\n",
    "\n",
    "            self.viewer.add_geom(self.line1)\n",
    "            self.viewer.add_geom(self.line2)\n",
    "            self.viewer.add_geom(self.line3)\n",
    "            self.viewer.add_geom(self.line4)\n",
    "            self.viewer.add_geom(self.line5)\n",
    "            self.viewer.add_geom(self.line6)\n",
    "            self.viewer.add_geom(self.line7)\n",
    "            self.viewer.add_geom(self.line8)\n",
    "            self.viewer.add_geom(self.line9)\n",
    "            self.viewer.add_geom(self.line10)\n",
    "            self.viewer.add_geom(self.shizhu1)\n",
    "            self.viewer.add_geom(self.kulo1)\n",
    "            self.viewer.add_geom(self.kulo2)\n",
    "            self.viewer.add_geom(self.gold)\n",
    "            self.viewer.add_geom(self.robot)\n",
    "\n",
    "        if self.state is None: return None\n",
    "        self.robotrans.set_translation(self.x[self.state - 1], self.y[self.state - 1])\n",
    "\n",
    "        return self.viewer.render(return_rgb_array=mode == 'rgb_array')\n",
    "\n",
    "    def close(self):\n",
    "        if self.viewer is not None:\n",
    "            self.viewer.close()\n",
    "            self.viewer = None"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
