{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Report -AI自动下五子棋\n",
    "* 姓名：刘晓冉\n",
    "* 学号：2021300706"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 任务简介\n",
    "选择一个游戏（俄罗斯方块、五子棋其中一个）实现计算机自动玩游戏：通过构建游戏仿真环境，并研究强化学习方法，让计算机自动计算最优的策略，从而实现让计算机自动玩。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 解决途径  \n",
    "## 一、对问题的分析与思考  \n",
    " \n",
    "使用强化学习算法DQN实现五子棋自动对弈，DQN是在Q-learning的基础上进行得到的。对于五子棋来说，就相当于在整个棋盘上制作出一个Q表格，在这个表格中的每个位置上有三种状态，分别为黑子，白子和无子。所以当棋盘较大时，会导致数据量较大。这里使用了$8*8$的棋盘大小，保证合适的数据量。"
   ]
  },
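  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough back-of-the-envelope estimate (mine, not from the original code) of why a tabular Q-table is infeasible on larger boards: with three possible states per cell, even an $8\\times 8$ board admits up to\n",
    "\n",
    "$$3^{64} \\approx 3.4 \\times 10^{30}$$\n",
    "\n",
    "distinct configurations, far too many to enumerate in a table. This is why DQN approximates the Q-values with a neural network instead of storing them explicitly."
   ]
  },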
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 二、模型的构建过程  \n",
    "### 1、游戏棋盘的搭建  \n",
    "游戏的棋盘设计与自动对弈部分关系不大，在这里采用了格外定义函数来处理，不再进行额外的介绍，详见Map.py\n",
    "\n",
    "### 2、思路简单介绍\n",
    "在两个对手进行对弈的这个过程中，对于整张棋盘的Q表格来说，对手落子对己方落子也在产生影响。在零和博弈的这种思路下，将对手的Q表格乘以符号来训练Q值，这就相当于对手的最坏状况为己方的最好状况，己方的最坏状况为对方的最好状况。"
   ]
  },
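  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Written out in my own notation (the symbols are not from the code, but the discount factor matches the 0.9 used later), the training target for the side to move is\n",
    "\n",
    "$$y = r - 0.9 \\, \\max_{a'} Q_{\\mathrm{opp}}(s', a'),$$\n",
    "\n",
    "that is, the reward plus the opponent's best attainable value with its sign flipped, so maximizing one's own Q-value is equivalent to minimizing the opponent's."
   ]
  },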
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 三、代码部分"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From c:\\Users\\lxh\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "non-resource variables are not supported in the long term\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import random\n",
    "import os\n",
    "import tensorflow.compat.v1 as tf\n",
    "\n",
    "tf.disable_v2_behavior()\n",
    "\n",
    "import Map"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1、代码"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from e:\\jupyter\\homework\\homework_report_wuziqi\\Saver\\cnnsaver.ckpt-0\n",
      "白旗走\n",
      "2,7\n",
      "此位置的价值为：0.28471518\n",
      "黑旗走\n",
      "4,7\n",
      "此位置的价值为：0.32274377\n",
      "白旗走\n",
      "3,7\n",
      "此位置的价值为：0.41095582\n",
      "黑旗走\n",
      "1,7\n",
      "此位置的价值为：0.40710226\n",
      "白旗走\n",
      "7,3\n",
      "此位置的价值为：0.47723356\n",
      "黑旗走\n",
      "3,6\n",
      "此位置的价值为：0.5211922\n",
      "白旗走\n",
      "7,4\n",
      "此位置的价值为：0.52479154\n",
      "黑旗走\n",
      "1,6\n",
      "此位置的价值为：0.5870025\n",
      "白旗走\n",
      "5,7\n",
      "此位置的价值为：0.50184983\n",
      "黑旗走\n",
      "7,1\n",
      "此位置的价值为：0.6585838\n",
      "白旗走\n",
      "1,5\n",
      "此位置的价值为：0.55666476\n",
      "黑旗走\n",
      "7,2\n",
      "此位置的价值为：0.6213395\n",
      "白旗走\n",
      "4,5\n",
      "此位置的价值为：0.60868675\n",
      "黑旗走\n",
      "6,4\n",
      "此位置的价值为：0.5949437\n",
      "白旗走\n",
      "2,6\n",
      "此位置的价值为：0.5838336\n",
      "黑旗走\n",
      "1,0\n",
      "此位置的价值为：0.6707801\n",
      "白旗走\n",
      "6,2\n",
      "此位置的价值为：0.6158268\n",
      "黑旗走\n",
      "6,3\n",
      "此位置的价值为：0.6440792\n",
      "白旗走\n",
      "2,5\n",
      "此位置的价值为：0.6872608\n",
      "黑旗走\n",
      "3,5\n",
      "此位置的价值为：0.6628415\n",
      "白旗走\n",
      "3,2\n",
      "此位置的价值为：0.7015381\n",
      "黑旗走\n",
      "4,6\n",
      "此位置的价值为：0.64727867\n",
      "白旗走\n",
      "4,2\n",
      "此位置的价值为：0.78278697\n",
      "黑旗走\n",
      "2,0\n",
      "此位置的价值为：0.6542281\n",
      "白旗走\n",
      "3,3\n",
      "此位置的价值为：0.81500995\n",
      "黑旗走\n",
      "6,6\n",
      "此位置的价值为：0.67197436\n",
      "白旗走\n",
      "2,1\n",
      "此位置的价值为：0.7944778\n",
      "黑旗走\n",
      "5,6\n",
      "此位置的价值为：0.65615165\n",
      "白旗走\n",
      "1,1\n",
      "此位置的价值为：0.8336434\n",
      "黑旗走\n",
      "0,0\n",
      "此位置的价值为：0.65259665\n",
      "白旗走\n",
      "0,7\n",
      "此位置的价值为：0.8267085\n",
      "黑旗走\n",
      "6,1\n",
      "此位置的价值为：0.71104985\n",
      "白旗走\n",
      "4,4\n",
      "此位置的价值为：0.8469398\n",
      "黑旗走\n",
      "4,0\n",
      "此位置的价值为：0.7815458\n",
      "白旗走\n",
      "5,4\n",
      "此位置的价值为：0.8490913\n",
      "黑旗走\n",
      "6,0\n",
      "此位置的价值为：0.7582147\n",
      "白旗走\n",
      "0,3\n",
      "此位置的价值为：0.82074\n",
      "黑旗走\n",
      "2,4\n",
      "此位置的价值为：0.7593221\n",
      "白旗走\n",
      "0,2\n",
      "此位置的价值为：0.8286858\n",
      "黑旗走\n",
      "3,1\n",
      "此位置的价值为：0.75341594\n",
      "白旗走\n",
      "0,1\n",
      "此位置的价值为：0.820767\n",
      "黑旗走\n",
      "0,6\n",
      "此位置的价值为：0.7604304\n",
      "白旗走\n",
      "1,4\n",
      "此位置的价值为：0.8279803\n",
      "黑旗走\n",
      "6,5\n",
      "此位置的价值为：0.7871984\n",
      "白旗走\n",
      "2,3\n",
      "此位置的价值为：0.8569743\n",
      "黑旗走\n",
      "6,7\n",
      "此位置的价值为：0.81069463\n",
      "黑方已经五连，黑方赢\n",
      "train avg loss 1.6553086042404175\n"
     ]
    }
   ],
   "source": [
    "class DQN():\n",
    "    def __init__(self):\n",
    "        self.n_input = Map.mapsize * Map.mapsize\n",
    "        self.n_output = 1\n",
    "        self.current_q_step = 0\n",
    "        self.avg_loss = 0\n",
    "        # placeholder是在神经网络构建graph的时候在模型中的占位，此时并没有把要输入的数据传入模型，它只会分配必要的内存。\n",
    "        # 建立完session后，在会话中，运行模型的时候通过feed_dict()函数向占位符喂入数据。\n",
    "        self.x = tf.placeholder(\"float\", [None, Map.mapsize, Map.mapsize], name='x')\n",
    "        self.y = tf.placeholder(\"float\", [None, self.n_output], name='y')\n",
    "        self.create_Q_network()\n",
    "        self.create_training_method()\n",
    "        self.saver = tf.train.Saver()\n",
    "        self.sess = tf.Session()\n",
    "        # 它能让你在运行图的时候，插入一些计算图\n",
    "        self.sess = tf.InteractiveSession()\n",
    "        self.sess.run(tf.global_variables_initializer())\n",
    "\n",
    "    def create_Q_network(self):\n",
    "        # tf.random_normal()函数用于从“服从指定正态分布的序列”中随机取出指定个数的值。  stddev: 正态分布的标准差\n",
    "        wc1 = tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1), dtype=tf.float32, name='wc1')\n",
    "        wc2 = tf.Variable(tf.random_normal([3, 3, 64, 128], stddev=0.1), dtype=tf.float32, name='wc2')\n",
    "        wc3 = tf.Variable(tf.random_normal([3, 3, 128, 256], stddev=0.1), dtype=tf.float32, name='wc3')\n",
    "        wd1 = tf.Variable(tf.random_normal([256, 128], stddev=0.1), dtype=tf.float32, name='wd1')\n",
    "        wd2 = tf.Variable(tf.random_normal([128, self.n_output], stddev=0.1), dtype=tf.float32, name='wd2')\n",
    "        # tf.Variable 得到的是张量，而张量并不是具体的值，而是计算过程\n",
    "        bc1 = tf.Variable(tf.random_normal([64], stddev=0.1), dtype=tf.float32, name='bc1')\n",
    "        bc2 = tf.Variable(tf.random_normal([128], stddev=0.1), dtype=tf.float32, name='bc2')\n",
    "        bc3 = tf.Variable(tf.random_normal([256], stddev=0.1), dtype=tf.float32, name='bc3')\n",
    "        bd1 = tf.Variable(tf.random_normal([128], stddev=0.1), dtype=tf.float32, name='bd1')\n",
    "        bd2 = tf.Variable(tf.random_normal([self.n_output], stddev=0.1), dtype=tf.float32, name='bd2')\n",
    "\n",
    "        weights = {\n",
    "            'wc1': wc1,\n",
    "            'wc2': wc2,\n",
    "            'wc3': wc3,\n",
    "            'wd1': wd1,\n",
    "            'wd2': wd2\n",
    "        }\n",
    "\n",
    "        biases = {\n",
    "            'bc1': bc1,\n",
    "            'bc2': bc2,\n",
    "            'bc3': bc3,\n",
    "            'bd1': bd1,\n",
    "            'bd2': bd2\n",
    "        }\n",
    "\n",
    "        self.Q_value = self.conv_basic(self.x, weights, biases)\n",
    "        self.Q_Weihgts = [weights, biases]\n",
    "\n",
    "    def conv_basic(self, _input, _w, _b):\n",
    "        # input\n",
    "        _out = tf.reshape(_input, shape=[-1, Map.mapsize, Map.mapsize, 1])\n",
    "        # conv layer 1  conv2d 用于做二维卷积  strides, # 步长参数  padding, # 卷积方式\n",
    "        _out = tf.nn.conv2d(_out, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')\n",
    "        # bias_add 一个叫bias的向量加到一个叫value的矩阵上，是向量与矩阵的每一行进行相加\n",
    "        _out = tf.nn.relu(tf.nn.bias_add(_out, _b['bc1']))\n",
    "        # ksize 池化窗口的大小，取一个四维向量  padding： 填充的方法，SAME或VALID，SAME表示添加全0填充，VALID表示不添加\n",
    "        _out = tf.nn.max_pool(_out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n",
    "        # conv layer2\n",
    "        _out = tf.nn.conv2d(_out, _w['wc2'], strides=[1, 1, 1, 1], padding='SAME')\n",
    "        _out = tf.nn.relu(tf.nn.bias_add(_out, _b['bc2']))\n",
    "        _out = tf.nn.max_pool(_out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n",
    "        # conv layer3\n",
    "        _out = tf.nn.conv2d(_out, _w['wc3'], strides=[1, 1, 1, 1], padding='SAME')\n",
    "        _out = tf.nn.relu(tf.nn.bias_add(_out, _b['bc3']))\n",
    "        # 计算张量tensor沿着指定的数轴（tensor的某一维度）上的的平均值，主要用作降维或者计算tensor（图像）的平均值。\n",
    "        _out = tf.reduce_mean(_out, [1, 2])\n",
    "        # fully connected layer1 matmul 两个矩阵中对应元素各自相乘\n",
    "        _out = tf.nn.relu(tf.add(tf.matmul(_out, _w['wd1']), _b['bd1']))\n",
    "        # fully connected layer2\n",
    "        _out = tf.add(tf.matmul(_out, _w['wd2']), _b['bd2'])\n",
    "        return _out\n",
    "\n",
    "    def create_training_method(self):\n",
    "        # squared_difference 计算张量 x、y 对应元素差平方\n",
    "        self.cost = tf.reduce_mean(tf.squared_difference(self.Q_value, self.y))\n",
    "        self.optm = tf.train.AdamOptimizer(learning_rate=0.001, name='Adam').minimize(self.cost)\n",
    "\n",
    "    def restore(self):\n",
    "        if os.path.exists('Saver/cnnsaver.ckpt-0.index'):\n",
    "            self.saver.restore(self.sess, os.path.abspath('Saver/cnnsaver.ckpt-0'))\n",
    "\n",
    "    def computerPlay(self, IsTurnWhite):\n",
    "        if IsTurnWhite:\n",
    "            print('白旗走')\n",
    "            # 如果该白旗走的话 用黑的棋盘，1代表黑，-1代表白\n",
    "            board = np.array(Map.blackBoard)\n",
    "        else:\n",
    "            print('黑旗走')\n",
    "            # 如果该黑旗走的话 用白的棋盘 1代表白，-1代表黑\n",
    "            board = np.array(Map.whiteBoard)\n",
    "        # 建立所有可下位置的数组，每下一个位置一个数组\n",
    "        boards = []\n",
    "        # 当前棋谱中空白的地方\n",
    "        positions = []\n",
    "        for i in range(Map.mapsize):\n",
    "            for j in range(Map.mapsize):\n",
    "                # 如果这个当前棋谱这个位置是空白的\n",
    "                if board[j][i] == Map.backcode:\n",
    "                    predx = np.copy(board)\n",
    "                    # -1代表自己，更方便计算\n",
    "                    predx[j][i] = -1\n",
    "                    boards.append(predx)\n",
    "                    positions.append([i, j])\n",
    "        if len(positions) == 0:\n",
    "            return 0, 0, 0\n",
    "        # 计算所有可下的位置的价值\n",
    "        nextStep = self.sess.run(self.Q_value, feed_dict={self.x: boards})\n",
    "        maxx = 0\n",
    "        maxy = 0\n",
    "        maxValue = -1000  # 实际最大价值  用于后续学习\n",
    "        # 从所有可下的地方找一个价值最大的位置下棋\n",
    "        for i in range(len(positions)):\n",
    "            value = nextStep[i] + random.randint(0, 10) / 1000  # 如果没有最优步子 则随机选择一步\n",
    "            if value > maxValue:\n",
    "                maxValue = value\n",
    "                maxx = positions[i][0]\n",
    "                maxy = positions[i][1]\n",
    "        print(str(maxx) + ',' + str(maxy))\n",
    "        print('此位置的价值为：' + str(maxValue[0]))\n",
    "        return maxx, maxy, maxValue\n",
    "\n",
    "    # 下完了一局就更新一下AI模型\n",
    "    def TrainOnce(self, winner):\n",
    "        # 记录棋图\n",
    "        # board1 白棋 board2 黑棋\n",
    "        board1 = np.array(Map.mapRecords1)\n",
    "        board2 = np.array(Map.mapRecords2)\n",
    "        # 记录棋步\n",
    "        step1 = np.array(Map.stepRecords1)\n",
    "        step2 = np.array(Map.stepRecords2)\n",
    "        # 记录得分\n",
    "        scoreR1 = np.array(Map.scoreRecords1)\n",
    "        scoreR2 = np.array(Map.scoreRecords2)\n",
    "        board1 = np.reshape(board1, [-1, Map.mapsize, Map.mapsize])\n",
    "        board2 = np.reshape(board2, [-1, Map.mapsize, Map.mapsize])\n",
    "        step1 = np.reshape(step1, [-1, Map.mapsize, Map.mapsize])\n",
    "        step2 = np.reshape(step2, [-1, Map.mapsize, Map.mapsize])\n",
    "\n",
    "        score1 = []\n",
    "        score2 = []\n",
    "\n",
    "        board1 = (board1 * (1 - step1)) + step1 * Map.blackcode\n",
    "        board2 = (board2 * (1 - step2)) + step2 * Map.blackcode\n",
    "        # 每步的价值 = 奖励（胜1 负-0.9） + 对方棋盘能达到的最大价值（max taget Q） * （-0.9）\n",
    "        for i in range(len(board1)):\n",
    "            if i == len(scoreR2):  # 白方已经五连  白方赢\n",
    "                print('白方已经五连，白方赢')\n",
    "                score1.append([1.0])  # 白方的最后一步获得1分奖励\n",
    "            else:\n",
    "                # 白方的价值为：黑方棋盘能达到的最大价值（max taget Q） * （-0.9）\n",
    "                score1.append([scoreR2[i][0] * -0.9])\n",
    "        if winner == 2:\n",
    "            print('惩罚白方的最后一步，将其价值设为 -0.9')\n",
    "            score1[len(score1) - 1][0] = -0.9\n",
    "\n",
    "        # 1 白棋 2 黑棋\n",
    "        for i in range(len(board2)):\n",
    "            if i == len(scoreR1) - 1:  # 黑方赢\n",
    "                print('黑方已经五连，黑方赢')\n",
    "                score2.append([1.0])\n",
    "            else:\n",
    "                # 黑棋的得分为：白方棋盘能达到的最大价值（max taget Q） * （-0.9）\n",
    "                score2.append([scoreR1[i + 1][0] * -0.9])\n",
    "        if winner == 1:\n",
    "            print('惩罚黑方的最后一步，将其价值设为 -0.9')\n",
    "            # 惩罚黑方的最后一步\n",
    "            score2[len(score2) - 1][0] = -0.9\n",
    "\n",
    "        # 一次完成多个数组的拼接\n",
    "        borders = np.concatenate([board1, board2], axis=0)\n",
    "        scores = np.concatenate([score1, score2], axis=0)\n",
    "        _, totalLoss = self.sess.run([self.optm, self.cost], feed_dict={self.x: borders,\n",
    "                                                                        self.y: scores})\n",
    "        self.avg_loss += totalLoss\n",
    "        print('train avg loss ' + str(self.avg_loss))\n",
    "        self.avg_loss = 0\n",
    "        # os.path.abspath取决于os.getcwd,如果是一个绝对路径，就返回，\n",
    "        self.saver.save(self.sess, os.path.abspath('Saver/cnnsaver.ckpt'), global_step=0)\n",
    "\n",
    "    def PlayWidthHuman(self):\n",
    "        # 读取历史存储的模型\n",
    "        self.restore()\n",
    "        Map.PlayWithComputer = self.computerPlay\n",
    "        Map.TrainNet = self.TrainOnce\n",
    "        Map.ShowWind()\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    dqn = DQN()\n",
    "    dqn.PlayWidthHuman()\n",
    "\n"
   ]
  },
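  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For clarity, the objective that `create_training_method` minimizes above is the mean squared error between the predicted values and the targets,\n",
    "\n",
    "$$L = \\frac{1}{N} \\sum_{i=1}^{N} \\left(Q(s_i) - y_i\\right)^2,$$\n",
    "\n",
    "optimized with Adam at a learning rate of 0.001."
   ]
  },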
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 总结\n",
    "代码主要使用DQN强化学习方法进行编程实现了人机对抗五子棋算法以及计算机自动实现五子棋算法。算法的特色是将整张棋盘作为Q表格，在棋盘的每个位置处进行价值计算。但是，由于数量\n",
    "的限制，棋盘大小不能过大，否则数据量过大会导致运行时间过长。代码实现了计算机自动对弈，其针对于对弈双方的Q表格进行了互逆取法，及对方Q表格取负来训练当前的Q值，以实现对抗的目的。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "e493fda9cc8926f25ba9fad514dad2b0cb451ea0b81fddc4e4958c171d4e7f04"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
