{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 机器读心术之神经网络与深度学习第10课书面作业\n",
    "学号：207567\n",
    "\n",
    "**书面作业：**  \n",
    "1. 画出漂亮的卷积神经网络结构图是件赏心悦目的事情，试寻找合适的神经网络可视化工具，根据《Mastering the Game of Go with Deep Neural Networks and Tree Search》一文中第27页所描述的策略网络结构，画出其具体结构  \n",
    "2. （可选）在以下链接有AlphaGo的一套山寨版源代码（Deepmind的源代码是不公开的）  \n",
    "https://github.com/Rochester-NRT/RocAlphaGo  \n",
    "试部署并运行测试之，抓图整个过程，对代码的质量给出你的评论"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 第1题\n",
    "画出漂亮的卷积神经网络结构图是件赏心悦目的事情，试寻找合适的神经网络可视化工具，根据《Mastering the Game of Go with Deep Neural Networks and Tree Search》一文中第27页所描述的策略网络结构，画出其具体结构"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**答：**  \n",
    "采用如下工具：https://netron.app/  \n",
    "这个工具是在线的，可以将模型数据传入这个工具就能画出漂亮的工具。  \n",
    "模型采用了第2题中的模型代码，保存为*.h5文件，再将文件转入上面的在线工具得到如下模型图：\n",
    "![my_model.h5](https://gitee.com/dotzhen/cloud-notes/raw/master/my_model.h5.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 第2题\n",
    "（可选）在以下链接有AlphaGo的一套山寨版源代码（Deepmind的源代码是不公开的）  \n",
    "https://github.com/Rochester-NRT/RocAlphaGo  \n",
    "试部署并运行测试之，抓图整个过程，对代码的质量给出你的评论"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**答：**  \n",
    "* 首先，题目给出的链接已经失效。  \n",
    "* 在Github上还能找到RocAlphaGO复制项目，但是：  \n",
    "  * 这个项目需要tensorflow 1.2，这个古老的东西在pip上都找不到；  \n",
    "  * 这个项目是基于python 2.x的。  \n",
    "  * 因此，去部署研究一下RocAlphaGO我实现**没有兴趣**！！！  \n",
    "* 我在网上找了一下，有更好的复刻AlphaGo的源代码存在，为啥不部署和研读一下呢。  \n",
    "  * 这个链接是：https://github.com/maxpumperla/deep_learning_and_the_game_of_go  \n",
    "  * 制作这个源代码的两个作者为此还出了本书就叫“deep learning and the game of go”，还有发行的中文译本叫《深度学习和围棋》  \n",
    "  * 第2题目我基于这个项目来做的。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 部署"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 需要安装dlgo包，这个包也是他们俩开发的(pip install dlgo)  \n",
    "2. dlgo包中对于tensorflow和keras版本有要求的，但是没有文档说明，我试了一下，如果把tensorflow和keras降到2.4版本可以运行了。\n",
    "3. 我试了一下训练$SL(P_{\\sigma})$网络，可以成功运行，截图如下：\n",
    "\n",
    "![mind10-1](https://gitee.com/dotzhen/cloud-notes/raw/master/mind10-1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 关于模型输入的说明"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "课堂上，老师对于输入数据48个特征平面有些没有看懂，在这本《深度学习和围棋》书中，有比较清晰的说明：\n",
    "| 特征名称   | 平面数量 | 说明                                                         |\n",
    "| ---------- | -------- | ------------------------------------------------------------ |\n",
    "| 执子颜色   | 3        | 3个特征平面分别代表当前执子方、对手方、以及棋盘上的空点的棋子颜色。 |\n",
    "| 1          | 1        | 一个全部填入值1的特征平面                                    |\n",
    "| 0          | 1        | 一个全部填入值0的特征平面                                    |\n",
    "| 明智度     | 1        | 一个动作如果合法，且不会填补当前 棋手的眼，则会在平面上填入1，否则填入0 |\n",
    "| 动作回合数 | 8        | 这个集合有8个二元平面，代表一个动作落子离现在有多少个回合    |\n",
    "| 气数       | 8        | 当前 动作所在的棋链的气数，也分为8个二元平面                 |\n",
    "| 动作后气数 | 8        | 如果这个动作执行了之后，还会剩多少口气                       |\n",
    "| 吃子数     | 8        | 这个动作会吃掉多少颗对方棋子                                 |\n",
    "| 自劫争数   | 8        | 如果这个动作执行之后，有多少己方的棋子会陷入劫争，可能 在下一回合被对方提走 |\n",
    "| 征子提子   | 1        | 这颗棋子是否会被通过征子吃掉                                 |\n",
    "| 引征       | 1        | 这颗棋子是否能够逃出一个可能的征子局面                       |\n",
    "| 当前执子方 | 1        | 如果当前执子方是黑子，整个平面填入1；如果是白子，则填入0     |"
   ]
  },
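  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The eight-plane counting features above (turns since, liberties, capture size, and so on) all use the same one-hot scheme: a count of $n$ sets plane $\\min(n, 8) - 1$ to 1 and leaves the rest at 0, so all counts of 8 or more share the last plane. A minimal sketch of that encoding for a single board point (illustrative only, not the actual dlgo encoder API):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def one_hot_planes(count, num_planes=8):\n",
    "    # Encode a non-negative count as num_planes binary values:\n",
    "    # plane min(count, num_planes) - 1 is set; a count of 0 sets nothing.\n",
    "    planes = [0] * num_planes\n",
    "    if count > 0:\n",
    "        planes[min(count, num_planes) - 1] = 1\n",
    "    return planes\n",
    "\n",
    "print(one_hot_planes(3))   # a string with 3 liberties\n",
    "print(one_hot_planes(12))  # counts of 8 or more share the last plane"
   ]
  },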
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 模型实现\n",
    "第1题中的模型由下面的代码产生："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "from dlgo.networks.alphago import alphago_model\n",
    "from dlgo.encoders.alphago import AlphaGoEncoder\n",
    "\n",
    "rows, cols = 19, 19\n",
    "num_classes = rows * cols\n",
    "num_games = 10000\n",
    "\n",
    "encoder = AlphaGoEncoder(use_player_plane=False)\n",
    "input_shape = (encoder.num_planes, rows, cols)\n",
    "alphago_sl_policy = alphago_model(input_shape, is_policy_net=True)\n",
    "alphago_sl_policy.save('my_model.h5')"
   ]
  },
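  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the saved model, the parameter count the diagram should show can be computed by hand. Assuming 48 input planes and the layer sizes used by `alphago_model` (one 5×5 convolution with 192 filters, ten 3×3 convolutions with 192 filters, and a 1×1 single-filter policy head, all with biases), the arithmetic is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "in_planes, filters = 48, 192\n",
    "\n",
    "first = 5 * 5 * in_planes * filters + filters        # 5x5 conv, 192 filters\n",
    "middle = 10 * (3 * 3 * filters * filters + filters)  # ten 3x3 convs\n",
    "head = 1 * 1 * filters * 1 + 1                       # 1x1 policy head\n",
    "total = first + middle + head\n",
    "print(total)  # compare against alphago_sl_policy.count_params()"
   ]
  },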
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 评论\n",
    "我看了一下dlgo的代码，整体封装是相当不错的，比如如下的模型代码，这个模型代码将策略网络、评估网络的定义高度融合，共享度很高。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def alphago_model(input_shape, is_policy_net=False,  # <1>\n",
    "                  num_filters=192,  # <2>\n",
    "                  first_kernel_size=5,\n",
    "                  other_kernel_size=3):  # <3>\n",
    "\n",
    "    model = Sequential()\n",
    "    model.add(\n",
    "        Conv2D(num_filters, first_kernel_size, input_shape=input_shape, padding='same',\n",
    "               data_format='channels_first', activation='relu'))\n",
    "\n",
    "    for i in range(2, 12):  # <4>\n",
    "        model.add(\n",
    "            Conv2D(num_filters, other_kernel_size, padding='same',\n",
    "                   data_format='channels_first', activation='relu'))\n",
    "# <1> With this boolean flag you specify if you want a policy or value network\n",
    "# <2> All but the last convolutional layers have the same number of filters\n",
    "# <3> The first layer has kernel size 5, all others only 3.\n",
    "# <4> The first 12 layers of AlphaGo's policy and value network are identical.\n",
    "# end::alphago_base[]\n",
    "\n",
    "# tag::alphago_policy[]\n",
    "    if is_policy_net:\n",
    "        model.add(\n",
    "            Conv2D(filters=1, kernel_size=1, padding='same',\n",
    "                   data_format='channels_first', activation='softmax'))\n",
    "        model.add(Flatten())\n",
    "        return model\n",
    "# end::alphago_policy[]\n",
    "\n",
    "# tag::alphago_value[]\n",
    "    else:\n",
    "        model.add(\n",
    "            Conv2D(num_filters, other_kernel_size, padding='same',\n",
    "                   data_format='channels_first', activation='relu'))\n",
    "        model.add(\n",
    "            Conv2D(filters=1, kernel_size=1, padding='same',\n",
    "                   data_format='channels_first', activation='relu'))\n",
    "        model.add(Flatten())\n",
    "        model.add(Dense(256, activation='relu'))\n",
    "        model.add(Dense(1, activation='tanh'))\n",
    "        return model"
   ]
  },
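  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `is_policy_net` flag ultimately selects between two kinds of output: the policy head produces a softmax distribution over the 361 board points, while the value head squashes its score through `tanh` into (-1, 1). A stdlib-only toy illustration of the two output types (not dlgo code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def softmax(logits):\n",
    "    # Numerically stable softmax: turns raw scores into move probabilities\n",
    "    m = max(logits)\n",
    "    exps = [math.exp(x - m) for x in logits]\n",
    "    total = sum(exps)\n",
    "    return [e / total for e in exps]\n",
    "\n",
    "policy = softmax([0.0] * 361)  # one logit per point of a 19x19 board\n",
    "value = math.tanh(0.5)         # value head output lies in (-1, 1)\n",
    "\n",
    "print(sum(policy))     # probabilities sum to 1\n",
    "print(-1 < value < 1)"
   ]
  },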
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "整体上看这款软件质量比较高，结合他们出的书一起看是学习AlphaGo的很好地参考。"
   ]
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "bd60df88caa55b5780060e02ff93c702a612ebf36325d537487b60a073bcad27"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
